An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work on a three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.
Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.
2016-01-01
A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the nonlinear stability and control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made, but work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.
The Classroom, Board Room, Chat Room, and Court Room: School Computers at the Crossroads.
ERIC Educational Resources Information Center
Stewart, Michael
2000-01-01
In schools' efforts to maximize technology's benefits, ethical considerations have often taken a back seat. Computer misuse is growing exponentially and assuming many forms: unauthorized data access, hacking, piracy, information theft, fraud, virus creation, harassment, defamation, and discrimination. Integrated-learning activities will help…
Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors
2015-03-26
methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... [Table-of-contents fragments: 1.2.1 Past Methods of Experimental Evaluation; 1.2.2 Modeling Efforts; Other Considerations; 2.4 Monte Carlo Methods]
An Approach to Effortless Construction of Program Animations
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. Angel; Pareja-Flores, Cristobal; Urquiza-Fuentes, Jaime
2008-01-01
Program animation systems have not been as widely adopted by computer science educators as we might expect from the firm belief that they can help in enhancing computer science education. One of the most notable obstacles to their adoption is the considerable effort that the production of program animations represents for the instructor. We…
Using artificial intelligence to control fluid flow computations
NASA Technical Reports Server (NTRS)
Gelsey, Andrew
1992-01-01
Computational simulation is an essential tool for the prediction of fluid flow. Many powerful simulation programs exist today. However, using these programs to reliably analyze fluid flow and other physical situations requires considerable human effort and expertise to set up a simulation, determine whether the output makes sense, and repeatedly run the simulation with different inputs until a satisfactory result is achieved. Automating this process is not only of considerable practical importance but will also significantly advance basic artificial intelligence (AI) research in reasoning about the physical world.
London, Nir; Ambroggio, Xavier
2014-02-01
Computational protein design efforts aim to create novel proteins and functions in an automated manner and, in the process, these efforts shed light on the factors shaping natural proteins. The focus of these efforts has progressed from the interior of proteins to their surface and the design of functions, such as binding or catalysis. Here we examine progress in the development of robust methods for the computational design of non-natural interactions between proteins and molecular targets such as other proteins or small molecules. This problem is referred to as the de novo computational design of interactions. Recent successful efforts in de novo enzyme design and the de novo design of protein-protein interactions open a path towards solving this problem. We examine the common themes in these efforts, and review recent studies aimed at understanding the nature of successes and failures in the de novo computational design of interactions. While several approaches culminated in success, the use of a well-defined structural model for a specific binding interaction in particular has emerged as a key strategy for a successful design, and is therefore reviewed with special consideration.
Heterogeneous computing architecture for fast detection of SNP-SNP interactions.
Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros
2014-06-25
The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. The new MIC architecture, on the other hand, lags in raw performance but reduces the programming effort and compensates with a more general architecture suitable for a wider range of problems.
Automated Boundary Conditions for Wind Tunnel Simulations
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2018-01-01
Computational fluid dynamic (CFD) simulations of models tested in wind tunnels require a high level of fidelity and accuracy, particularly for the purposes of CFD validation efforts. Considerable effort is required to ensure both the proper characterization of the physical geometry of the wind tunnel and the recreation of the correct flow conditions inside it. The typical trial-and-error process used for determining the boundary condition values for a particular tunnel configuration is time and computer-resource intensive. This paper describes a method for calculating and updating the back pressure boundary condition in wind tunnel simulations by using a proportional-integral-derivative (PID) controller. The controller methodology and equations are discussed, and simulations using the controller to set a tunnel Mach number in the NASA Langley 14- by 22-Foot Subsonic Tunnel are demonstrated.
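The paper's controller equations are not reproduced in this excerpt. As a hedged illustration of the general approach only, a discrete PID loop driving an invented, purely algebraic tunnel model toward a target Mach number might look like the following (the plant model, gains, and variable names are all assumptions, not taken from the paper):

```python
# Hypothetical sketch: a discrete PID controller adjusting a back-pressure
# ratio until a toy wind-tunnel model reaches a target Mach number.
# The linear plant below and the gains are illustrative assumptions.

def toy_tunnel_mach(back_pressure_ratio):
    """Toy plant: lowering back pressure raises Mach (purely illustrative)."""
    return max(0.0, 1.2 * (1.0 - back_pressure_ratio))

def pid_set_mach(target_mach, kp=0.5, ki=0.3, kd=0.05, steps=200, dt=1.0):
    bp = 0.9           # initial back-pressure ratio (guess)
    integral = 0.0
    prev_err = 0.0
    for _ in range(steps):
        err = target_mach - toy_tunnel_mach(bp)
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        # Positive error (Mach too low) -> lower the back pressure.
        bp -= kp * err + ki * integral + kd * deriv
        bp = min(max(bp, 0.0), 1.0)
    return bp, toy_tunnel_mach(bp)

bp, mach = pid_set_mach(0.2)
```

With these (assumed) gains the loop settles on the back-pressure ratio that yields the requested Mach number; the actual paper applies the same feedback idea inside a full CFD simulation rather than an algebraic model.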
Design Considerations for Computer-Based Interactive Map Display Systems
1979-02-01
[Table-of-contents fragments: Five Dimensions for Map Display System Options; Summary of...] ...most advanced and exotic technologies: space, optical, computer, and graphic production; the focusing of vast organizational efforts; and the results... Information retrieval: "Where are all the radar sites in sector 12?," "What's the name of this hill?," "Where's the hill named B243?" Information storage...
Culturally Responsive Computing: A Theory Revisited
ERIC Educational Resources Information Center
Scott, Kimberly A.; Sheridan, Kimberly M.; Clark, Kevin
2015-01-01
Despite multiple efforts and considerable funding, historically marginalized groups (e.g., racial minorities and women) continue not to enter or persist in the most lucrative of fields--technology. Understanding the potency of culturally responsive teaching (CRT), some technology-enrichment programs modified CRP principles to establish a…
Space-Time Fusion Under Error in Computer Model Output: An Application to Modeling Air Quality
In the last two decades a considerable amount of research effort has been devoted to modeling air quality with public health objectives. These objectives include regulatory activities such as setting standards along with assessing the relationship between exposure to air pollutan...
Benchmark problems and solutions
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.
1995-01-01
The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.
Many Masses on One Stroke: Economic Computation of Quark Propagators
NASA Astrophysics Data System (ADS)
Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus
The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ-D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become — in the critical regime of small quark masses — competitive with BiCGStab and significantly better than the standard MR method, with optimal relaxation factor, and CG as applied to the normal equations.
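The multi-mass trick rests on the shift invariance of Krylov subspaces: the Krylov space generated by D and the source is the same for every κ, so one Krylov process can serve the whole mass trajectory. A small, hedged numerical sketch of this idea (a shifted Arnoldi/FOM projection on a random matrix standing in for the Wilson matrix, not the paper's QMR recurrence) is:

```python
# Hedged sketch of the multi-mass idea: the Krylov space of (sigma*I - D) is
# independent of sigma, so a single Arnoldi factorization of D yields
# approximate solutions of (sigma*I - D) x = b for every shift sigma = 1/kappa.
# This uses a plain shifted-FOM projection on a random test matrix, not the
# Wilson matrix or the QMR recurrence described in the abstract.
import numpy as np

def arnoldi(D, b, m):
    """m steps of Arnoldi: D @ V[:, :m] = V @ H, V orthonormal."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = D @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta

def multi_shift_solve(D, b, shifts, m):
    """Approximate x solving (sigma*I - D) x = b for all shifts at once."""
    V, H, beta = arnoldi(D, b, m)       # one Krylov process for all shifts
    Hm = H[:m, :m]
    e1 = np.zeros(m)
    e1[0] = beta
    sols = {}
    for sigma in shifts:
        # Only a small m x m system per extra shift: the "free" masses.
        y = np.linalg.solve(sigma * np.eye(m) - Hm, e1)
        sols[sigma] = V[:, :m] @ y
    return sols

rng = np.random.default_rng(0)
n = 40
D = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
shifts = [2.0, 2.5, 3.0]                    # play the role of 1/kappa values
sols = multi_shift_solve(D, b, shifts, m=n)  # m = n makes the solves exact
```

Each additional shift costs only a small dense solve and the storage for one more solution vector, mirroring the abstract's "all the heavier masses are given for free, at the price of extra storage."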
Group implicit concurrent algorithms in nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Ortiz, M.; Sotelino, E. D.
1989-01-01
During the 1970s and 1980s, considerable effort was devoted to developing efficient and reliable time stepping procedures for transient structural analysis. Mathematically, the equations governing this type of problem are generally stiff, i.e., they exhibit a wide spectrum in the linear range. The algorithms best suited to this type of application are those which accurately integrate the low frequency content of the response without necessitating the resolution of the high frequency modes. This means that the algorithms must be unconditionally stable, which in turn rules out explicit integration. The most exciting development in this area in recent years has been the advent of parallel computers with multiprocessing capabilities. This work is therefore mainly concerned with the development of parallel algorithms for structural dynamics. A primary objective is to devise unconditionally stable and accurate time stepping procedures which lend themselves to efficient implementation on concurrent machines. Some features of the new computer architecture are summarized. A brief survey of current efforts in the area is presented. A new class of concurrent procedures, or Group Implicit (GI) algorithms, is introduced and analyzed. The numerical simulation shows that GI algorithms hold considerable promise for application on coarse grain as well as medium grain parallel computers.
Research in the design of high-performance reconfigurable systems
NASA Technical Reports Server (NTRS)
Mcewan, S. D.; Spry, A. J.
1985-01-01
Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.
Miga, Michael I
2016-01-01
With the recent advances in computing, the opportunities to translate computational models to more integrated roles in patient treatment are expanding at an exciting rate. One area of considerable development has been directed towards correcting soft tissue deformation within image guided neurosurgery applications. This review captures the efforts that have been undertaken towards enhancing neuronavigation by the integration of soft tissue biomechanical models, imaging and sensing technologies, and algorithmic developments. In addition, the review speaks to the evolving role of modeling frameworks within surgery and concludes with some future directions beyond neurosurgical applications.
Neurocomputational mechanisms underlying subjective valuation of effort costs
Giehl, Kathrin; Sillence, Annie
2017-01-01
In everyday life, we have to decide whether it is worth exerting effort to obtain rewards. Effort can be experienced in different domains, with some tasks requiring significant cognitive demand and others being more physically effortful. The motivation to exert effort for reward is highly subjective and varies considerably across the different domains of behaviour. However, very little is known about the computational or neural basis of how different effort costs are subjectively weighed against rewards. Is there a common, domain-general system of brain areas that evaluates all costs and benefits? Here, we used computational modelling and functional magnetic resonance imaging (fMRI) to examine the mechanisms underlying value processing in both the cognitive and physical domains. Participants were trained on two novel tasks that parametrically varied either cognitive or physical effort. During fMRI, participants indicated their preferences between a fixed low-effort/low-reward option and a variable higher-effort/higher-reward offer for each effort domain. Critically, reward devaluation by both cognitive and physical effort was subserved by a common network of areas, including the dorsomedial and dorsolateral prefrontal cortex, the intraparietal sulcus, and the anterior insula. Activity within these domain-general areas also covaried negatively with reward and positively with effort, suggesting an integration of these parameters within these areas. Additionally, the amygdala appeared to play a unique, domain-specific role in processing the value of rewards associated with cognitive effort. These results are the first to reveal the neurocomputational mechanisms underlying subjective cost–benefit valuation across different domains of effort and provide insight into the multidimensional nature of motivation. PMID:28234892
Understanding and enhancing user acceptance of computer technology
NASA Technical Reports Server (NTRS)
Rouse, William B.; Morris, Nancy M.
1986-01-01
Technology-driven efforts to implement computer technology often encounter problems due to a lack of acceptance, or begrudging acceptance, by the personnel involved. It is argued that individuals' acceptance of automation, in terms of either computerization or computer aiding, is heavily influenced by their perceptions of the impact of the automation on their discretion in performing their jobs. It is suggested that desired levels of discretion reflect needs to feel in control and to achieve self-satisfaction in task performance, as well as perceptions of inadequacies of computer technology. Discussion of these factors leads to a structured set of considerations for performing front-end analysis, deciding what to automate, and implementing the resulting changes.
ERIC Educational Resources Information Center
Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Oliva, Doretta; Montironi, Gianluigi
2004-01-01
The use of microswitches has been considered a crucial strategy to help individuals with extensive multiple disabilities overcome passivity and achieve control of environmental stimulation (Crawford & Schuster, 1993; Gutowski, 1996; Ko, McConachie, & Jolleff, 1998). In recent years, considerable efforts have been made to extend the evaluation of…
A structure adapted multipole method for electrostatic interactions in protein dynamics
NASA Astrophysics Data System (ADS)
Niedermeier, Christoph; Tavan, Paul
1994-07-01
We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of order O(N²) for a system of N particles. Truncation methods which try to avoid that effort entail intolerably large errors in forces, energies, and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N) or even with O(N) if they are augmented by a sophisticated scheme for summing up forces. To further reduce the computational effort, we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to structural and dynamical features of the particular protein considered rather than chosen rigidly as a cubic grid. Compared to truncation methods, we manage to reduce errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.
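The truncation to charges and dipoles can be illustrated on a toy cluster: far from a group of point charges, the potential is well approximated by the cluster's total charge (monopole) plus its dipole moment about the cluster center. A minimal numerical sketch of just this low-order approximation (invented data, not the paper's hierarchical algorithm) is:

```python
# Sketch of a low-order multipole approximation: the potential of a cluster
# of point charges, evaluated far away, approximated by the cluster's total
# charge (monopole) and dipole moment about its center (Gaussian units).
# Toy example only; the paper embeds this in a hierarchical decomposition.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-0.5, 0.5, size=(20, 3))    # 20 charges clustered at origin
q = rng.uniform(-1.0, 1.0, size=20)

center = pos.mean(axis=0)
Q = q.sum()                                    # monopole moment
p = ((pos - center) * q[:, None]).sum(axis=0)  # dipole moment about the center

def potential_direct(r):
    """Exact potential: sum of q_i / |r - r_i|."""
    d = np.linalg.norm(r - pos, axis=1)
    return np.sum(q / d)

def potential_multipole(r):
    """Monopole + dipole approximation about the cluster center."""
    d = r - center
    dist = np.linalg.norm(d)
    return Q / dist + (p @ d) / dist**3

r_far = np.array([10.0, 4.0, -6.0])            # evaluation point far away
exact = potential_direct(r_far)
approx = potential_multipole(r_far)
```

The leading neglected term is the quadrupole, which falls off as 1/R³, so at a distance of roughly ten cluster radii the two-moment approximation is already very close to the direct sum.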
Computational design of RNAs with complex energy landscapes.
Höner zu Siederdissen, Christian; Hammer, Stefan; Abfalter, Ingrid; Hofacker, Ivo L; Flamm, Christoph; Stadler, Peter F
2013-12-01
RNA has become an integral building material in synthetic biology. Dominated by their secondary structures, which can be computed efficiently, RNA molecules are amenable not only to in vitro and in vivo selection, but also to rational, computation-based design. While the inverse folding problem of constructing an RNA sequence with a prescribed ground-state structure has received considerable attention for nearly two decades, there have been few efforts to design RNAs that can switch between distinct prescribed conformations. We introduce a user-friendly tool for designing RNA sequences that fold into multiple target structures. The underlying algorithm makes use of a combination of graph coloring and heuristic local optimization to find sequences whose energy landscapes are dominated by the prescribed conformations. A flexible interface allows the specification of a wide range of design goals. We demonstrate that bi- and tri-stable "switches" can be designed easily with moderate computational effort for the vast majority of compatible combinations of desired target structures. RNAdesign is freely available under the GPL-v3 license.
Computational Control Workstation: Users' perspectives
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Straube, Timothy M.; Tave, Jeffrey S.
1993-01-01
A Workstation has been designed and constructed for rapidly simulating motions of rigid and elastic multibody systems. We examine the Workstation from the point of view of analysts who use the machine in an industrial setting. Two aspects of the device distinguish it from other simulation programs. First, one uses a series of windows and menus on a computer terminal, together with a keyboard and mouse, to provide a mathematical and geometrical description of the system under consideration. The second hallmark is a facility for animating simulation results. An assessment of the amount of effort required to numerically describe a system to the Workstation is made by comparing the process to that used with other multibody software. The apparatus for displaying results as a motion picture is critiqued as well. In an effort to establish confidence in the algorithms that derive, encode, and solve equations of motion, simulation results from the Workstation are compared to answers obtained with other multibody programs. Our study includes measurements of computational speed.
Automating the parallel processing of fluid and structural dynamics calculations
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Cole, Gary L.
1987-01-01
The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.
The development of a virtual camera system for astronaut-rover planetary exploration.
Platt, Donald W; Boy, Guy A
2012-01-01
A virtual assistant is being developed for use by astronauts as they use rovers to explore the surface of other planets. This interactive database, called the Virtual Camera (VC), gives the user better situational awareness for exploration. It can be used for training, data analysis, and augmentation of actual surface exploration. This paper describes the development efforts and the human-computer interaction considerations involved in implementing a first-generation VC on a tablet mobile computer device. Scenarios for use will be presented. Evaluation and success criteria, such as efficiency in terms of processing time and precision, situational awareness, learnability, usability, and robustness, will also be presented. Initial testing and the impact of HCI design considerations on manipulation and improvement in situational awareness using a prototype VC will be discussed.
Power and Energy Considerations at Forward Operating Bases (FOBs)
2010-06-16
Anticipated additional plug loads by users: personal computers and gaming devices, coffee pots, refrigerators, lights, personal heaters... An effort was made to account for the significant amount of equipment that consumes power but is not on the unit's MTOE (printers, plotters, coffee pots, etc.)... 50 warfighters, including billeting, kitchen, laundry, shower, latrines, and a new wastewater treatment system. Capability/impact: compact, lightweight...
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung
2017-04-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
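The flavor of OCBA can be conveyed with the textbook allocation rule for picking the best of k noisy alternatives: non-best designs receive budget proportional to (sigma_i / delta_i)², where delta_i is the gap to the observed best, and the best design receives sigma_b * sqrt(sum of (N_i/sigma_i)²). A hedged sketch of that classic rule (not necessarily the exact rule derived in this paper, which is tailored to the PSO iteration) is:

```python
# Hedged sketch of the classic OCBA allocation rule for selecting the
# largest-mean design among k noisy alternatives. Close competitors (small
# gap, large noise) receive more simulation budget than clearly bad ones.
# Illustrative only; the paper derives a rule integrated into the PSO loop.
import numpy as np

def ocba_allocation(means, stds, budget):
    """Per-design sample counts under the classic OCBA rule.

    Assumes a unique observed best design.
    """
    means = np.asarray(means, float)
    stds = np.asarray(stds, float)
    b = int(np.argmax(means))                  # current observed best
    delta = means[b] - means                   # optimality gaps
    ratio = np.ones_like(means)
    mask = np.arange(len(means)) != b
    ratio[mask] = (stds[mask] / delta[mask]) ** 2
    # Best design: sigma_b * sqrt(sum over others of (N_i / sigma_i)^2).
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[mask] / stds[mask]) ** 2))
    return budget * ratio / ratio.sum()

alloc = ocba_allocation(means=[1.0, 0.9, 0.5, 0.2],
                        stds=[0.3, 0.3, 0.3, 0.3],
                        budget=1000)
```

Under equal noise, the design with the smallest gap to the best soaks up most of the budget, which is exactly the inefficiency of equal allocation that the abstract points out.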
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that performs the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous one.
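The round-robin backbone underlying such schedules can be generated with the classical circle method: fix one team and rotate the rest. A minimal sketch of that backbone alone (not Elitserien's constraint model, which layers divisional play, implied constraints, and symmetry breaking on top) is:

```python
# Minimal sketch of single round-robin generation by the classical "circle"
# method: team 0 stays fixed while the others rotate one slot per round.
# This shows only the round-robin backbone, not the paper's CP model.

def round_robin(n_teams):
    """Return a list of rounds, each a list of (home, away) pairs."""
    assert n_teams % 2 == 0, "add a dummy team (bye) if n is odd"
    teams = list(range(n_teams))
    rounds = []
    for r in range(n_teams - 1):
        pairs = []
        for i in range(n_teams // 2):
            a, b = teams[i], teams[n_teams - 1 - i]
            # Alternate home/away by round for a little balance.
            pairs.append((a, b) if r % 2 == 0 else (b, a))
        rounds.append(pairs)
        # Rotate every team except the first.
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]
    return rounds

schedule = round_robin(8)   # 7 rounds, 4 matches per round
```

Every unordered pair of teams meets exactly once across the n-1 rounds; a constraint model can then take such a structure as a starting point and impose the league-specific requirements.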
Computational methods in metabolic engineering for strain design.
Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L
2015-08-01
Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native, heterologous, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve the runtime performances have also been developed, which allow for more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms.
A modification in the technique of computing average lengths from the scales of fishes
Van Oosten, John
1953-01-01
In virtually all the studies that employ scales, otoliths, or bony structures to obtain the growth history of fishes, it has been the custom to compute lengths for each individual fish and from these data obtain the average growth rates for any particular group. This method involves a considerable amount of mathematical manipulation, time, and effort. Theoretically it should be possible to obtain the same information simply by averaging the scale measurements for each year of life and the lengths of the fish employed, and computing the average lengths from these data. This method would eliminate all calculations for individual fish. Although Van Oosten (1929: 338) pointed out many years ago the validity of this method of computation, his statements apparently have been overlooked by subsequent investigators.
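The two routes can be sketched with the classic direct-proportion back-calculation (length at year n proportional to the scale radius at that year); the record names, field names, and sample data below are illustrative assumptions:

```python
def back_calc_per_fish(records, year):
    """Traditional route: direct-proportion back-calculation
    L_n = (S_n / S) * L done fish by fish, then averaged."""
    vals = [r["scale_at_year"][year] / r["scale_total"] * r["length"]
            for r in records]
    return sum(vals) / len(vals)

def back_calc_aggregate(records, year):
    """Van Oosten's shortcut: average the scale measurements and the
    fish lengths first, then back-calculate once from the averages,
    eliminating all per-fish calculations."""
    n = len(records)
    mean_len = sum(r["length"] for r in records) / n
    mean_s = sum(r["scale_total"] for r in records) / n
    mean_sn = sum(r["scale_at_year"][year] for r in records) / n
    return mean_sn / mean_s * mean_len
```

When scale size is proportional to body length, the two routes agree; in general the shortcut is an approximation, which is the point Van Oosten argued is acceptable in practice.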
Multimedia architectures: from desktop systems to portable appliances
NASA Astrophysics Data System (ADS)
Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.
1997-01-01
Future desktop and portable computing systems will have as their core an integrated multimedia system. Such a system will seamlessly combine digital video, digital audio, computer animation, text, and graphics. Furthermore, such a system will allow for mixed-media creation, dissemination, and interactive access in real time. Multimedia architectures that need to support these functions have traditionally required special display and processing units for the different media types. This approach tends to be expensive and is inefficient in its use of silicon. Furthermore, such media-specific processing units are unable to cope with the fluid nature of the multimedia market wherein the needs and standards are changing and system manufacturers may demand a single component media engine across a range of products. This constraint has led to a shift towards providing a single-component multimedia specific computing engine that can be integrated easily within desktop systems, tethered consumer appliances, or portable appliances. In this paper, we review some of the recent architectural efforts in developing integrated media systems. We primarily focus on two efforts, namely the evolution of multimedia-capable general purpose processors and a more recent effort in developing single component mixed media co-processors. Design considerations that could facilitate the migration of these technologies to a portable integrated media system also are presented.
Adaptive effort investment in cognitive and physical tasks: a neurocomputational model
Verguts, Tom; Vassena, Eliana; Silvetti, Massimo
2015-01-01
Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex (ACC) and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species. PMID:25805978
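The core idea — learning from rewards and costs whether effort is worth exerting — can be illustrated with a toy reinforcement-learning sketch. All parameters, pay-off probabilities, and the two-action framing below are illustrative assumptions, far simpler than the paper's neurocomputational model:

```python
import random

def learn_effort_policy(episodes=2000, reward=1.0, effort_cost=0.4,
                        p_success_effort=0.9, p_success_lazy=0.3,
                        alpha=0.1, seed=0):
    """Toy sketch of adaptive effort investment: an agent learns action
    values for exerting vs. withholding effort and comes to exert it
    only when the expected reward gain outweighs the effort cost."""
    rng = random.Random(seed)
    q = {"effort": 0.0, "no_effort": 0.0}
    for _ in range(episodes):
        # epsilon-greedy choice between exerting and withholding effort
        a = max(q, key=q.get) if rng.random() > 0.1 else rng.choice(list(q))
        p = p_success_effort if a == "effort" else p_success_lazy
        r = (reward if rng.random() < p else 0.0)
        r -= effort_cost if a == "effort" else 0.0
        q[a] += alpha * (r - q[a])  # incremental value update
    return q
```

With a cheap effort cost the agent learns to exert effort; raise the cost past the reward gain and it learns apathy, loosely mirroring the dopamine-impairment simulations described above.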
A Comparison of Three Theoretical Methods of Calculating Span Load Distribution on Swept Wings
NASA Technical Reports Server (NTRS)
VanDorn, Nicholas H.; DeYoung, John
1947-01-01
Three methods for calculating span load distribution, those developed by V. M. Falkner, Wm. Mutterperl, and J. Weissinger, have been applied to five swept wings. The angles of sweep ranged from -45 degrees to +45 degrees. These methods were examined to establish their relative accuracy and ease of application. Experimentally determined loadings were used as a basis for judging accuracy. For the convenience of the reader, the computing forms and all information requisite to their application are included in appendixes. From the analysis it was found that the Weissinger method would be best suited to an over-all study of the effects of plan form on the span loading and associated characteristics of wings. The method gave good, but not best, accuracy and involved by far the least computing effort. The Falkner method gave the best accuracy but at a considerable expense in computing effort, and hence appeared to be most useful for a detailed study of a specific wing. The Mutterperl method offered no advantages in accuracy or facility over either of the other methods and hence is not recommended for use.
Aero-Structural Assessment of an Inflatable Aerodynamic Decelerator
NASA Technical Reports Server (NTRS)
Sheta, Essam F.; Venugopalan, Vinod; Tan, X. G.; Liever, Peter A.; Habchi, Sami D.
2010-01-01
NASA is conducting an Entry, Descent and Landing Systems Analysis (EDL-SA) Study to determine the key technology development projects that should be undertaken for enabling the landing of large payloads on Mars for both human and robotic missions. Inflatable Aerodynamic Decelerators (IADs) are one of the candidate technologies. A variety of EDL architectures are under consideration. The current effort focuses on the development and simulation of a computational framework for inflatable structures.
Increasing reliability of Gauss-Kronrod quadrature by Eratosthenes' sieve method
NASA Astrophysics Data System (ADS)
Adam, Gh.; Adam, S.
2001-04-01
The reliability of the local error estimates returned by the Gauss-Kronrod quadrature rules can be raised up to the theoretical 100% rate of success, under error estimate sharpening, provided a number of natural validating conditions are required. The self-validating scheme of the local error estimates, which is easy to implement and adds little supplementary computing effort, strengthens considerably the correctness of the decisions within the automatic adaptive quadrature.
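The mechanism behind such local error estimates can be sketched by pairing a coarse and a fine quadrature rule. As an assumption for illustration, the stand-in below pairs two Gauss-Legendre rules of different order rather than an actual Gauss-Kronrod pair (a true G7/K15 pair reuses the coarse rule's nodes, which this sketch does not):

```python
import numpy as np

def gauss(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    return xr * np.sum(w * f(xm + xr * x))

def estimate_with_error(f, a, b, n=7):
    """Pair a coarse and a fine rule, as Gauss-Kronrod pairs do: the
    fine result serves as the estimate and |fine - coarse| as the raw
    local error estimate, which validating conditions of the kind the
    paper describes then sharpen before an adaptive driver trusts it."""
    coarse = gauss(f, a, b, n)
    fine = gauss(f, a, b, 2 * n + 1)
    return fine, abs(fine - coarse)
```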
Cost considerations in automating the library.
Bolef, D
1987-01-01
The purchase price of a computer and its software is but a part of the cost of any automated system. There are many additional costs, including one-time costs of terminals, printers, multiplexors, microcomputers, consultants, workstations and retrospective conversion, and ongoing costs of maintenance and maintenance contracts for the equipment and software, telecommunications, and supplies. This paper examines those costs in an effort to produce a more realistic picture of an automated system. PMID:3594021
Neutron skyshine calculations for the PDX tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, F.J.; Nigg, D.W.
1979-01-01
The Poloidal Divertor Experiment (PDX) at Princeton will be the first operating tokamak to require a substantial radiation shield. The PDX shielding includes a water-filled roof shield over the machine to reduce air scattering skyshine dose in the PDX control room and at the site boundary. During the design of this roof shield a unique method was developed to compute the neutron source emerging from the top of the roof shield for use in Monte Carlo skyshine calculations. The method is based on simple, one-dimensional calculations rather than multidimensional calculations, resulting in considerable savings in computer time and input preparation effort. This method is described.
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when no node-to-element correlation exists between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
Interoperability of Neuroscience Modeling Software
Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik
2009-01-01
Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374
Numerical simulation of helicopter engine plume in forward flight
NASA Technical Reports Server (NTRS)
Dimanlig, Arsenio C. B.; Vandam, Cornelis P.; Duque, Earl P. N.
1994-01-01
Flowfields around helicopters contain complex flow features such as large separated flow regions, vortices, shear layers, blown and suction surfaces, and an inherently unsteady flow imposed by the rotor system. Another complicated feature of helicopters is their infrared signature. Typically, the aircraft's exhaust plume interacts with the rotor downwash, the fuselage's complicated flowfield, and the fuselage itself, giving each aircraft a unique IR signature at given flight conditions. The goal of this project was to compute the flow about a realistic helicopter fuselage including the interaction of the engine air intakes and exhaust plume. The computations solve the Thin-Layer Navier-Stokes equations using overset type grids and in particular use the OVERFLOW code by Buning of NASA Ames. During this three month effort, an existing grid system of the Comanche Helicopter was to be modified to include the engine inlet and the hot engine exhaust. The engine exhaust was to be modeled as hot air exhaust. However, considerable changes in the fuselage geometry required a complete regridding of the surface and volume grids. The engine plume computations have been delayed to future efforts. The results of the current work consist of a complete regeneration of the surface and volume grids of the most recent Comanche fuselage along with a flowfield computation.
A comparative analysis of soft computing techniques for gene prediction.
Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand
2013-07-01
The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important and is currently the focus of many research efforts. Beside its scientific interest in the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described. These are followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.
Engineering specification and system design for CAD/CAM of custom shoes: UMC project effort
NASA Technical Reports Server (NTRS)
Bao, Han P.
1990-01-01
Further experimentations were made to improve the design and fabrication techniques of the integrated sole. The sole design is shown to be related to the foot position requirements and the actual shape of the foot including presence of neurotropic ulcers or other infections. Factors for consideration were: heel pitch, balance line, and rigidity conditions of the foot. Machining considerations were also part of the design problem. Among these considerations, widths of each contour, tool motion, tool feed rate, depths of cut, and slopes of cut at the boundary were the key elements. The essential fabrication techniques evolved around the idea of machining a mold then, using quick-firm latex material, casting the sole through the mold. Two main mold materials were experimented with: plaster and wood. Plaster was very easy to machine and shape but could barely support the pressure in the hydraulic press required by the casting process. Wood was found to be quite effective in terms of relative cost, strength, and surface smoothness except for the problem of cutting against the fibers which could generate ragged surfaces. The programming efforts to convert the original dBase programs into C programs so that they could be executed on the SUN Computer at North Carolina State University are discussed.
NASA Astrophysics Data System (ADS)
Bouchpan-Lerust-Juéry, L.
2007-08-01
Current and next generation on-board computer systems tend to implement real-time embedded control applications (e.g. Attitude and Orbit Control Subsystem (AOCS), Packet Utilization Standard (PUS), spacecraft autonomy, etc.) which must meet high standards of reliability and predictability as well as safety. Meeting these requirements demands a considerable amount of effort and cost from the space software industry. This paper, in its first part, presents a free Open Source integrated solution to develop RTAI applications from analysis, design, and simulation through direct implementation using code generation based on Open Source, and in its second part summarises this suggested approach, its results, and the conclusions for further work.
Jun, Kyungtaek; Kim, Dongwook
2018-01-01
X-ray computed tomography has been studied in various fields. Considerable effort has been focused on reconstructing the projection image set from a rigid-type specimen. However, reconstruction of images projected from an object showing elastic motion has received minimal attention. In this paper, a mathematical solution is proposed for reconstructing the projection image set obtained from an object with specific elastic motions (specimens that expand or contract periodically, regularly, and elliptically). To reconstruct the projection image set from expanded or contracted specimens, methods are presented for detection of the sample's motion modes, mathematical rescaling of pixel values, and conversion of the projection angle for a common layer.
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease overall computational time requirements of spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
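The wall-function idea can be illustrated by solving a log-law relation for the friction velocity, from which the wall shear stress follows analytically. As an assumption for illustration, the sketch below uses the classic incompressible log law with standard constants; Barnwell's defect wall-function formulation used in the thesis differs in detail:

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, B=5.0, tol=1e-10):
    """Solve the log law  u/u_tau = (1/kappa) * ln(y * u_tau / nu) + B
    for u_tau by Newton iteration, given the velocity u at the first
    grid point y off the wall. Because u_tau (and hence the wall shear
    stress tau_w = rho * u_tau**2) comes from this relation, the grid
    need not resolve the laminar-viscous sublayer."""
    ut = 0.05 * u  # initial guess, a typical few percent of u
    for _ in range(50):
        f = u / ut - math.log(y * ut / nu) / kappa - B
        df = -u / ut**2 - 1.0 / (kappa * ut)
        ut_new = ut - f / df
        if abs(ut_new - ut) < tol:
            return ut_new
        ut = ut_new
    return ut
```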
The Berlin Brain-Computer Interface: Progress Beyond Communication and Control
Blankertz, Benjamin; Acqualagna, Laura; Dähne, Sven; Haufe, Stefan; Schultze-Kraft, Matthias; Sturm, Irene; Ušćumlic, Marija; Wenzel, Markus A.; Curio, Gabriel; Müller, Klaus-Robert
2016-01-01
The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way away from integrating Brain-Computer Interface (BCI) technology in general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already now been obtained involving a BCI as research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world. PMID:27917107
Robotic tape library system level testing at NSA: Present and planned
NASA Technical Reports Server (NTRS)
Shields, Michael F.
1994-01-01
In the present era of declining Defense budgets, increased pressure has been placed on the DOD to utilize Commercial Off the Shelf (COTS) solutions to incrementally solve a wide variety of our computer processing requirements. With the rapid growth in processing power, significant expansion of high performance networking, and the increased complexity of applications data sets, the requirement for high performance, large capacity, reliable and secure, and most of all affordable robotic tape storage libraries has greatly increased. Additionally, the migration to a heterogeneous, distributed computing environment has further complicated the problem. With today's open system compute servers approaching yesterday's supercomputer capabilities, the need for affordable, reliable, secure Mass Storage Systems (MSS) has taken on an ever increasing importance to our processing center's ability to satisfy operational mission requirements. To that end, NSA has established an in-house capability to acquire, test, and evaluate COTS products. Its goal is to qualify a set of COTS MSS libraries, thereby achieving a modicum of standardization for robotic tape libraries which can satisfy our low, medium, and high performance file and volume serving requirements. In addition, NSA has established relations with other Government Agencies to complement this in-house effort and to maximize our research, testing, and evaluation work. While the preponderance of the effort is focused at the high end of the storage ladder, considerable effort will be extended this year and next toward the server-class, or mid-range, storage systems.
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive effort for low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly required. Like an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline HTSFinder (high-throughput signature finder) with a corresponding k-mer generator GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in the arbitrarily selected target and nontarget databases. Hadoop and MapReduce as parallel and distributed computing tools with commodity hardware are used in this pipeline. This approach brings the power of high-performance computing into the ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genome. A considerable number of detected unique and common DNA signatures of the target database bring the opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
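The signature search itself reduces to set operations over k-mer collections. A toy, in-memory sketch of the idea follows (function names and the tiny sequences are illustrative assumptions; HTSFinder performs the same logic at genome scale via Hadoop/MapReduce):

```python
def kmers(seq, k):
    """Set of all overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def signatures(targets, nontargets, k):
    """Common signatures of the target set: k-mers present in every
    target genome and absent from every non-target genome."""
    common = set.intersection(*(kmers(s, k) for s in targets))
    excluded = set.union(*(kmers(s, k) for s in nontargets)) if nontargets else set()
    return common - excluded
```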
Separate valuation subsystems for delay and effort decision costs.
Prévost, Charlotte; Pessiglione, Mathias; Météreau, Elise; Cléry-Melin, Marie-Laure; Dreher, Jean-Claude
2010-10-20
Decision making consists of choosing among available options on the basis of a valuation of their potential costs and benefits. Most theoretical models of decision making in behavioral economics, psychology, and computer science propose that the desirability of outcomes expected from alternative options can be quantified by utility functions. These utility functions allow a decision maker to assign subjective values to each option under consideration by weighting the likely benefits and costs resulting from an action, and to select the one with the highest subjective value. Here, we used model-based neuroimaging to test whether the human brain uses separate valuation systems for rewards (erotic stimuli) associated with different types of costs, namely, delay and effort. We show that humans devalue rewards associated with physical effort in a strikingly similar fashion to rewards associated with delays, and that a single computational model derived from economics theory can account for the behavior observed in both delay discounting and effort discounting. However, our neuroimaging data reveal that the human brain uses distinct valuation subsystems for different types of costs, representing delayed rewards and future energetic expenses in opposite fashion. The ventral striatum and the ventromedial prefrontal cortex represent the increasing subjective value of delayed rewards, whereas a distinct network, composed of the anterior cingulate cortex and the anterior insula, represents the decreasing value of the effortful option, coding the expected expense of energy. Together, these data demonstrate that the valuation processes underlying different types of costs can be fractionated at the cerebral level.
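A discounting model of the kind described can be sketched with the single-parameter hyperbolic form common in this literature, SV = R / (1 + k·C), applied to either a delay or an effort cost with k fit per cost type. The function names and numeric values below are illustrative assumptions:

```python
def hyperbolic_value(reward, cost, k):
    """Hyperbolic discounting SV = R / (1 + k * C): the cost C may be
    a delay or an effort level, with discount rate k fit per cost type."""
    return reward / (1.0 + k * cost)

def choose(reward_costly, cost, k, reward_immediate):
    """Pick the option with the higher subjective value."""
    sv = hyperbolic_value(reward_costly, cost, k)
    return "costly" if sv > reward_immediate else "immediate"
```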
Development of axisymmetric lattice Boltzmann flux solver for complex multiphase flows
NASA Astrophysics Data System (ADS)
Wang, Yan; Shu, Chang; Yang, Li-Ming; Yuan, Hai-Zhuan
2018-05-01
This paper presents an axisymmetric lattice Boltzmann flux solver (LBFS) for simulating axisymmetric multiphase flows. In the solver, the two-dimensional (2D) multiphase LBFS is applied to reconstruct macroscopic fluxes excluding axisymmetric effects. Source terms accounting for axisymmetric effects are introduced directly into the governing equations. As compared to the conventional axisymmetric multiphase lattice Boltzmann (LB) method, the present solver has the kinetic feature for flux evaluation and avoids complex derivations of external forcing terms. In addition, the present solver also saves considerable computational effort in comparison with three-dimensional (3D) computations. The capability of the proposed solver in simulating complex multiphase flows is demonstrated by studying single bubble rising in a circular tube. The obtained results compare well with the published data.
Automation of Shuttle Tile Inspection - Engineering methodology for Space Station
NASA Technical Reports Server (NTRS)
Wiskerchen, M. J.; Mollakarimi, C.
1987-01-01
The Space Systems Integration and Operations Research Applications (SIORA) Program was initiated in late 1986 as a cooperative applications research effort between Stanford University, NASA Kennedy Space Center, and Lockheed Space Operations Company. One of the major initial SIORA tasks was the application of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. This effort has adopted a systems engineering approach consisting of an integrated set of rapid prototyping testbeds in which a government/university/industry team of users, technologists, and engineers test and evaluate new concepts and technologies within the operational world of Shuttle. These integrated testbeds include speech recognition and synthesis, laser imaging inspection systems, distributed Ada programming environments, distributed relational database architectures, distributed computer network architectures, multimedia workbenches, and human factors considerations.
Progress in computational toxicology.
Ekins, Sean
2014-01-01
Computational methods have been widely applied to toxicology across pharmaceutical, consumer product and environmental fields over the past decade. Progress in computational toxicology is now reviewed. A literature review was performed on computational models for hepatotoxicity (e.g. for drug-induced liver injury (DILI)), cardiotoxicity, renal toxicity and genotoxicity. In addition, various publications that use machine learning methods have been highlighted. Several computational toxicology model datasets from past publications were used to compare Bayesian and Support Vector Machine (SVM) learning methods. The increasing amounts of data for defined toxicology endpoints have enabled machine learning models that are increasingly used for predictions. It is shown that across many different models Bayesian and SVM methods perform similarly based on cross-validation data. Considerable progress has been made in computational toxicology over the past decade in both model development and the availability of larger scale or 'big data' models. Future efforts in toxicology data generation will likely provide us with hundreds of thousands of compounds that are readily accessible for machine learning models. These models will cover relevant chemistry space for pharmaceutical, consumer product and environmental applications. Copyright © 2013 Elsevier Inc. All rights reserved.
Printed Arabic optical character segmentation
NASA Astrophysics Data System (ADS)
Mohammad, Khader; Ayyesh, Muna; Qaroush, Aziz; Tumar, Iyad
2015-03-01
Considerable progress in recognition techniques for many non-Arabic characters has been achieved. By contrast, little effort has been put into research on Arabic characters. In any Optical Character Recognition (OCR) system, the segmentation step is usually the essential stage, to which an extensive portion of processing is devoted and to which a considerable share of recognition errors is attributed. In this research, a novel segmentation approach for machine-printed Arabic text with diacritics is proposed. The proposed method reduces computation and errors, gives a clear description of the sub-word, and has advantages over the skeleton approach, in which data and information about the character can be lost. Both initial evaluation and testing of the proposed method have been performed using MATLAB and show promising results of 98.7%.
Hydrodynamic Analyses and Evaluation of New Fluid Film Bearing Concepts
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Dimofte, Florin
1998-01-01
Over the past several years, numerical and experimental investigations have been performed on a waved journal bearing. The research work was undertaken by Dr. Florin Dimofte, a Senior Research Associate in the Mechanical Engineering Department at the University of Toledo. Dr. Theo Keith, Distinguished University Professor in the Mechanical Engineering Department, was the Technical Coordinator of the project. The wave journal bearing is a bearing with a slight but precise variation in its circular profile such that a waved profile is circumscribed on the inner bearing diameter. The profile has a wave amplitude equal to a fraction of the bearing clearance. Prior to this period of research on the wave bearing, computer codes were written and an experimental facility was established. During this period of research, considerable effort was directed towards the study of the bearing's stability. The previously developed computer codes and the experimental facility were of critical importance in performing this stability research. A collection of papers and reports was written to describe the results of this work. The attached collection captures that effort and represents the research output during the grant period.
Shannon information, LMC complexity and Rényi entropies: a straightforward approach.
López-Ruiz, Ricardo
2005-04-01
The LMC complexity, an indicator of complexity based on a probabilistic description, is revisited. A straightforward approach allows us to establish the time evolution of this indicator in a near-equilibrium situation and offers new insight into interpreting the LMC complexity for a general non-equilibrium system. Its relationship with the Rényi entropies is also explained. One of the advantages of this indicator is that its calculation does not require considerable computational effort in many cases of physical and biological interest.
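For reference, the LMC indicator discussed above is conventionally defined as the product of a Shannon-entropy term and a "disequilibrium" term measuring distance from equiprobability. In its standard form for a discrete distribution $\{p_i\}_{i=1}^{N}$ (symbols as commonly used in the literature):

```latex
C_{\mathrm{LMC}} = H \cdot D, \qquad
H = -k \sum_{i=1}^{N} p_i \log p_i, \qquad
D = \sum_{i=1}^{N} \left( p_i - \frac{1}{N} \right)^{2}
```

Both $H$ and $D$ are simple sums over the probabilities, which is why, as the abstract notes, computing the indicator is cheap in many cases of interest.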
Acceleration of low order finite element computation with GPUs (Invited)
NASA Astrophysics Data System (ADS)
Knepley, M. G.
2010-12-01
Considerable effort has been focused on GPU acceleration of high-order spectral element methods and discontinuous Galerkin finite element methods. However, these methods are not universally applicable, and much of the existing FEM software base employs low-order methods. In this talk, we present a formulation of FEM, using the PETSc framework from ANL, which is amenable to GPU acceleration even at very low order. In addition, using the FEniCS system for FEM, we show that the relevant kernels can be automatically generated and optimized using a symbolic manipulation system.
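To make concrete why low-order FEM is hard to accelerate (tiny, memory-bound element kernels rather than the large dense operations of high-order methods), here is a hedged sketch of the local stiffness kernel for a linear (P1) triangle in NumPy. It is illustrative only and not taken from the PETSc or FEniCS codebases:

```python
import numpy as np

def p1_stiffness(verts):
    """Local stiffness matrix for a P1 (linear) triangle with vertices
    given as a (3, 2) array. The whole kernel is a handful of flops on a
    3x3 matrix -- the low arithmetic intensity the talk is concerned with."""
    x, y = verts[:, 0], verts[:, 1]
    area = 0.5 * abs((x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0]))
    # Gradients of the three barycentric basis functions are constant.
    b = np.array([y[1]-y[2], y[2]-y[0], y[0]-y[1]])
    c = np.array([x[2]-x[1], x[0]-x[2], x[1]-x[0]])
    grads = np.stack([b, c], axis=1) / (2.0 * area)
    return area * grads @ grads.T

# Unit right triangle as a smoke test.
K = p1_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
```

A GPU formulation must batch thousands of such 3x3 kernels per thread block to keep the device busy, which is the essential difference from the high-order case.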
Electron tubes for industrial applications
NASA Astrophysics Data System (ADS)
Gellert, Bernd
1994-05-01
This report reviews research and development efforts of recent years on vacuum electron tubes, in particular power grid tubes for industrial applications. Physical and chemical effects are discussed that determine the performance of today's devices. Owing to progress in the fundamental understanding of materials and to newly developed processes, the reliability and reproducibility of power grid tubes have improved considerably. Modern computer-controlled manufacturing methods ensure high reproducibility of production, and continuous quality certification according to ISO 9001 guarantees future high quality standards. Some typical applications of these tubes are given as examples.
Guidance for human interface with artificial intelligence systems
NASA Technical Reports Server (NTRS)
Potter, Scott S.; Woods, David D.
1991-01-01
The beginning of a research effort to collect and integrate existing research findings about how to combine computer power and people is discussed, including problems and pitfalls as well as desirable features. The goal of the research is to develop guidance for the design of human interfaces with intelligent systems. Fault management tasks in NASA domains are the focus of the investigation. Research is being conducted to support the development of guidance for designers that will enable them to take human interface considerations into account during the creation of intelligent systems.
Design of a fault-tolerant reversible control unit in molecular quantum-dot cellular automata
NASA Astrophysics Data System (ADS)
Bahadori, Golnaz; Houshmand, Monireh; Zomorodi-Moghadam, Mariam
Quantum-dot cellular automata (QCA) is a promising emerging nanotechnology that has been attracting considerable attention due to its small feature size, ultra-low power consumption, and high clock frequency. Therefore, there have been many efforts to design computational units based on this technology. Despite these advantages of QCA-based nanotechnologies, their implementation is susceptible to a high error rate. On the other hand, reversible computing leads to zero bit erasures and no energy dissipation. Because reversible computation does not lose information, faults can be detected with high probability. In this paper, we first propose a fault-tolerant control unit using reversible gates that improves on the previous design. The proposed design is then synthesized to the QCA technology and simulated with the QCADesigner tool. Evaluation results demonstrate the performance of the proposed approach.
Computer integrated documentation
NASA Technical Reports Server (NTRS)
Boy, Guy
1991-01-01
The main technical issues of the Computer Integrated Documentation (CID) project are presented. The problem of automating document management and maintenance is analyzed both from an artificial intelligence viewpoint and from a human factors viewpoint. Possible technologies for CID are reviewed: conventional approaches to indexing and information retrieval; hypertext; and knowledge-based systems. A particular effort was made to provide an appropriate representation for contextual knowledge. This representation is used to generate context on hypertext links; thus, indexing in CID is context sensitive. The implementation of the current version of CID is described. It includes a hypertext database, a knowledge-based management and maintenance system, and a user interface. A series of theoretical considerations is also presented, including navigation in hyperspace, acquisition of indexing knowledge, generation and maintenance of large documentation, and relations to other work.
Cryptography and the Internet: lessons and challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCurley, K.S.
1996-12-31
The popularization of the Internet has brought fundamental changes to the world, because it allows a universal method of communication between computers. This carries enormous benefits with it, but also raises many security considerations. Cryptography is a fundamental technology used to provide security for computer networks, and there is currently a widespread engineering effort to incorporate cryptography into various aspects of the Internet. The system-level engineering required to provide security services for the Internet carries some important lessons for researchers whose study is focused on narrowly defined problems. It also offers challenges to the cryptographic research community by raising new questions not adequately addressed by the existing body of knowledge. This paper attempts to summarize some of these lessons and challenges for the cryptographic research community.
Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J
2016-12-08
Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM), which fit breeding values explicitly, and marker effects models (MEM), which express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends on the size of the pedigree, the number of genotyped animals, and the number of loci. We present computational strategies to avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of those being genotyped. We demonstrate the practicality of both the original MEM and the hybrid model using real data with 6,179,960 animals in the pedigree, 4,934,101 phenotypes, and 31,453 animals genotyped at 40,214 informative loci. Completing a single-trait analysis on a desktop computer with four graphics cards required about 3 h using the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte Carlo (MCMC) samples of breeding values, which allowed making inferences from posterior means, variances, and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. We present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes.
Its practicality and the considerable reduction in computing effort were demonstrated. This model can readily be extended to accommodate multiple traits, multiple breeds, maternal effects, and additional random effects such as polygenic residual effects.
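The preconditioned conjugate gradient solutions mentioned above can be sketched generically. The following Jacobi-preconditioned CG in NumPy is an illustrative stand-in: the authors' actual solver, preconditioner, and GPU implementation for the mixed-model equations are not specified here, and the tiny demo system is synthetic.

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    """Jacobi-preconditioned conjugate gradients for a symmetric positive
    definite system Ax = b -- the kind of iterative solver that avoids
    forming large dense inverses of the mixed-model equations."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # update search direction
        rz = rz_new
    return x

# Small SPD demo system (illustrative only).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b)
```

In the single-step setting, the key point is that only matrix-vector products with the coefficient matrix are needed, so the dense genotype-related blocks never have to be stored explicitly.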
Computational Infrastructure for Geodynamics (CIG)
NASA Astrophysics Data System (ADS)
Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.
2004-12-01
Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. 
The CIG initiative has already started to leverage and develop long-term strategic partnerships with open source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.
An evaluation of Computational Fluid dynamics model for flood risk analysis
NASA Astrophysics Data System (ADS)
Di Francesco, Silvia; Biscarini, Chiara; Montesarchio, Valeria
2014-05-01
This work presents an analysis of the hydrological-hydraulic engineering requisites for risk evaluation and efficient flood damage reduction plans. Most research efforts have been dedicated to the scientific and technical aspects of risk assessment, providing estimates of possible alternatives and of the associated risk. In the decision-making process for a mitigation plan, the contribution of scientists is crucial, since risk-damage analysis is based on evaluation of the flow field and of the hydraulic risk, as well as on economic and societal considerations. The present paper focuses on the first part of this process, the mathematical modelling of flood events, which is the basis for all further considerations. The evaluation of the potentially catastrophic damage consequent to a flood event, and in particular to a dam failure, requires modelling the flood in sufficient detail to capture the spatial and temporal evolution of the event, as well as the velocity field. Thus, selecting an appropriate mathematical model to correctly simulate flood routing is an essential step. In this work we present the application of two 3D computational fluid dynamics models to a synthetic and a real case study in order to evaluate the evolution of the flow field and the associated flood risk. The first model is based on the open-source CFD platform OpenFOAM. Water flow is schematized with a classical continuum approach based on the Navier-Stokes equations, coupled with the Volume of Fluid (VOF) method to take into account the multiphase character of the river bottom-water-air system. The second model is based on the lattice Boltzmann method, an innovative numerical fluid dynamics scheme based on Boltzmann's kinetic equation that represents the flow dynamics at the macroscopic level by incorporating a microscopic kinetic approach. The fluid is seen as composed of particles that can move and collide with one another.
Simulation results from both models are promising and congruent with experimental results available in the literature, though the LBM model requires less computational effort than the Navier-Stokes-based one.
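For reference, the kinetic scheme referred to above evolves particle distribution functions $f_i$ along a set of discrete lattice velocities $\mathbf{e}_i$. In the widely used single-relaxation-time (BGK) form (a standard statement; the abstract does not specify which LBM variant the authors used), the update reads:

```latex
f_i(\mathbf{x} + \mathbf{e}_i \,\Delta t,\; t + \Delta t)
  = f_i(\mathbf{x}, t)
  - \frac{1}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right],
\qquad
\rho = \sum_i f_i, \quad \rho\,\mathbf{u} = \sum_i \mathbf{e}_i f_i
```

The purely local collision step and nearest-neighbour streaming are what make the method cheap and highly parallelizable compared with solving the Navier-Stokes equations directly.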
Computational thinking in life science education.
Rubinstein, Amir; Chor, Benny
2014-11-01
We join the increasing call to take computational education of life science students a step further, beyond teaching mere programming and employing existing software tools. We describe a new course, focusing on enriching the curriculum of life science students with abstract, algorithmic, and logical thinking, and exposing them to the computational "culture." The design, structure, and content of our course are influenced by recent efforts in this area, collaborations with life scientists, and our own instructional experience. Specifically, we suggest that an effective course of this nature should: (1) devote time to explicitly reflect upon computational thinking processes, resisting the temptation to drift to purely practical instruction, (2) focus on discrete notions, rather than on continuous ones, and (3) have basic programming as a prerequisite, so students need not be preoccupied with elementary programming issues. We strongly recommend that the mere use of existing bioinformatics tools and packages should not replace hands-on programming. Yet, we suggest that programming will mostly serve as a means to practice computational thinking processes. This paper deals with the challenges and considerations of such computational education for life science students. It also describes a concrete implementation of the course and encourages its use by others.
Examining Regionalization Efforts to Develop Lessons Learned and Consideration for Department of Defense Medical Facilities
Schuh, Erik B., 2Lt, USAF (AFIT-ENS-MS-17-M-156, Department of the Air Force, Air University)
2017-03-23
Role of Information in Consumer Selection of Health Plans
Sainfort, François; Booske, Bridget C.
1996-01-01
Considerable efforts are underway in the public and private sectors to increase the amount of information available to consumers when making health plan choices. The objective of this study was to examine the role of information in consumer health plan decision-making. A computer system was developed which provides different plan descriptions with the option of accessing varying types and levels of information. The system tracked the information search processes and recorded the hypothetical plan choices of 202 subjects. Results are reported showing the relationship between information and problem perception, preference structure, choice of plan, and attitude towards the decision. PMID:10165036
Virtual Screening Approaches towards the Discovery of Toll-Like Receptor Modulators
Pérez-Regidor, Lucía; Zarioh, Malik; Ortega, Laura; Martín-Santamaría, Sonsoles
2016-01-01
This review aims to summarize the latest efforts performed in the search for novel chemical entities such as Toll-like receptor (TLR) modulators by means of virtual screening techniques. This is an emergent research field with only very recent (and successful) contributions. Identification of drug-like molecules with potential therapeutic applications for the treatment of a variety of TLR-regulated diseases has attracted considerable interest due to the clinical potential. Additionally, the virtual screening databases and computational tools employed have been overviewed in a descriptive way, widening the scope for researchers interested in the field. PMID:27618029
Percutaneous Coronary Intervention for a Patient with Left Main Coronary Compression Syndrome.
Ikegami, Ryutaro; Ozaki, Kazuyuki; Ozawa, Takuya; Hirono, Satoru; Ito, Masahiro; Minamino, Tohru
2018-05-15
Left main coronary compression syndrome rarely occurs in patients with severe pulmonary hypertension. A 65-year-old woman with severe pulmonary hypertension due to an atrial septal defect suffered from angina on effort. Cardiac computed tomography and coronary angiography revealed considerable stenosis of the left main coronary artery (LMA) caused by compression between the dilated main pulmonary artery trunk and the sinus of Valsalva. Stenting of the LMA under intravascular ultrasound imaging was effective for the treatment of angina. We herein report the diagnosis and management of this condition with a brief literature review.
Support for User Interfaces for Distributed Systems
NASA Technical Reports Server (NTRS)
Eychaner, Glenn; Niessner, Albert
2005-01-01
An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.
Geospatial Data as a Service: Towards planetary scale real-time analytics
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Larraondo, P. R.; Antony, J.; Richards, C. J.
2017-12-01
The rapid growth of earth systems, environmental and geophysical datasets poses a challenge to both end-users and infrastructure providers. For infrastructure and data providers, tasks like managing, indexing and storing large collections of geospatial data need to take into consideration the various use cases by which consumers will want to access and use the data. Considerable investment has been made by the Earth Science community to produce suitable real-time analytics platforms for geospatial data. Different interfaces have been defined to provide data services. Unfortunately, there is considerable variation among the standards, protocols, and data models, which have been designed to target specific communities or working groups. The Australian National University's National Computational Infrastructure (NCI) is used for a wide range of activities in the geospatial community. Earth observations, climate and weather forecasting are examples of these communities which generate large amounts of geospatial data. The NCI has been carrying out a significant effort to develop a data and services model that enables the cross-disciplinary use of data. Recent developments in cloud and distributed computing provide a publicly accessible platform where new infrastructures can be built. One of the key components these technologies offer is the possibility of having "limitless" compute power next to where the data is stored. This model is rapidly transforming data delivery from centralised monolithic services towards ubiquitous distributed services that scale up and down, adapting to fluctuations in demand. NCI has developed GSKY, a scalable, distributed server which presents a new approach for geospatial data discovery and delivery based on OGC standards. We will present the architecture and motivating use-cases that drove GSKY's collaborative design, development and production deployment.
We show our approach offers the community valuable exploratory analysis capabilities, for dealing with petabyte-scale geospatial data collections.
Active-learning strategies in computer-assisted drug discovery.
Reker, Daniel; Schneider, Gisbert
2015-04-01
High-throughput compound screening is time and resource consuming, and considerable effort is invested into screening compound libraries, profiling, and selecting the most promising candidates for further testing. Active-learning methods assist the selection process by focusing on areas of chemical space that have the greatest chance of success while considering structural novelty. The core feature of these algorithms is their ability to adapt the structure-activity landscapes through feedback. Instead of full-deck screening, only focused subsets of compounds are tested, and the experimental readout is used to refine molecule selection for subsequent screening cycles. Once implemented, these techniques have the potential to reduce costs and save precious materials. Here, we provide a comprehensive overview of the various computational active-learning approaches and outline their potential for drug discovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
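The adapt-through-feedback cycle described above can be sketched as an uncertainty-driven selection loop. This is a hedged illustration on synthetic data, not a real screening campaign; the model choice, batch size, and uncertainty criterion are all assumptions:

```python
# Illustrative active-learning cycle: train on the "screened" subset, query
# the compounds the model is least certain about, add their labels, repeat.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=1)
labeled = list(range(20))                      # initial screened compounds
pool = [i for i in range(len(y)) if i not in labeled]

model = RandomForestClassifier(random_state=1)
for cycle in range(5):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])[:, 1]
    uncertainty = -np.abs(proba - 0.5)         # largest where p is near 0.5
    picks = np.argsort(uncertainty)[-10:]      # 10 most uncertain compounds
    chosen = [pool[i] for i in picks]
    labeled.extend(chosen)                     # "experimental readout" feedback
    pool = [i for i in pool if i not in chosen]
```

In place of (or alongside) uncertainty, a selection score can also reward structural novelty, which is the trade-off the review emphasizes.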
Design considerations for a 10-kW integrated hydrogen-oxygen regenerative fuel cell system
NASA Technical Reports Server (NTRS)
Hoberecht, M. A.; Miller, T. B.; Rieker, L. L.; Gonzalez-Sanabria, O. D.
1984-01-01
Integration of an alkaline fuel cell subsystem with an alkaline electrolysis subsystem to form a regenerative fuel cell (RFC) system for low earth orbit (LEO) applications, characterized by relatively high overall round-trip electrical efficiency, long life, and high reliability, is possible with present state-of-the-art technology. A hypothetical 10 kW system was computer modeled and studied based on data from ongoing contractual efforts in both the alkaline fuel cell and alkaline water electrolysis areas. The alkaline fuel cell technology is under development utilizing advanced cell components and standard Shuttle Orbiter system hardware. The alkaline electrolysis technology uses a static water vapor feed technique, and scaled-up cell hardware has been developed. The computer-aided study of the performance, operating, and design parameters of the hypothetical system is addressed.
Issues to be resolved in Torrents—Future Revolutionised File Sharing
NASA Astrophysics Data System (ADS)
Thanekar, Sachin Arun
2010-11-01
Torrenting is a highly popular peer-to-peer file sharing activity that allows participants to send and receive files from other computers. Although it is advantageous compared to traditional client-server file sharing in terms of time, cost, and speed, it also has some drawbacks. Content unavailability, lack of anonymity, leechers, cheaters, and inconsistent download speed are the major problems to sort out. Efforts are needed to resolve these problems and make this a better application. Legal issues are also a major factor for consideration. BitTorrent metafiles themselves do not store copyrighted data. Whether the publishers of BitTorrent metafiles violate copyrights by linking to copyrighted material is controversial. Various countries have taken legal action against websites that host BitTorrent trackers, e.g., Supernova.org and Torrentspy. Efforts are also needed to make such a useful protocol legal.
Higher Order Chemistry Models in the CFD Simulation of Laser-Ablated Carbon Plumes
NASA Technical Reports Server (NTRS)
Greendyke, R. B.; Creel, J. R.; Payne, B. T.; Scott, C. D.
2005-01-01
Production of single-walled carbon nanotubes (SWNT) has taken place for a number of years and by a variety of methods, such as laser ablation, chemical vapor deposition, and arc-jet ablation. Yet, little is actually understood about the exact chemical kinetics and processes that occur in SWNT formation. In recent years, NASA Johnson Space Center has devoted considerable effort to the experimental evaluation of the laser ablation production process for SWNT originally developed at Rice University. To fully understand the nature of the laser ablation process, it is necessary to understand the development of the carbon plume dynamics within the laser ablation oven. The present work is a continuation of previous efforts to model plume dynamics using computational fluid dynamics (CFD). The ultimate goal of the work is to improve understanding of the laser ablation process and, through that improved understanding, refine the laser ablation production of SWNT.
Polarity control in WSe2 double-gate transistors
NASA Astrophysics Data System (ADS)
Resta, Giovanni V.; Sutar, Surajit; Balaji, Yashwanth; Lin, Dennis; Raghavan, Praveen; Radu, Iuliana; Catthoor, Francky; Thean, Aaron; Gaillardon, Pierre-Emmanuel; de Micheli, Giovanni
2016-07-01
As scaling of conventional silicon-based electronics is reaching its ultimate limit, considerable effort has been devoted to finding new materials and new device concepts that could ultimately outperform standard silicon transistors. In this perspective, two-dimensional transition metal dichalcogenides, such as MoS2 and WSe2, have recently attracted considerable interest thanks to their electrical properties. Here, we report the first experimental demonstration of a doping-free, polarity-controllable device fabricated on few-layer WSe2. We show how modulation of the Schottky barriers at drain and source by a separate gate, named the program gate, can enable the selection of the carriers injected into the channel, achieving controllable polarity behaviour with ON/OFF current ratios >10^6 for both electron and hole conduction. Polarity-controlled WSe2 transistors enable the design of compact logic gates, leading to higher computational densities in 2D-flatronics.
NASA Astrophysics Data System (ADS)
Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim
2015-11-01
Over the last decade, interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industries demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing the necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs, and limitations to the full implementation of ICME frameworks for HTSMA development, are presented.
Minkara, Mona S; Weaver, Michael N; Gorske, Jim; Bowers, Clifford R; Merz, Kenneth M
2015-08-11
Blind and low-vision students are sparsely represented in science, technology, engineering, and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees, as well as to a lack of appropriate adaptive resources in upper-level STEM courses and research. Mona Minkara is a rising fifth-year graduate student in computational chemistry at the University of Florida. She is also blind. This account presents efforts conducted by an expansive team of university and student personnel, in conjunction with Mona, to adapt different portions of the graduate student curriculum to meet Mona's needs. The most important consideration is prior preparation of materials to assist with coursework and cumulative exams. Herein we present an account of the first four years of Mona's graduate experience, hoping that this will assist in the development of protocols for future blind and low-vision graduate students in computational chemistry.
Scale-Resolving simulations (SRS): How much resolution do we really need?
NASA Astrophysics Data System (ADS)
Pereira, Filipe M. S.; Girimaji, Sharath
2017-11-01
Scale-resolving simulations (SRS) are emerging as the computational approach of choice for many engineering flows with coherent structures. The SRS methods seek to resolve only the most important features of the coherent structures and model the remainder of the flow field with canonical closures. With reference to a typical Large-Eddy Simulation (LES), practical SRS methods aim to resolve a considerably narrower range of scales (reduced physical resolution) to achieve an adequate degree of accuracy at reasonable computational effort. While the objective of SRS is well-founded, the criteria for establishing the optimal degree of resolution required to achieve an acceptable level of accuracy are not clear. This study considers the canonical case of the flow around a circular cylinder to address the issue of `optimal' resolution. Two important criteria are developed. The first condition addresses the issue of adequate resolution of the flow field. The second guideline provides an assessment of whether the modeled field is canonical (stochastic) turbulence amenable to closure-based computations.
Code of Federal Regulations, 2014 CFR
2014-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... determined necessary for Year 2000 computer conversion efforts. 630.310 Section 630.310 Administrative... Scheduling of annual leave by employees determined necessary for Year 2000 computer conversion efforts. (a) Year 2000 computer conversion efforts are deemed to be an exigency of the public business for the...
Virtual Acoustics: Evaluation of Psychoacoustic Parameters
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
Current virtual acoustic displays for teleconferencing and virtual reality are usually limited to very simple or non-existent renderings of reverberation, a fundamental part of the acoustic environmental context encountered in day-to-day hearing. Several research efforts have produced results suggesting that environmental cues dramatically improve perceptual performance within virtual acoustic displays, and that it is possible to manipulate signal-processing parameters to effectively reproduce important aspects of virtual acoustic perception in real time. However, the computational resources required for rendering reverberation remain formidable. Our efforts at NASA Ames have focused on using several perceptual threshold metrics to determine how various "trade-offs" might be made in real-time acoustic rendering. This includes both original work and confirmation of existing data that were obtained in real rather than virtual environments. The talk will consider the importance of using individualized versus generalized pinnae cues (the "Head-Related Transfer Function"); the use of head-movement cues; threshold data for early reflections and late reverberation; and the accuracy necessary for measuring and rendering octave-band absorption characteristics of various wall surfaces. In addition, the analysis-synthesis of reverberation within "everyday spaces" (offices, conference rooms) will be contrasted with the commonly used paradigm of concert-hall spaces.
An eco-hydrologic model of malaria outbreaks
NASA Astrophysics Data System (ADS)
Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.
2012-03-01
Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission and their consideration alongside climatic datasets. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear eco-hydrologic model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.
Initial Computations of Vertical Displacement Events with NIMROD
NASA Astrophysics Data System (ADS)
Bunkers, Kyle; Sovinec, C. R.
2014-10-01
Disruptions associated with vertical displacement events (VDEs) have the potential to cause considerable physical damage to ITER and other tokamak experiments. We report on initial computations of generic axisymmetric VDEs using the NIMROD code [Sovinec et al., JCP 195, 355 (2004)]. An implicit thin-wall computation has been implemented to couple separate internal and external regions without numerical stability limitations. A simple rectangular cross-section domain, generated with the NIMEQ code [Howell and Sovinec, CPC (2014)] modified to use a symmetry condition at the midplane, is used to test linear and nonlinear axisymmetric VDE computation. As the current in simulated external coils is varied for large-R/a cases, there is a clear n = 0 stability threshold, which lies below the decay-index criterion of the current-loop model of a tokamak for VDEs [Mukhovatov and Shafranov, Nucl. Fusion 11, 605 (1971)]; a scan of wall distance indicates the offset is due to the influence of the conducting wall. Results with a vacuum region surrounding a resistive wall will also be presented. Initial nonlinear computations show large vertical displacement of an intact simulated tokamak. This effort is supported by U.S. Department of Energy Grant DE-FG02-06ER54850.
Coupled transverse and torsional vibrations in a mechanical system with two identical beams
NASA Astrophysics Data System (ADS)
Vlase, S.; Marin, M.; Scutaru, M. L.; Munteanu, R.
2017-06-01
The paper aims to study a plane system of bars with certain symmetries. Such problems are encountered frequently in industry and civil engineering. Considerations related to the economy of the design process, constructive simplicity, cost and logistics make the use of identical parts a frequent practice. The paper aims to determine the properties of the eigenvalues and eigenmodes for transverse and torsional vibrations of a mechanical system in which two of the three component bars are identical. Exploiting these properties reduces the calculation effort and the computation time, and thus increases the accuracy of the results in such problems.
NASA Technical Reports Server (NTRS)
Buchanan, H. J.
1983-01-01
Work performed in the Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach that supports a modular, or building-block, approach to the buildup of space platforms are discussed. A concept has been developed and tested in a three-axis computer simulation utilizing a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and subsequent application. Consideration is also given to specifications for evaluating several algorithms for controlling the shape of Large Space Structures.
Minimizing Dispersion in FDTD Methods with CFL Limit Extension
NASA Astrophysics Data System (ADS)
Sun, Chen
The CFL extension in FDTD methods is receiving considerable attention as a way to reduce computational effort and save simulation time. One of the major issues with CFL extension methods is the increased dispersion. We formulate a decomposition of the FDTD equations to study the behaviour of the dispersion. A compensation scheme to reduce the dispersion under CFL extension is proposed. We further study the CFL extension in an FDTD subgridding case, where we improve the accuracy by acting only on the FDTD equations of the fine grid. Numerical results confirm the efficiency of the proposed method for minimising dispersion.
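For orientation only (this sketch is mine, not from the abstract): the conventional Courant (CFL) stability limit that such extension schemes aim to exceed can be computed for the standard 3-D Yee grid as follows, assuming a uniform, non-dispersive medium.

```python
import math

# Speed of light in vacuum (m/s); use the medium's phase velocity otherwise.
C0 = 299792458.0

def cfl_limit(dx, dy, dz, c=C0):
    """Largest stable time step for the standard 3-D Yee FDTD scheme."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# Example: 1 mm uniform grid in vacuum -> dt_max is about 1.93 ps.
dt_max = cfl_limit(1e-3, 1e-3, 1e-3)
```

Any scheme that steps beyond this `dt_max` must pay for the gain elsewhere, which is exactly the increased numerical dispersion the abstract refers to.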
Architecture independent environment for developing engineering software on MIMD computers
NASA Technical Reports Server (NTRS)
Valimohamed, Karim A.; Lopez, L. A.
1990-01-01
Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.
Poor Man's Virtual Camera: Real-Time Simultaneous Matting and Camera Pose Estimation.
Szentandrasi, Istvan; Dubska, Marketa; Zacharias, Michal; Herout, Adam
2016-03-18
Today's film and advertisement production makes heavy use of computer graphics combined with live actors by chroma keying. The matchmoving process typically requires considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual check-up and correction. In this article, we propose an instant matchmoving solution for green screen, using a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique based on marker fields of shades of green is computationally very efficient: we developed, and present in the article, a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone, at low cost and with easy setup, opening space for new levels of filmmakers' creative expression.
Analytical solutions for coagulation and condensation kinetics of composite particles
NASA Astrophysics Data System (ADS)
Piskunov, Vladimir N.
2013-04-01
The processes of composite particle formation, in which particles consist of a mixture of different materials, are essential for many practical problems: for analysis of the consequences of accidental releases into the atmosphere; for simulation of precipitation formation in clouds; and for description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for numerical simulation of these processes require optimization of computational methods and verification of numerical programs. The kinetic equations of composite particle formation are given in this work in a concise (impurity-integrated) form. Coagulation, condensation and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases, and general laws for the fraction redistribution of impurities were defined. The results can be applied to develop numerical algorithms that considerably reduce the simulation effort, as well as to verify numerical programs for calculating the formation kinetics of composite particles in problems of practical importance.
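The abstract cites the kinetic equations only in summarized form. For orientation (this is not reproduced from the paper), the classical single-component limit of such coagulation kinetics is the Smoluchowski equation, which the impurity-integrated composite-particle equations generalize:

```latex
\frac{\partial n(v,t)}{\partial t}
  = \frac{1}{2}\int_0^{v} K(v-u,\,u)\, n(v-u,t)\, n(u,t)\,\mathrm{d}u
  \;-\; n(v,t)\int_0^{\infty} K(v,u)\, n(u,t)\,\mathrm{d}u ,
```

where n(v,t) is the number density of particles of volume v and K is the coagulation kernel; the first term counts gains from mergers producing volume v, the second the losses of volume-v particles to further coagulation.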
An Excel sheet for inferring children's number-knower levels from give-N data.
Negen, James; Sarnecka, Barbara W; Lee, Michael D
2012-03-01
Number-knower levels are a series of stages of number concept development in early childhood. A child's number-knower level is typically assessed using the give-N task. Although the task procedure has been highly refined, the standard ways of analyzing give-N data remain somewhat crude. Lee and Sarnecka (Cogn Sci 34:51-67, 2010, in press) have developed a Bayesian model of children's performance on the give-N task that allows knower level to be inferred in a more principled way. However, this model requires considerable expertise and computational effort to implement and apply to data. Here, we present an approximation to the model's inference that can be computed with Microsoft Excel. We demonstrate the accuracy of the approximation and provide instructions for its use. This makes the powerful inferential capabilities of the Bayesian model accessible to developmental researchers interested in estimating knower levels from give-N data.
NASA Technical Reports Server (NTRS)
Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.
2005-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure-scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented, along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented, along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
NASA Technical Reports Server (NTRS)
Martin, Michael A.; Nguyen, Huy H.; Greene, William D.; Seymour, David C.
2003-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure-scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented, along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented, along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
Garde, Sebastian; Hovenga, Evelyn; Buck, Jasmin; Knaup, Petra
2006-01-01
Ubiquitous computing requires ubiquitous access to information and knowledge. With the release of openEHR Version 1.0 there is a common model available to solve some of the problems related to accessing information and knowledge by improving semantic interoperability between clinical systems. Considerable work has been undertaken by various bodies to standardise Clinical Data Sets. Notwithstanding their value, several problems remain unsolved with Clinical Data Sets in the absence of a common model underpinning them. This paper outlines these problems, such as incompatible basic data types and overlapping, incompatible definitions of clinical content. A solution based on openEHR archetypes is motivated, and an approach to transform existing Clinical Data Sets into archetypes is presented. To avoid significant overlaps and unnecessary effort, archetype development needs to be coordinated nationwide and beyond, and also across the various health professions, in a formalized process.
Los Alamos radiation transport code system on desktop computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines.
Simple chained guide trees give high-quality protein multiple sequence alignments
Boyce, Kieran; Sievers, Fabian; Higgins, Desmond G.
2014-01-01
Guide trees are used to decide the order of sequence alignment in the progressive multiple sequence alignment heuristic. These guide trees are often the limiting factor in making large alignments, and considerable effort has been expended over the years on constructing them quickly or accurately. In this article we show that, at least for protein families with large numbers of sequences that can be benchmarked with known structures, simple chained guide trees give the most accurate alignments. These also happen to be the fastest and simplest guide trees to construct computationally. Such guide trees have a striking effect on the accuracy of alignments produced by some of the most widely used alignment packages. There is a marked increase in accuracy and a marked decrease in computational time once the number of sequences goes much above a few hundred. This is true even if the order of sequences in the guide tree is random. PMID:25002495
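As a hedged illustration (my own sketch, not code from the article): a fully chained, or "caterpillar", guide tree simply adds one sequence at a time to the growing alignment, which in Newick notation looks like this:

```python
def chained_guide_tree(names):
    """Return a fully chained (caterpillar) guide tree in Newick format:
    each sequence is joined to the growing profile one at a time, in order."""
    if not names:
        raise ValueError("need at least one sequence name")
    if len(names) == 1:
        return names[0] + ";"
    tree = "({},{})".format(names[0], names[1])
    for name in names[2:]:
        tree = "({},{})".format(tree, name)
    return tree + ";"

# Four sequences in input order:
print(chained_guide_tree(["s1", "s2", "s3", "s4"]))  # -> (((s1,s2),s3),s4);
```

The article's finding is that even a random input order in such a chain yields accurate alignments for large protein families, while avoiding the cost of distance-based tree construction.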
Theoretical research program to study chemical reactions in AOTV bow shock tubes
NASA Technical Reports Server (NTRS)
Taylor, Peter R.
1993-01-01
The main focus was the development, implementation, and calibration of methods for performing molecular electronic structure calculations to high accuracy. These methods were then applied to a number of chemical reactions and species of interest to NASA, notably in the area of combustion chemistry. Among the development work undertaken was a collaborative effort to develop a program that efficiently predicts molecular structures and vibrational frequencies using energy derivatives. Another major development effort involved the design of new atomic basis sets for use in chemical studies; these sets were considerably more accurate than those previously in use. Much effort was also devoted to calibrating methods for computing accurate molecular wave functions, including the first reliable calibrations for realistic molecules using full CI results. A wide variety of application calculations were undertaken. One area of interest was the spectroscopy and thermochemistry of small molecules, including establishing small-molecule binding energies to an accuracy rivaling, and on occasion surpassing, that of experiment. Such binding energies are essential input to modeling chemical reaction processes, such as combustion. Studies of large molecules and processes important in both hydrogen and hydrocarbon combustion chemistry were also carried out. Finally, some effort was devoted to the structure and spectroscopy of small metal clusters, with applications to materials science problems.
Geospatial considerations for a multiorganizational, landscape-scale program
O'Donnell, Michael S.; Assal, Timothy J.; Anderson, Patrick J.; Bowen, Zachary H.
2013-01-01
Geospatial data play an increasingly important role in natural resources management, conservation, and science-based projects. The management and effective use of spatial data becomes significantly more complex when the efforts involve a myriad of landscape-scale projects combined with a multiorganizational collaboration. There is sparse literature to guide users on this daunting subject; therefore, we present a framework of considerations for working with geospatial data that will provide direction to data stewards, scientists, collaborators, and managers for developing geospatial management plans. The concepts we present apply to a variety of geospatial programs or projects, which we describe as a “scalable framework” of processes for integrating geospatial efforts with management, science, and conservation initiatives. Our framework includes five tenets of geospatial data management: (1) the importance of investing in data management and standardization, (2) the scalability of content/efforts addressed in geospatial management plans, (3) the lifecycle of a geospatial effort, (4) a framework for the integration of geographic information systems (GIS) in a landscape-scale conservation or management program, and (5) the major geospatial considerations prior to data acquisition. We conclude with a discussion of future considerations and challenges.
Bruno Garza, J L; Eijckelhof, B H W; Johnson, P W; Raina, S M; Rynell, P W; Huysmans, M A; van Dieën, J H; van der Beek, A J; Blatter, B M; Dennerlein, J T
2012-01-01
This study, a part of the PRedicting Occupational biomechanics in OFfice workers (PROOF) study, investigated whether there are differences in field-measured forces, muscle efforts, postures, velocities and accelerations across computer activities. These parameters were measured continuously for 120 office workers performing their own work for two hours each. There were differences in nearly all forces, muscle efforts, postures, velocities and accelerations across keyboard, mouse and idle activities. Keyboard activities showed a 50% increase in the median right trapezius muscle effort when compared to mouse activities. Median shoulder rotation changed from 25 degrees internal rotation during keyboard use to 15 degrees external rotation during mouse use. Only keyboard use was associated with median ulnar deviations greater than 5 degrees. Idle activities led to the greatest variability observed in all muscle efforts and postures measured. In future studies, measurements of computer activities could be used to provide information on the physical exposures experienced during computer use. Practitioner Summary: Computer users may develop musculoskeletal disorders due to their force, muscle effort, posture and wrist velocity and acceleration exposures during computer use. We report that many physical exposures are different across computer activities. This information may be used to estimate physical exposures based on patterns of computer activities over time.
New Perspectives on Neuroengineering and Neurotechnologies: NSF-DFG Workshop Report.
Moritz, Chet T; Ruther, Patrick; Goering, Sara; Stett, Alfred; Ball, Tonio; Burgard, Wolfram; Chudler, Eric H; Rao, Rajesh P N
2016-07-01
To identify and overcome barriers to creating new neurotechnologies capable of restoring both motor and sensory function in individuals with neurological conditions. This report builds upon the outcomes of a joint workshop between the US National Science Foundation and the German Research Foundation on New Perspectives in Neuroengineering and Neurotechnology convened in Arlington, VA, USA, November 13-14, 2014. The participants identified key technological challenges for recording and manipulating neural activity, decoding, and interpreting brain data in the presence of plasticity, and early considerations of ethical and social issues pertinent to the adoption of neurotechnologies. The envisaged progress in neuroengineering requires tightly integrated hardware and signal processing efforts, advances in understanding of physiological adaptations to closed-loop interactions with neural devices, and an open dialog with stakeholders and potential end-users of neurotechnology. The development of new neurotechnologies (e.g., bidirectional brain-computer interfaces) could significantly improve the quality of life of people living with the effects of brain or spinal cord injury, or other neurodegenerative diseases. Focused efforts aimed at overcoming the remaining barriers at the electrode tissue interface, developing implantable hardware with on-board computation, and refining stimulation methods to precisely activate neural tissue will advance both our understanding of brain function and our ability to treat currently intractable disorders of the nervous system.
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the Signal to Noise Ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC algorithm proved to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
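The FPC algorithm itself is not given in the abstract. As a minimal sketch of the baseline it is compared against, cross-correlation demodulation estimates the spectral shift by locating the peak of the cross-correlation between the measured and reference spectra (the function name and the synthetic Gaussian peak below are my own assumptions, not from the paper):

```python
import numpy as np

def xcorr_shift(ref, meas):
    """Estimate the integer-sample shift of `meas` relative to `ref`
    from the peak of their cross-correlation."""
    ref = ref - ref.mean()
    meas = meas - meas.mean()
    corr = np.correlate(meas, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Synthetic Gaussian "Bragg peak" shifted by 5 samples.
x = np.arange(200)
ref = np.exp(-((x - 100.0) / 8.0) ** 2)
meas = np.exp(-((x - 105.0) / 8.0) ** 2)
shift = xcorr_shift(ref, meas)  # -> 5
```

Sub-sample resolution would additionally require interpolating around the correlation peak, which is part of the computational cost that a phase-domain approach such as FPC aims to avoid.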
Advanced Simulation and Computing: A Summary Report to the Director's Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, M G; Peck, T
2003-06-01
It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress on all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called ''Advanced Simulation and Computing.'' Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management. Such is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures to more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and identified documentation expected to be included in the ''Assessment File''.
The potential of crowdsourcing and mobile technology to support flood disaster risk reduction
NASA Astrophysics Data System (ADS)
See, Linda; McCallum, Ian; Liu, Wei; Mechler, Reinhard; Keating, Adriana; Hochrainer-Stigler, Stefan; Mochizuki, Junko; Fritz, Steffen; Dugar, Sumit; Arestegui, Michael; Szoenyi, Michael; Laso-Bayas, Juan-Carlos; Burek, Peter; French, Adam; Moorthy, Inian
2016-04-01
The last decade has seen a rise in citizen science and crowdsourcing for carrying out a variety of tasks across a number of different fields, most notably the collection of data such as the identification of species (e.g. eBird and iNaturalist) and the classification of images (e.g. Galaxy Zoo and Geo-Wiki). Combining human computing with the proliferation of mobile technology has resulted in vast amounts of geo-located data that have considerable value across multiple domains, including flood disaster risk reduction. Crowdsourcing technologies, in the form of online mapping, are now being utilized to great effect in post-disaster mapping and relief efforts, e.g. the activities of Humanitarian OpenStreetMap, complementing official channels of relief (e.g. in Haiti, Nepal and New York). Disaster-event monitoring efforts have been further complemented by the use of social media (e.g. Twitter for earthquakes, flood monitoring, and fire detection). Much of the activity in this area has focused on ex-post emergency management, while there is considerable potential for utilizing crowdsourcing and mobile technology for vulnerability assessment, early warning, and bolstering resilience to flood events. This paper examines the use of crowdsourcing and mobile technology for measuring and monitoring flood hazards, exposure to floods, and vulnerability, drawing upon examples from the literature and ongoing projects on flooding and food security at IIASA.
Design considerations for computationally constrained two-way real-time video communication
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar M.; Saunders, Steven E.; Ralston, John D.
2009-08-01
Today's video codecs have evolved primarily to meet the requirements of the motion picture and broadcast industries, where high-complexity studio encoding can be utilized to create highly-compressed master copies that are then broadcast one-way for playback using less-expensive, lower-complexity consumer devices for decoding and playback. Related standards activities have largely ignored the computational complexity and bandwidth constraints of wireless or Internet based real-time video communications using devices such as cell phones or webcams. Telecommunications industry efforts to develop and standardize video codecs for applications such as video telephony and video conferencing have not yielded image size, quality, and frame-rate performance that match today's consumer expectations and market requirements for Internet and mobile video services. This paper reviews the constraints and the corresponding video codec requirements imposed by real-time, 2-way mobile video applications. Several promising elements of a new mobile video codec architecture are identified, and more comprehensive computational complexity metrics and video quality metrics are proposed in order to support the design, testing, and standardization of these new mobile video codecs.
Operational Use of GPS Navigation for Space Shuttle Entry
NASA Technical Reports Server (NTRS)
Goodman, John L.; Propst, Carolyn A.
2008-01-01
The STS-118 flight of the Space Shuttle Endeavour was the first shuttle mission flown with three Global Positioning System (GPS) receivers in place of the three legacy Tactical Air Navigation (TACAN) units. This marked the conclusion of a 15-year effort involving procurement, missionization, integration, and flight testing of a GPS receiver, along with a parallel effort to formulate and implement shuttle computer software changes to support GPS. The use of GPS data from a single receiver in parallel with TACAN during entry was successfully demonstrated by the orbiters Discovery and Atlantis during four shuttle missions in 2006 and 2007. This provided the confidence needed before flying the first all-GPS, no-TACAN flight with Endeavour. A significant number of lessons were learned concerning the integration of a software-intensive navigation unit into a legacy avionics system. These lessons have been taken into consideration during vehicle design by other flight programs, including the vehicle that will replace the Space Shuttle, Orion.
Greenwood, Taylor J; Lopez-Costa, Rodrigo I; Rhoades, Patrick D; Ramírez-Giraldo, Juan C; Starr, Matthew; Street, Mandie; Duncan, James; McKinstry, Robert C
2015-01-01
The marked increase in radiation exposure from medical imaging, especially in children, has caused considerable alarm and spurred efforts to preserve the benefits but reduce the risks of imaging. Applying the principles of the Image Gently campaign, data-driven process and quality improvement techniques such as process mapping and flowcharting, cause-and-effect diagrams, Pareto analysis, statistical process control (control charts), failure mode and effects analysis, "lean" or Six Sigma methodology, and closed feedback loops led to a multiyear program that has reduced overall computed tomographic (CT) examination volume by more than fourfold and concurrently decreased radiation exposure per CT study without compromising diagnostic utility. This systematic approach involving education, streamlining access to magnetic resonance imaging and ultrasonography, auditing with comparison with benchmarks, applying modern CT technology, and revising CT protocols has led to a more than twofold reduction in CT radiation exposure between 2005 and 2012 for patients at the authors' institution while maintaining diagnostic utility. (©)RSNA, 2015.
Quantitative computational models of molecular self-assembly in systems biology
Thomas, Marcus; Schwartz, Russell
2017-01-01
Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally. PMID:28535149
Technical and investigative support for high density digital satellite recording systems
NASA Technical Reports Server (NTRS)
Schultz, R. A.
1983-01-01
Recent results of dropout measurements and defect analysis conducted on one reel of Ampex 721, which was submitted for evaluation by the manufacturer, are described. The results and status of other tape evaluation activities are also reviewed. Several changes in test interpretations and applications are recommended. In some cases, deficiencies in test methods or equipment became apparent during continued work on this project and other IITRI tape evaluation projects. Techniques and equipment for future tasks, such as tape qualification, are also recommended and discussed. Project effort and expenditures were kept at a relatively low level. This rate provided added development time and experience with the IITRI Dropout Measurement System, which is approaching its potential as a computer-based dropout analysis tool. Another benefit is the expanded data base on critical parameters that can be achieved from tests on different tape types and lots as they become available. More consideration and effort were directed toward identification of critical parameters, development of meaningful, repeatable test procedures, and tape procurement strategy.
Phase change materials handbook
NASA Technical Reports Server (NTRS)
Hale, D. V.; Hoover, M. J.; Oneill, M. J.
1971-01-01
This handbook is intended to provide theory and data needed by the thermal design engineer to bridge the gap between research achievements and actual flight systems, within the limits of the current state of the art of phase change materials (PCM) technology. The relationship between PCM and more conventional thermal control techniques is described and numerous space and terrestrial applications of PCM are discussed. Material properties of the most promising PCMs are provided; the purposes and use of metallic filler materials in PCM composites are presented; and material compatibility considerations relevant to PCM design are included. The engineering considerations of PCM design are described, especially those pertaining to the thermodynamic and heat transfer phenomena peculiar to PCM design. Methods of obtaining data not currently available are presented. The special problems encountered in the space environment are described. Computational tools useful to the designer are discussed. In summary, each aspect of the PCM problem important to the design engineer is covered to the extent allowed by the scope of this effort and the state of the art.
Herpetological Monitoring Using a Pitfall Trapping Design in Southern California
Fisher, Robert; Stokes, Drew; Rochester, Carlton; Brehme, Cheryl; Hathaway, Stacie; Case, Ted
2008-01-01
The steps necessary to conduct a pitfall trapping survey for small terrestrial vertebrates are presented. Descriptions of the materials needed and the methods to build trapping equipment from raw materials are discussed. Recommended data collection techniques are given along with suggested data fields. Animal specimen processing procedures, including toe- and scale-clipping, are described for lizards, snakes, frogs, and salamanders. Methods are presented for conducting vegetation surveys that can be used to classify the environment associated with each pitfall trap array. Techniques for data storage and presentation are given based on commonly used computer applications. As with any study, much consideration should be given to the study design and methods before beginning any data collection effort.
Inverse finite-size scaling for high-dimensional significance analysis
NASA Astrophysics Data System (ADS)
Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki
2018-06-01
We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller-scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.
Digital autopilots: Design considerations and simulator evaluations
NASA Technical Reports Server (NTRS)
Osder, S.; Neuman, F.; Foster, J.
1971-01-01
The development of a digital autopilot program for a transport aircraft and the evaluation of that system's performance on a transport aircraft simulator are discussed. The digital autopilot includes three-axis attitude stabilization, automatic throttle control, and flight path guidance functions, with emphasis on the mode progression from descent into the terminal area through automatic landing. The study effort involved a sequence of tasks starting with the definition of detailed system block diagrams of control laws, followed by a flow charting and programming phase, and concluding with performance verification using the transport aircraft simulation. The autopilot control laws were programmed in FORTRAN IV in order to isolate the design process from requirements peculiar to an individual computer.
Considerations for Explosively Driven Conical Shock Tube Design: Computations and Experiments
2017-02-16
ARL-TR-7953 ● FEB 2017 ● US Army Research Laboratory. Considerations for Explosively Driven Conical Shock Tube Design: Computations and Experiments, by Joel B Stewart, Weapons and Materials Research Directorate. The findings in this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents.
Quantum Monte Carlo: Faster, More Reliable, And More Accurate
NASA Astrophysics Data System (ADS)
Anderson, Amos Gerald
2010-06-01
The Schrodinger Equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we are held back by a lack of sufficient computing power. Consequently, effort is applied to finding acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has been rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we have seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for quantity rather than quality in processing, graphics cards have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity in mind, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations.
Our combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards represents a new way of using Quantum Monte Carlo to study arbitrarily sized molecules.
ERIC Educational Resources Information Center
Berg, A. I.; And Others
Five articles which were selected from a Russian language book on cybernetics and then translated are presented here. They deal with the topics of: computer-developed computers, heuristics and modern sciences, linguistics and practice, cybernetics and moral-ethical considerations, and computer chess programs. (Author/JY)
Using Computational Toxicology to Enable Risk-Based ...
Slide presentation at Drug Safety Gordon Research Conference 2016 on research efforts in NCCT to enable Computational Toxicology to support risk assessment.
EPA’s National Center for Computational Toxicology is engaged in high-profile research efforts to improve the ability to more efficiently and effectively prioritize and screen thousands of environmental chemicals for potential toxicity. A central component of these efforts invol...
Modeling interfacial fracture in Sierra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang
2013-09-01
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
Profiling characteristics of internet medical information users.
Weaver, James B; Mays, Darren; Lindner, Gregg; Eroglu, Dogan; Fridinger, Frederick; Bernhardt, Jay M
2009-01-01
The Internet's potential to bolster health promotion and disease prevention efforts has attracted considerable attention. Existing research leaves two things unclear, however: the prevalence of online health and medical information seeking, and the distinguishing characteristics of individuals who seek that information. This study seeks to clarify and extend the knowledge base concerning health and medical information use online by profiling adults using Internet medical information (IMI). Secondary analysis of survey data from a large sample (n = 6,119) representative of the Atlanta, GA, area informed this investigation. Five survey questions were used to assess IMI use and general computer and Internet use during the 30 days before the survey was administered. Five questions were also used to assess respondents' health care system use. Several demographic characteristics were measured. Contrary to most prior research, this study found relatively low prevalence of IMI-seeking behavior. Specifically, IMI use was reported by 13.2% of all respondents (n = 6,119) and by 21.1% of respondents with Internet access (n = 3,829). Logistic regression models conducted among respondents accessing the Internet in the previous 30 days revealed that, when controlling for several sociodemographic characteristics, home computer ownership, online time per week, and health care system use are all positively linked with IMI-seeking behavior. The data suggest it may be premature to embrace the Internet unilaterally as an effective asset for health promotion and disease prevention efforts that target the public.
EMCORE - Emotional Cooperative Groupware
NASA Astrophysics Data System (ADS)
Fasoli, N.; Messina, A.
In recent years considerable effort has been spent developing groupware applications. Despite this, groupware applications have not met with general consensus in the computing field. An interdisciplinary approach could prove very useful in overcoming these difficulties. A workgroup is not simply a set of people gathered together, working for a common goal. It can also be thought of as a strong, hard mental reality. Actually, sociological and psychological definitions of a group differ considerably. At the sociological level, a group is generally described in view of the activities and events occurring inside the group itself. On the other hand, the psychological approach considers not only the actions occurring inside the group, but also all the mental activities originating from belonging to the group, be they of emotional or rational nature. Since the early 1960s, the simple work group (i.e., the discussion group) has been analyzed in its psychological behavior. EMCORE is a prototype which aims to support computer science methods with a psychological approach. The tool has been developed for a discussion group supported by heterogeneous distributed systems and has been implemented according to the CORBA abstraction, augmented by the machine-independent Java language. The tool allows all the common activities of a discussion group: discussion by voice, or by a chatting board if multimedia devices are not present, and discussion and elaboration of a shared document with a text and/or graphic editor. At the same time, tools are provided for the psychoanalytic approach, according to a specific methodology.
Cognitive Load and Listening Effort: Concepts and Age-Related Considerations.
Lemke, Ulrike; Besser, Jana
2016-01-01
Listening effort has been recognized as an important dimension of everyday listening, especially with regard to the comprehension of spoken language. At constant levels of comprehension performance, the level of effort exerted and perceived during listening can differ considerably across listeners and situations. In this article, listening effort is used as an umbrella term for two different types of effort that can arise during listening. One of these types is processing effort, which is used to denote the utilization of "extra" mental processing resources in listening conditions that are adverse for an individual. A conceptual description is introduced how processing effort could be defined in terms of situational influences, the listener's auditory and cognitive resources, and the listener's personal state. Also, the proposed relationship between processing effort and subjectively perceived listening effort is discussed. Notably, previous research has shown that the availability of mental resources, as well as the ability to use them efficiently, changes over the course of adult aging. These common age-related changes in cognitive abilities and their neurocognitive organization are discussed in the context of the presented concept, especially regarding situations in which listening effort may be increased for older people.
Alcohol and Drug Abuse Intervention and Prevention Program. Annual Report 1988-89.
ERIC Educational Resources Information Center
Rapaport, Ross J.
Institutions of higher learning are taking responsibility for and becoming part of the societal effort to combat alcohol/drug problems. There are a number of national and state efforts which specifically target higher education for prevention, education, intervention, treatment, and referral efforts. Considerable efforts are currently underway to…
Medical Information & Technology: Rapidly Expanding Vast Horizons
NASA Astrophysics Data System (ADS)
Sahni, Anil K.
2012-12-01
During the "Medical Council of India" Platinum Jubilee Year (1933-2008) celebrations in 2008, several scientific meetings, seminars, and symposia on various topics of contemporary importance and relevance in the field of medical education and ethics were organized by different medical colleges at local, state, and national levels. The present discussion is a comprehensive summary of various aspects of "Medical Information Communication Technology", especially useful for an audience of amateur medical and paramedical staff with no previous working knowledge of computer applications. It outlines: i. administrative applications (medical records, etc.); ii. clinical applications (the prospective scope of telemedicine, etc.); iii. other applications (efforts to improve medical education, medical presentations, and research, etc.). Medical transcription and related fields of study (e.g., modern pharmaceuticals, bio-engineering, bio-mechanics, bio-technology) are also covered, along with important general considerations for computers and computer ergonomics, assembled to summarize awareness of the basic fundamentals of medical computing and its practical utility.
Real time target allocation in cooperative unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Kudleppanavar, Ganesh
The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest-neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and XBee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
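The abstract above describes an energy-aware, nearest-neighbor allocation step. A minimal sketch of that idea follows; the field names, the greedy loop, and the cost model (one energy unit per distance unit) are illustrative assumptions, not the thesis's actual algorithm:

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def allocate(uavs, targets):
    """Greedily assign each new target to the nearest UAV that still has
    enough stored energy to reach it (hypothetical cost model: one energy
    unit per unit of distance traveled)."""
    assignment = {}
    for t in targets:
        # Rank vehicles by distance to the target (nearest-neighbor search).
        candidates = sorted(uavs, key=lambda u: dist(u["pos"], t))
        for u in candidates:
            cost = dist(u["pos"], t)
            if u["energy"] >= cost:          # resource constraint
                u["energy"] -= cost
                u["pos"] = t                 # vehicle moves on to the target
                assignment[t] = u["id"]
                break
        else:
            assignment[t] = None             # no vehicle can afford this target
    return assignment

uavs = [{"id": "A", "pos": (0.0, 0.0), "energy": 10.0},
        {"id": "B", "pos": (8.0, 0.0), "energy": 3.0}]
targets = [(1.0, 0.0), (7.0, 0.0), (20.0, 0.0)]
result = allocate(uavs, targets)
print(result)
```

Because each vehicle only needs its own position and energy to evaluate a candidate target, a step like this keeps inter-vehicle communication low, which is the property the abstract emphasizes.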
Nanoscale RRAM-based synaptic electronics: toward a neuromorphic computing device.
Park, Sangsu; Noh, Jinwoo; Choo, Myung-Lae; Sheri, Ahmad Muqeem; Chang, Man; Kim, Young-Bae; Kim, Chang Jung; Jeon, Moongu; Lee, Byung-Geun; Lee, Byoung Hun; Hwang, Hyunsang
2013-09-27
Efforts to develop scalable learning algorithms for implementation of networks of spiking neurons in silicon have been hindered by the considerable footprints of learning circuits, which grow as the number of synapses increases. Recent developments in nanotechnologies provide an extremely compact device with low power consumption. In particular, nanoscale resistive switching devices (resistive random-access memory (RRAM)) are regarded as a promising solution for the implementation of biological synapses due to their nanoscale dimensions, capacity to store multiple bits, and the low energy required to operate distinct states. In this paper, we report the fabrication, modeling and implementation of nanoscale RRAM with multi-level storage capability for an electronic synapse device. In addition, we experimentally demonstrate for the first time the learning capabilities and predictable performance of a neuromorphic circuit composed of a nanoscale 1 kbit RRAM cross-point array of synapses and complementary metal-oxide-semiconductor neuron circuits. These developments open up possibilities for the development of ubiquitous ultra-dense, ultra-low-power cognitive computers.
3D Reconnection and SEP Considerations in the CME-Flare Problem
NASA Astrophysics Data System (ADS)
Moschou, S. P.; Cohen, O.; Drake, J. J.; Sokolov, I.; Borovikov, D.; Alvarado Gomez, J. D.; Garraffo, C.
2017-12-01
Reconnection is known to play a major role in particle acceleration in both solar and astrophysical regimes, yet little is known about its connection with the global scales and its comparative contribution to the generation of SEPs, relative to other acceleration mechanisms such as the shock at a fast CME front, in the presence of a global structure such as a CME. Coupling efforts, combining both particle and global scales, are necessary to answer questions about the fundamentals of the energetic processes involved. We present such a coupled modeling effort that looks into particle acceleration through reconnection in a self-consistent CME-flare model in both particle and fluid regimes. Of special interest is the supra-thermal component of the acceleration due to reconnection, which will later collide with the denser chromospheric layer of the solar atmosphere and radiate in hard X-rays and γ-rays for super-thermal electrons and protons, respectively. Two cutting-edge computational codes are used to capture the global CME and flare dynamics: a two-fluid MHD code and a 3D PIC code for the flare scales. Finally, we connect the simulations with current observations at different wavelengths in an effort to shed light on the unified CME-flare picture.
Geldermann, Ina; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M; Spreckelsen, Cord
2013-08-01
Usability aspects of different integration concepts for picture archiving and communication systems (PACS) and computer-aided diagnosis (CAD) were inquired into using the example of BoneXpert, a program determining the skeletal age from a left hand's radiograph. CAD-PACS integration was assessed according to its levels: data, function, presentation, and context integration, focusing on usability aspects. A user-based study design was selected. Statements of seven experienced radiologists using two alternative types of integration provided by BoneXpert were acquired and analyzed using a mixed-methods approach based on think-aloud records and a questionnaire. In both variants, the CAD module (BoneXpert) was easily integrated in the workflow, found comprehensible, and fit the conceptual framework of the radiologists. Weak points of the software integration concerned data and context integration. Surprisingly, visualization of intermediate image processing states (presentation integration) was found to be less important than efficient handling and fast computation. Seamlessly integrating CAD into the PACS without additional work steps or unnecessary interrupts, and without visualizing intermediate images, may considerably improve software performance and user acceptance while saving time and effort.
Shape optimization techniques for musical instrument design
NASA Astrophysics Data System (ADS)
Henrique, Luis; Antunes, Jose; Carvalho, Joao S.
2002-11-01
The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of search variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
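The parameterization idea described above can be sketched as follows. The Fourier-amplitude search variables and the annealing loop mirror the approach in the abstract, but the frequency model is a deliberately crude surrogate (a beam-like scaling law) standing in for a real eigenvalue solver, and all numbers are illustrative assumptions:

```python
import math
import random

random.seed(0)
NX, NC = 64, 4                      # profile samples, Fourier coefficients

def profile(coeffs):
    # Thickness h(x) = 1 + sum_k c_k * cos((k+1)*pi*x), clamped positive.
    return [max(0.1, 1.0 + sum(c * math.cos((k + 1) * math.pi * i / NX)
                               for k, c in enumerate(coeffs)))
            for i in range(NX)]

def frequencies(coeffs, nmodes=3):
    # Toy surrogate: mode n scales like n^2 * sqrt(stiffness/mass),
    # as for a uniform beam; NOT a real modal analysis.
    h = profile(coeffs)
    stiffness = sum(t ** 3 for t in h) / NX
    mass = sum(h) / NX
    base = math.sqrt(stiffness / mass)
    return [base * (n + 1) ** 2 for n in range(nmodes)]

def error(coeffs, target):
    # Squared error between computed and target modal frequencies.
    return sum((f - t) ** 2 for f, t in zip(frequencies(coeffs), target))

target = [1.1, 4.4, 9.9]            # desired modal frequencies (arbitrary)
coeffs = [0.0] * NC
current = error(coeffs, target)
temp = 1.0
for step in range(5000):            # simulated annealing on the NC amplitudes
    trial = [c + random.gauss(0, 0.05) for c in coeffs]
    e = error(trial, target)
    # Accept improvements always, worse moves with Boltzmann probability.
    if e < current or random.random() < math.exp((current - e) / temp):
        coeffs, current = trial, e
    temp *= 0.999                   # geometric cooling schedule
print(current)                       # residual error after annealing
```

The point of the technique is visible in the loop: the search runs over only NC amplitude coefficients rather than the NX local thickness values, which is what reduces the number of function evaluations needed.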
NASA Astrophysics Data System (ADS)
Gerjuoy, Edward
2005-06-01
The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
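The RSA protocol and the factoring attack the abstract discusses can be made concrete with a toy example. The sketch below uses deliberately tiny primes (real RSA moduli have hundreds of digits, which is what makes classical factoring infeasible); it shows key generation, encryption, and how factoring N immediately yields the private key, i.e. the step Shor's algorithm accelerates on a quantum computer:

```python
def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    # Modular inverse of a mod m (requires gcd(a, m) == 1).
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Key generation with tiny primes, for illustration only.
p, q = 61, 53
N = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient, kept secret
e = 17                         # public exponent
d = modinv(e, phi)             # private exponent

# Encrypt and decrypt a message m < N.
m = 1234
c = pow(m, e, N)               # ciphertext
assert pow(c, d, N) == m       # decryption recovers m

def trial_factor(n):
    # Classical trial division: exponential in the bit length of n.
    # This cost is the "enormous computational effort" Shor removes.
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# An attacker who can factor N recovers the private key immediately.
pf, qf = trial_factor(N)
d_attack = modinv(e, (pf - 1) * (qf - 1))
assert pow(c, d_attack, N) == m
```

The last three lines are the whole point: once N is factored, reconstructing the private exponent is a cheap modular-inverse computation, so RSA's security reduces entirely to the hardness of factoring.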
32 CFR 169a.17 - Solicitation considerations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 1 2010-07-01 2010-07-01 false Solicitation considerations. 169a.17 Section 169a.17 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Procedures § 169a.17 Solicitation considerations. (a) Every effort...
32 CFR 169a.17 - Solicitation considerations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 1 2013-07-01 2013-07-01 false Solicitation considerations. 169a.17 Section 169a.17 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Procedures § 169a.17 Solicitation considerations. (a) Every effort...
32 CFR 169a.17 - Solicitation considerations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 1 2011-07-01 2011-07-01 false Solicitation considerations. 169a.17 Section 169a.17 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE DEFENSE CONTRACTING COMMERCIAL ACTIVITIES PROGRAM PROCEDURES Procedures § 169a.17 Solicitation considerations. (a) Every effort...
A Systematic Approach for Obtaining Performance on Matrix-Like Operations
NASA Astrophysics Data System (ADS)
Veras, Richard Michael
Scientific computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, scientific computing has helped drive advances in many fields, ranging from engineering and physics to biology, sociology, economics, and drug development, and even to machine learning and artificial intelligence. Common among these domains is the desire for timely computational results, so a considerable amount of human expert effort is spent on obtaining performance for these scientific codes. However, this is no easy task, because each of these domains presents its own unique set of challenges to software developers, such as domain-specific operations, structurally complex data, and ever-growing datasets. Compounding these problems are the myriad constantly changing, complex, and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra, and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span many scientific, engineering, and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse, and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation from the data being operated on, but instead depends significantly on the structure of the data.
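A minimal sketch of the thesis's central point — that performance comes from exploiting the structure of the data — is sparse matrix-vector multiplication in the standard CSR format, shown here in plain Python/NumPy rather than as generated code:

```python
import numpy as np

def dense_to_csr(a):
    """Compressed Sparse Row: store only the nonzeros plus their layout,
    i.e. the structural information that drives the kernel's cost."""
    vals, cols, rowptr = [], [], [0]
    for row in a:
        for j, v in enumerate(row):
            if v != 0.0:
                vals.append(v)
                cols.append(j)
        rowptr.append(len(vals))        # row i spans vals[rowptr[i]:rowptr[i+1]]
    return np.array(vals), np.array(cols), np.array(rowptr)

def csr_matvec(vals, cols, rowptr, x):
    """y = A @ x touching only stored entries: work scales with the
    number of nonzeros, not with the full dense m*n footprint."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        lo, hi = rowptr[i], rowptr[i + 1]
        y[i] = np.dot(vals[lo:hi], x[cols[lo:hi]])
    return y
```

A code generator in the spirit of the thesis would specialize this loop nest to the platform and to the observed sparsity pattern; the hand-written version above just makes the structure-dependence explicit.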
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is unique from other pose estimation efforts as it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
Surrogate assisted multidisciplinary design optimization for an all-electric GEO satellite
NASA Astrophysics Data System (ADS)
Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian; Yuan, Bin
2017-09-01
State-of-the-art all-electric geostationary earth orbit (GEO) satellites use electric thrusters to execute all propulsive duties, which significantly differ from the traditional all-chemical ones in orbit-raising, station-keeping, radiation damage protection, power budget, etc. The design optimization of an all-electric GEO satellite is therefore a complex multidisciplinary design optimization (MDO) problem involving unique design considerations. However, solving the all-electric GEO satellite MDO problem poses major challenges in disciplinary modeling techniques and efficient optimization strategy. To address these challenges, we present a surrogate assisted MDO framework consisting of several modules, i.e., MDO problem definition, multidisciplinary modeling, multidisciplinary analysis (MDA), and a surrogate assisted optimizer. Based on the proposed framework, the all-electric GEO satellite MDO problem is formulated to minimize the total mass of the satellite system under a number of practical constraints. Considerable effort is then spent on multidisciplinary modeling involving geosynchronous transfer, GEO station-keeping, power, thermal control, attitude control, and structure disciplines. Since the orbit dynamics models and the finite element structural model are computationally expensive, an adaptive response surface surrogate based optimizer is incorporated in the proposed framework to solve the satellite MDO problem at moderate computational cost, where a response surface surrogate is gradually refined to represent the computationally expensive MDA process. After optimization, the total mass of the studied GEO satellite is decreased by 185.3 kg (i.e., 7.3% of the total mass). Finally, the optimal design is further discussed to demonstrate the effectiveness of the proposed framework in coping with all-electric GEO satellite system design optimization problems.
This proposed surrogate assisted MDO framework can also provide valuable references for other all-electric spacecraft system design.
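The adaptive response surface idea described above can be sketched in a one-dimensional toy setting (the quadratic surrogate, the sample counts, and the test function are illustrative assumptions, not the paper's actual disciplinary models):

```python
import numpy as np

def quad_surrogate(xs, ys):
    """Fit y ~ c0 + c1*x + c2*x^2 by least squares: a cheap response
    surface standing in for the expensive MDA evaluation."""
    A = np.vander(xs, 3, increasing=True)       # columns: 1, x, x^2
    coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coeffs

def surrogate_min(coeffs, lo, hi):
    """Minimizer of the fitted parabola, clamped to the design bounds."""
    c0, c1, c2 = coeffs
    x = -c1 / (2.0 * c2) if c2 > 0 else lo
    return min(max(x, lo), hi)

def adaptive_optimize(expensive_f, lo, hi, n_init=4, n_refine=6):
    """Refine the surrogate only near its own minimizer, so the
    expensive model is called a handful of times, not thousands."""
    xs = np.linspace(lo, hi, n_init)
    ys = np.array([expensive_f(x) for x in xs])
    for _ in range(n_refine):
        x_new = surrogate_min(quad_surrogate(xs, ys), lo, hi)
        xs = np.append(xs, x_new)
        ys = np.append(ys, expensive_f(x_new))  # refine with a true sample
    best = int(np.argmin(ys))
    return xs[best], ys[best]
```

The design choice this mirrors is the one in the paper: the optimizer iterates on the surrogate, and the expensive MDA is consulted only to validate and refine it.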
A historical perspective of the YF-12A thermal loads and structures program
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.; Quinn, Robert D.
1996-01-01
Around 1970, the YF-12A loads and structures efforts focused on numerous technological issues that needed to be defined for aircraft that incorporate hot structures in the design. Laboratory structural heating test technology with infrared systems was largely created during this program. The program demonstrated the ability to duplicate the complex flight temperatures of an advanced supersonic airplane in a ground-based laboratory. The ability to heat and load an advanced operational aircraft in a laboratory at high temperatures and return it to flight status without adverse effects was demonstrated. The technology associated with measuring loads with strain gages on a hot structure was demonstrated with a thermal calibration concept. The results demonstrated that the thermal stresses were significant even though the airplane was designed to reduce thermal stresses. Considerable modeling detail was required to predict the heat transfer and the corresponding structural characteristics. The overall YF-12A research effort was particularly productive, and a great deal of flight, laboratory, test, and computational data were produced and cross-correlated.
Source processes for the probabilistic assessment of tsunami hazards
Geist, Eric L.; Lynett, Patrick J.
2014-01-01
The importance of tsunami hazard assessment has increased in recent years as a result of catastrophic consequences from events such as the 2004 Indian Ocean and 2011 Japan tsunamis. In particular, probabilistic tsunami hazard assessment (PTHA) methods have been emphasized to include all possible ways a tsunami could be generated. Owing to the scarcity of tsunami observations, a computational approach is used to define the hazard. This approach includes all relevant sources that may cause a tsunami to impact a site and all quantifiable uncertainty. Although only earthquakes were initially considered for PTHA, recent efforts have also attempted to include landslide tsunami sources. Including these sources into PTHA is considerably more difficult because of a general lack of information on relating landslide area and volume to mean return period. The large variety of failure types and rheologies associated with submarine landslides translates to considerable uncertainty in determining the efficiency of tsunami generation. Resolution of these and several other outstanding problems are described that will further advance PTHA methodologies leading to a more accurate understanding of tsunami hazard.
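Under the common assumption that the aggregated sources behave as independent Poisson processes, the hazard computation the abstract alludes to reduces to summing annual exceedance rates across source types (the rates used below are made-up placeholders, not values from the paper):

```python
import math

def exceedance_prob(rates, years):
    """Probability that at least one tsunami exceeds the design threshold
    at a site within `years`, assuming independent Poissonian sources
    whose annual exceedance rates simply add (earthquake + landslide + ...)."""
    lam = sum(rates)                    # aggregate annual exceedance rate
    return 1.0 - math.exp(-lam * years)
```

This also shows where the landslide difficulty bites: an ill-constrained landslide return period enters directly as an uncertain rate term in the sum, propagating into the hazard curve.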
Design and performance considerations of evaporative-pad, waste-heat greenhouses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olszewski, M.
1978-01-01
Rising fuel costs and limited fuel availability have forced greenhouse operators to seek alternative means of heating their greenhouses in an effort to reduce production costs and conserve energy. One such alternative uses power plant reject heat, which is contained in the condenser cooling water, and a bank of evaporative pads to provide winter heating. The design technique used to size the evaporative pad system to meet both summer cooling and winter heating demands is described. Additionally, a computational scheme that simulates the system performance is presented. This analytical model is used to determine the greenhouse operating conditions that maintain the vegetation in its thermal comfort zone. The evaporative pad model uses the Merkel total heat approximation and an experimentally derived transfer coefficient. Energy balance considerations on the vegetation provide a means of viewing optimal vegetation growth in terms of greenhouse environmental factors. In general, the results indicate that the vegetation can be maintained within its thermal comfort zone if sufficient warm water is available to the pads and the air stream flow is properly adjusted.
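A minimal sketch of the sizing calculation, using the simpler saturation-effectiveness approximation rather than the paper's Merkel-based model with an experimental transfer coefficient (the effectiveness value and air properties below are illustrative assumptions):

```python
def pad_outlet_temp(t_dry, t_wet, effectiveness=0.8):
    """Air leaving an evaporative pad approaches the pad water's
    thermodynamic wet-bulb temperature; effectiveness is how far it gets.
    In winter operation, warm condenser water raises t_wet above t_dry,
    so the same relation delivers heating instead of cooling."""
    return t_dry - effectiveness * (t_dry - t_wet)

def pad_heat_delivered(m_dot_air, t_in, t_out, cp=1006.0):
    """Sensible heat picked up by the air stream crossing the pad (W),
    with cp the specific heat of air in J/(kg K)."""
    return m_dot_air * cp * (t_out - t_in)
```

For example, summer air at 35 °C dry-bulb and 24 °C wet-bulb leaves an 80%-effective pad at 26.2 °C; the same two functions, with the inlet/outlet roles reversed, size the winter heating duty.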
Proposed Directions for Research in Computer-Based Education.
ERIC Educational Resources Information Center
Waugh, Michael L.
Several directions for potential research efforts in the field of computer-based education (CBE) are discussed. (For the purposes of this paper, CBE is defined as any use of computers to promote learning with no intended inference as to the specific nature or organization of the educational application under discussion.) Efforts should be directed…
NASA Astrophysics Data System (ADS)
Yoon, S.
2016-12-01
To define the geodetic reference frame using GPS data collected by the Continuously Operating Reference Stations (CORS) network, historical GPS data need to be reprocessed regularly. Reprocessing the GPS data collected by up to 2000 CORS sites over the last two decades requires substantial computational resources. At the National Geodetic Survey (NGS), one reprocessing was completed in 2011, and a second reprocessing is currently underway. For the first reprocessing effort, in-house computing resources were utilized; in the current second effort, an outsourced cloud computing platform is being used. In this presentation, the NGS data processing strategy is outlined, as well as the effort to parallelize the data processing procedure in order to maximize the benefit of cloud computing. The time and cost savings realized by the cloud computing approach will also be discussed.
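Because each daily network solution is independent of the others, the reprocessing backlog parallelizes naturally. A schematic sketch of that fan-out (threads stand in here for the cloud nodes of the real effort, and `process_day` is a hypothetical placeholder for the actual GPS processing chain):

```python
from concurrent.futures import ThreadPoolExecutor

def process_day(day):
    """Hypothetical stand-in for one daily CORS network solution;
    the real job ingests that day's RINEX files and runs the full
    GPS processing chain."""
    return day, sum(i * i for i in range(1000)) % 97  # placeholder "result"

def reprocess(days, workers=8):
    """Fan the independent daily solutions out across workers (cloud
    nodes, in the NGS setup) and gather the results at the end."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(process_day, days))
```

The embarrassingly parallel structure is what makes the cloud attractive: capacity can be rented only for the duration of the campaign instead of being provisioned in-house.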
Thermal modeling of grinding for process optimization and durability improvements
NASA Astrophysics Data System (ADS)
Hanna, Ihab M.
Both thermal and mechanical aspects of the grinding process are investigated in detail in an effort to predict grinding induced residual stresses. An existing thermal model is used as a foundation for computing heat partitions and temperatures in surface grinding. By numerically processing data from IR temperature measurements of the grinding zone, characterizations are made of the grinding zone heat flux. It is concluded that the typical heat flux profile in the grinding zone is triangular in shape, supporting this often used assumption found in the literature. Further analyses of the computed heat flux profiles have revealed that actual grinding zone contact lengths exceed geometric contact lengths by an average of 57% for the cases considered. By integrating the resulting heat flux profiles, workpiece energy partitions are computed for several cases of dry conventional grinding of hardened steel. The average workpiece energy partition for the cases considered was 37%. In an effort to more accurately predict grinding zone temperatures and heat fluxes, refinements are made to the existing thermal model. These include consideration of contact length extensions due to local elastic deformations, variations of the assumed contact area ratio as a function of grinding process parameters, consideration of coolant latent heat of vaporization and its effect on heat transfer beyond the coolant boiling point, and incorporation of coolant-workpiece convective heat flux effects outside the grinding zone. The result of the model refinements accounting for contact length extensions and process-dependent contact area ratios is excellent agreement with IR temperature measurements over a wide range of grinding conditions. By accounting for latent heat of vaporization effects, grinding zone temperature profiles are shown to be capable of reproducing measured profiles found in the literature for cases on the verge of thermal surge conditions.
Computed peak grinding zone temperatures for the aggressive grinding examples given are 30--50% lower than those computed using the existing thermal model formulation. By accounting for convective heat transfer effects outside the grinding zone, it is shown that while surface temperatures in the wake of the grinding zone may be significantly affected under highly convective conditions, computed residual stresses are less sensitive to such conditions. Numerical models are used to evaluate both thermally and mechanically induced stress fields in an elastic workpiece, while finite element modeling is used to evaluate residual stresses for workpieces with elastic-plastic material properties. Modeling of mechanical interactions at the local grit-workpiece length scale is used to create the often measured effect of compressive surface residual stress followed by a subsurface tensile peak. The model is shown to be capable of reproducing trends found in the literature of surface residual stresses which are compressive for low temperature grinding conditions, with surface stresses increasing linearly and becoming tensile with increasing temperatures. Further modifications to the finite element model are made to allow for transiently varying inputs for more complicated grinding processes of industrial components such as automotive cam lobes.
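The partition calculation described above — integrating a triangular grinding-zone flux to get the fraction of total power entering the workpiece — can be sketched as follows (the peak flux, contact length, width, and power values in the test are illustrative, not the thesis's measured cases):

```python
import numpy as np

def triangular_flux(q_peak, lc, n=200):
    """Triangular grinding-zone heat flux rising linearly from zero to
    q_peak (W/m^2) across the contact length lc (m), the profile shape
    the IR measurements supported."""
    x = np.linspace(0.0, lc, n)
    return x, q_peak * x / lc

def energy_partition(x, q_workpiece, total_power, width):
    """Fraction of total grinding power entering the workpiece, found by
    integrating the fitted workpiece-side flux over the contact area."""
    # trapezoidal integration of q over x, then multiply by grind width
    p_w = float(np.sum((q_workpiece[1:] + q_workpiece[:-1]) * 0.5 * np.diff(x))) * width
    return p_w / total_power
```

For a triangular profile the integral is just q_peak*lc/2 per unit width, so e.g. a 2e7 W/m^2 peak over a 4 mm contact and 10 mm width carries 400 W into the workpiece, a 37%-40% partition at about 1 kW of grinding power.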
Modeling and Parameter Estimation of Spacecraft Fuel Slosh with Diaphragms Using Pendulum Analogs
NASA Technical Reports Server (NTRS)
Chatman, Yadira; Gangadharan, Sathya; Schlee, Keith; Ristow, James; Suderman, James; Walker, Charles; Hubert, Carl
2007-01-01
Prediction and control of liquid slosh in moving containers is an important consideration in the design of spacecraft and launch vehicle control systems. Even with modern computing systems, CFD type simulations are not fast enough to allow for large scale Monte Carlo analyses of spacecraft and launch vehicle dynamic behavior with slosh included. It is still desirable to use some type of simplified mechanical analog for the slosh to shorten computation time. Analytic determination of the slosh analog parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices such as elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the hand-derived equations of motion for the mechanical analog are evaluated and their results compared with the experimental results. This paper will describe efforts by the university component of a team comprising NASA's Launch Services Program, Embry Riddle Aeronautical University, Southwest Research Institute and Hubert Astronautics to improve the accuracy and efficiency of modeling techniques used to predict these types of motions. Of particular interest is the effect of diaphragms and bladders on the slosh dynamics and how best to model these devices. The previous research was an effort to automate the process of slosh model parameter identification using a MATLAB/SimMechanics-based computer simulation. These results are the first step in applying the same computer estimation to a full-size tank and vehicle propulsion system. The introduction of diaphragms to this experimental set-up will aid in a better and more complete prediction of fuel slosh characteristics and behavior. 
Automating the parameter identification process will save time and thus allow earlier identification of potential vehicle performance problems.
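A schematic of the automated identification idea: a damped-pendulum analog produces a decaying oscillatory force trace, and its parameters can be recovered from the trace by textbook peak-spacing and log-decrement estimates (a simplified stand-in for the MATLAB/SimMechanics model and real tank force data):

```python
import numpy as np

def analog_force(t, amp, zeta, omega):
    """Force-transducer trace predicted by a damped pendulum analog of
    the sloshing propellant: amplitude, damping ratio, natural frequency."""
    wd = omega * np.sqrt(1.0 - zeta ** 2)        # damped frequency
    return amp * np.exp(-zeta * omega * t) * np.cos(wd * t)

def identify(t, y):
    """Automated replacement for trial-and-error tuning: estimate the
    analog's damping ratio and natural frequency from the spacing and
    logarithmic decrement of successive positive peaks."""
    peaks = [i for i in range(1, len(y) - 1)
             if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0]
    tp, yp = t[peaks], y[peaks]
    wd = 2.0 * np.pi / np.mean(np.diff(tp))       # damped frequency from spacing
    delta = np.mean(np.log(yp[:-1] / yp[1:]))     # log decrement per cycle
    zeta = delta / np.sqrt((2.0 * np.pi) ** 2 + delta ** 2)
    omega = wd / np.sqrt(1.0 - zeta ** 2)
    return zeta, omega
```

With diaphragm-damped tanks the real response is less clean than this single-mode trace, which is why the team moved to simulation-based least-squares fitting; the sketch only shows the shape of the identification problem.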
NASA Technical Reports Server (NTRS)
Powell, Michael R.; Hall, W. A.
1993-01-01
It would be of operational significance if one possessed a device that would indicate the presence of gas phase formation in the body during hypobaric decompression. Automated analysis of Doppler gas bubble signals has been attempted for two decades but with generally unfavorable results, except with surgically implanted transducers. Recently, efforts have intensified with the introduction of low-cost computer programs. Current NASA work is directed towards the development of a computer-assisted method specifically targeted to EVA, and we are most interested in Spencer Grade 4. We note that Spencer Doppler Grades 1 to 3 show increases in the FFT sonogram and spectrogram in the amplitude domain, and the frequency domain is sometimes increased over that created by the normal blood flow envelope. The amplitude perturbations are of very short duration, in both systole and diastole and at random temporal positions. Grade 4 is characteristic in the amplitude domain but with modest increases in the FFT sonogram and spectral frequency power from 2K to 4K over all of the cardiac cycle. Heart valve motion appears to display characteristic signals: (1) the demodulated Doppler signal amplitude is considerably above the Doppler-shifted blood flow signal (even Grade 4); and (2) demodulated Doppler frequency shifts are considerably greater (often several kHz) than the upper edge of the blood flow envelope. Knowledge of these facts will aid in the construction of a real-time, computer-assisted discriminator to eliminate cardiac motion artifacts. There could also exist perturbations in the following: (1) modifications of the pattern of blood flow in accordance with Poiseuille's Law, (2) flow changes with a change in the Reynolds number, (3) an increase in the pulsatility index, and/or (4) diminished diastolic flow or 'runoff.' Doppler ultrasound devices have been constructed with a three-transducer array and a pulsed frequency generator.
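The two observations about valve-motion signals suggest a simple discriminator rule, sketched below; the amplitude factor of 3 is a hypothetical threshold for illustration, not a value from the study:

```python
def is_valve_artifact(amplitude, freq_shift_khz, flow_amp, envelope_edge_khz,
                      amp_factor=3.0):
    """Flag a signal window as heart-valve motion rather than bubbles:
    valve motion shows (1) demodulated amplitude well above the blood
    flow signal AND (2) frequency shifts beyond the upper edge of the
    blood flow envelope. amp_factor is an assumed tuning parameter."""
    return amplitude > amp_factor * flow_amp and freq_shift_khz > envelope_edge_khz
```

A real-time discriminator would apply such a test per FFT frame and suppress the flagged frames before bubble grading.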
Coordination and Collaboration in European Research towards Healthy and Safe Nanomaterials
NASA Astrophysics Data System (ADS)
Riediker, Michael
2011-07-01
Nanotechnology is becoming part of our daily life in a wide range of products such as computers, bicycles, sunscreens or nanomedicines. While these applications already become reality, considerable work awaits scientists, engineers, and policy makers, who want such nanotechnological products to yield a maximum of benefit at a minimum of social, environmental, economic and (occupational) health cost. Considerable efforts for coordination and collaboration in research are needed if one wants to reach these goals within a reasonable time frame and at an affordable cost. This is recognized in Europe by the European Commission, which not only funds research projects but also supports the coordination of research efforts. One of these coordination efforts is NanoImpactNet, a researcher-operated network, which started in 2008 to promote scientific cross-talk across all disciplines on the health and environmental impact of nanomaterials. Stakeholders contribute to these activities, notably the definition of research and knowledge needs. Initial discussions in this domain focused on finding an agreement on common metrics, and on which elements are needed for standardized approaches to hazard and exposure identification. There are many nanomaterial properties that may play a role. Hence, to gain the time needed to study this complex matter full of uncertainties, researchers and stakeholders unanimously called for simple, easy and fast risk assessment tools that can support decision making in this rapidly moving and growing domain. Today, several projects are starting or already running that will develop such assessment tools. At the same time, other projects investigate in depth which factors and material properties can lead to unwanted toxicity or exposure, what mechanisms are involved and how such responses can be predicted and modelled. 
A vision for the future is that once these factors, properties and mechanisms are understood, they can and will be accounted for in the development of new products and production processes following the idea of "Safety by Design". The promise of all these efforts is a future with nanomaterials where most of their risks are recognized and addressed before they even reach the market.
ERIC Educational Resources Information Center
Ashcraft, Catherine
2015-01-01
To date, girls and women are significantly underrepresented in computer science and technology. Concerns about this underrepresentation have sparked a wealth of educational efforts to promote girls' participation in computing, but these programs have demonstrated limited impact on reversing current trends. This paper argues that this is, in part,…
Instream flow assessment and economic valuation: a survey of nonmarket benefits research
Douglas, Aaron J.; Johnson, Richard L.
1993-01-01
Instream flow benefits for United States streams and rivers have recently been investigated by a number of resource economists. These valuation efforts differ in scope, method, and quantitative results. An assessment and review of these valuation efforts is presented. The various sources of differences in nonmarket values produced by these studies are explored in some detail. The considerable difficulty of producing estimates of instream flow benefit values that consider all of the pertinent policy and technical issues is delineated in various policy contexts. Evidence is presented indicating that the substantial policy impact of recent research on this topic is justified despite wide variation in the magnitude of the estimates.
Assessment of Cognitive Communications Interest Areas for NASA Needs and Benefits
NASA Technical Reports Server (NTRS)
Knoblock, Eric J.; Madanayake, Arjuna
2017-01-01
This effort provides a survey and assessment of various cognitive communications interest areas, including node-to-node link optimization, intelligent routing/networking, and learning algorithms, and is conducted primarily from the perspective of NASA space communications needs and benefits. Areas of consideration include optimization methods, learning algorithms, and candidate implementations/technologies. Assessments of current research efforts are provided with mention of areas for further investment. Other considerations, such as antenna technologies and cognitive radio platforms, are briefly provided as well.
NASA Technical Reports Server (NTRS)
Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory
1995-01-01
The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARP's) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARP's validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARP's via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.
Systems Analysis Initiated for All-Electric Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Kohout, Lisa L.
2003-01-01
A multidisciplinary effort is underway at the NASA Glenn Research Center to develop concepts for revolutionary, nontraditional fuel cell power and propulsion systems for aircraft applications. There is a growing interest in the use of fuel cells as a power source for electric propulsion as well as an auxiliary power unit to substantially reduce or eliminate environmentally harmful emissions. A systems analysis effort was initiated to assess potential concepts in an effort to identify those configurations with the highest payoff potential. Among the technologies under consideration are advanced proton exchange membrane (PEM) and solid oxide fuel cells, alternative fuels and fuel processing, and fuel storage. Prior to this effort, the majority of fuel cell analysis done at Glenn was done for space applications. Because of this, a new suite of models was developed. These models include the hydrogen-air PEM fuel cell; internal reforming solid oxide fuel cell; balance-of-plant components (compressor, humidifier, separator, and heat exchangers); compressed gas, cryogenic, and liquid fuel storage tanks; and gas turbine/generator models for hybrid system applications. Initial mass, volume, and performance estimates of a variety of PEM systems operating on hydrogen and reformate have been completed for a baseline general aviation aircraft. Solid oxide/turbine hybrid systems are being analyzed. In conjunction with the analysis efforts, a joint effort has been initiated with Glenn's Computer Services Division to integrate fuel cell stack and component models with the visualization environment that supports the GRUVE lab, Glenn's virtual reality facility. The objective of this work is to provide an environment to assist engineers in the integration of fuel cell propulsion systems into aircraft and provide a better understanding of the interaction between system components and the resulting effect on the overall design and performance of the aircraft. 
Initially, three-dimensional computer-aided design (CAD) models of representative PEM fuel cell stack and components were developed and integrated into the virtual reality environment along with an Excel-based model used to calculate fuel cell electrical performance on the basis of cell dimensions (see the figure). CAD models of a representative general aviation aircraft were also developed and added to the environment. With the use of special headgear, users will be able to virtually manipulate the fuel cell's physical characteristics and its placement within the aircraft while receiving information on the resultant fuel cell output power and performance. As the systems analysis effort progresses, we will add more component models to the GRUVE environment to help us more fully understand the effect of various system configurations on the aircraft.
Profiling Characteristics of Internet Medical Information Users
Weaver, James B.; Mays, Darren; Lindner, Gregg; Eroğlu, Doğan; Fridinger, Frederick; Bernhardt, Jay M.
2009-01-01
Objective: The Internet's potential to bolster health promotion and disease prevention efforts has attracted considerable attention. Existing research leaves two things unclear, however: the prevalence of online health and medical information seeking and the distinguishing characteristics of individuals who seek that information.
Design: This study seeks to clarify and extend the knowledge base concerning health and medical information use online by profiling adults using Internet medical information (IMI). Secondary analysis of survey data from a large sample (n = 6,119) representative of the Atlanta, GA, area informed this investigation.
Measurements: Five survey questions were used to assess IMI use and general computer and Internet use during the 30 days before the survey was administered. Five questions were also used to assess respondents' health care system use. Several demographic characteristics were measured.
Results: Contrary to most prior research, this study found relatively low prevalence of IMI-seeking behavior. Specifically, IMI use was reported by 13.2% of all respondents (n = 6,119) and by 21.1% of respondents with Internet access (n = 3,829). Logistic regression models conducted among respondents accessing the Internet in the previous 30 days revealed that, when controlling for several sociodemographic characteristics, home computer ownership, online time per week, and health care system use are all positively linked with IMI-seeking behavior.
Conclusions: The data suggest it may be premature to embrace unilaterally the Internet as an effective asset for health promotion and disease prevention efforts that target the public. PMID:19567794
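The kind of model reported in the Results can be sketched on synthetic data (plain gradient-ascent logistic regression in NumPy; the covariates and coefficients below are fabricated stand-ins, not the study's survey data):

```python
import numpy as np

def fit_logistic(X, y, iters=500, lr=0.1):
    """Plain gradient-ascent logistic regression: the model family used
    to link IMI seeking to access and utilization measures, with the
    fitted weights playing the role of adjusted (log-odds) effects."""
    Xb = np.hstack([np.ones((len(X), 1)), X])     # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))         # predicted probabilities
        w += lr * Xb.T @ (y - p) / len(y)         # ascend the log-likelihood
    return w
```

A positive fitted weight on a covariate (e.g. home computer ownership) corresponds to the "positively linked" findings in the abstract, after the other columns are controlled for.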
Cournia, Zoe; Allen, Bryce; Sherman, Woody
2017-12-26
Accurate in silico prediction of protein-ligand binding affinities has been a primary objective of structure-based drug design for decades due to the putative value it would bring to the drug discovery process. However, computational methods have historically failed to deliver value in real-world drug discovery applications due to a variety of scientific, technical, and practical challenges. Recently, a family of approaches commonly referred to as relative binding free energy (RBFE) calculations, which rely on physics-based molecular simulations and statistical mechanics, have shown promise in reliably generating accurate predictions in the context of drug discovery projects. This advance arises from accumulating developments in the underlying scientific methods (decades of research on force fields and sampling algorithms) coupled with vast increases in computational resources (graphics processing units and cloud infrastructures). Mounting evidence from retrospective validation studies, blind challenge predictions, and prospective applications suggests that RBFE simulations can now predict the affinity differences for congeneric ligands with sufficient accuracy and throughput to deliver considerable value in hit-to-lead and lead optimization efforts. Here, we present an overview of current RBFE implementations, highlighting recent advances and remaining challenges, along with examples that emphasize practical considerations for obtaining reliable RBFE results. We focus specifically on relative binding free energies because the calculations are less computationally intensive than absolute binding free energy (ABFE) calculations and map directly onto the hit-to-lead and lead optimization processes, where the prediction of relative binding energies between a reference molecule and new ideas (virtual molecules) can be used to prioritize molecules for synthesis. 
We describe the critical aspects of running RBFE calculations, from both theoretical and applied perspectives, using a combination of retrospective literature examples and prospective studies from drug discovery projects. This work is intended to provide a contemporary overview of the scientific, technical, and practical issues associated with running relative binding free energy simulations, with a focus on real-world drug discovery applications. We offer guidelines for improving the accuracy of RBFE simulations, especially for challenging cases, and emphasize unresolved issues that could be improved by further research in the field.
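The bookkeeping behind an RBFE prediction follows from the thermodynamic cycle described above: the relative binding free energy is the difference between the alchemical transformation free energies in the complex and in solvent. The sketch below is illustrative only and not any particular package's API; the per-leg free energies are assumed to come from the underlying molecular simulations.

```python
def relative_binding_dG(dG_complex, dG_solvent):
    """ddG(A->B) = dG(A->B in complex) - dG(A->B in solvent).
    A negative value predicts ligand B binds tighter than A."""
    return dG_complex - dG_solvent

def rank_for_synthesis(ideas):
    """Prioritize virtual molecules for synthesis.
    `ideas` maps a molecule name to its (dG_complex, dG_solvent)
    leg results in kcal/mol; tightest predicted binder comes first."""
    return sorted(ideas, key=lambda name: relative_binding_dG(*ideas[name]))
```

This is exactly the hit-to-lead use case the abstract describes: relative (not absolute) energies between a reference molecule and new ideas are enough to rank candidates.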
Rational design of an enzyme mutant for anti-cocaine therapeutics
NASA Astrophysics Data System (ADS)
Zheng, Fang; Zhan, Chang-Guo
2008-09-01
(-)-Cocaine is a widely abused drug and there is no available anti-cocaine therapeutic. The disastrous medical and social consequences of cocaine addiction have made the development of an effective pharmacological treatment a high priority. An ideal anti-cocaine medication would accelerate (-)-cocaine metabolism, producing biologically inactive metabolites. The main metabolic pathway of cocaine in the body is hydrolysis at its benzoyl ester group. Reviewed in this article is the state-of-the-art computational design of high-activity mutants of human butyrylcholinesterase (BChE) against (-)-cocaine. The computational design of BChE mutants has been based not only on the structure of the enzyme, but also on the detailed catalytic mechanisms for BChE-catalyzed hydrolysis of (-)-cocaine and (+)-cocaine. Computational studies of the detailed catalytic mechanisms and the structure-and-mechanism-based computational design have been carried out through the combined use of a variety of state-of-the-art molecular modeling techniques. By using the computational insights into the catalytic mechanisms, a recently developed unique computational design strategy based on simulation of the rate-determining transition state has been employed to design high-activity mutants of human BChE for hydrolysis of (-)-cocaine, leading to the discovery of BChE mutants with considerably improved catalytic efficiency against (-)-cocaine. One of the discovered BChE mutants (i.e., A199S/S287G/A328W/Y332G) has a ~456-fold improved catalytic efficiency against (-)-cocaine. The encouraging outcome of the computational design and discovery effort demonstrates that the computational design approach based on transition-state simulation is promising for rational enzyme redesign and drug discovery.
Man-machine interface issues in space telerobotics: A JPL research and development program
NASA Technical Reports Server (NTRS)
Bejczy, A. K.
1987-01-01
Technology issues related to the use of robots as man-extension or telerobot systems in space are discussed and exemplified. General considerations are presented on control and information problems in space teleoperation and on the characteristics of Earth orbital teleoperation. The JPL R and D work in the area of man-machine interface devices and techniques for sensing and computer-based control is briefly summarized. The thrust of this R and D effort is to render space teleoperation efficient and safe through the use of devices and techniques which will permit integrated and task-level (intelligent) two-way control communication between human operator and telerobot machine in Earth orbit. Specific control and information display devices and techniques are discussed and exemplified with development results obtained at JPL in recent years.
NASA Technical Reports Server (NTRS)
Spinks, Debra (Compiler)
1993-01-01
This report contains the 1992 annual progress reports of the Research Fellows and students of the Center for Turbulence Research. Considerable effort was focused on the large eddy simulation technique for computing turbulent flows. This increased activity has been inspired by the recent predictive successes of the dynamic subgrid scale modeling procedure which was introduced during the 1990 Summer Program. Several Research Fellows and students are presently engaged in both the development of subgrid scale models and their applications to complex flows. The first group of papers in this report contain the findings of these studies. They are followed by reports grouped in the general areas of modeling, turbulence physics, and turbulent reacting flows. The last contribution in this report outlines the progress made on the development of the CTR post-processing facility.
Person-like intelligent systems architectures for robotic shared control and automated operations
NASA Technical Reports Server (NTRS)
Erickson, Jon D.; Aucoin, Paschal J., Jr.; Ossorio, Peter G.
1992-01-01
An approach to rendering robotic systems as 'personlike' as possible to achieve needed capabilities is outlined. Human characteristics such as knowledge, motivation, know-how, performance, achievement and individual differences corresponding to propensities and abilities can be supplied, within limits, with computing software and hardware to robotic systems provided with sufficiently rich sensory configurations. Pushing these limits is the developmental path for more and more personlike robotic systems. The portions of the Person Concept that appear to be most directly relevant to this effort are described in the following topics: reality concepts (the state-of-affairs system and descriptive formats), behavior as intentional action, individual persons (person characteristics), social patterns of behavior (social practices), and boundary conditions (status maxims). Personlike robotic themes and considerations for a technical development plan are also discussed.
Space missions for automation and robotics technologies (SMART) program
NASA Technical Reports Server (NTRS)
Ciffone, D. L.; Lum, H., Jr.
1985-01-01
The motivations, features and expected benefits and applications of the NASA SMART program are summarized. SMART is intended to push the state of the art in automation and robotics, a goal that Public Law 98-371 mandated be an inherent part of the Space Station program. The effort would first require tests of sensors, manipulators, computers and other subsystems as seeds for the evolution of flight-qualified subsystems. Consideration is currently being given to robotics systems as add-ons to the RMS, MMU and OMV and a self-contained automation and robotics module which would be tended by astronaut visits. Probable experimentation and development paths that would be pursued with the equipment are discussed, along with the management structure and procedures for the program. The first hardware flight is projected for 1989.
GridPP - Preparing for LHC Run 2 and the Wider Context
NASA Astrophysics Data System (ADS)
Coles, Jeremy
2015-12-01
This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud resources - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC-based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.
Single coronary artery originating from the right sinus Valsalva and ability to work.
De Rosa, Roberto; Ratti, Gennaro; Gerardi, Donato; Tedeschi, Carlo; Lamberti, Monica
2015-01-01
We present the case of a 56-year-old male electrician who was admitted to the hospital with atrial fibrillation, atypical chest pain, and dyspnea. He reported that on that morning he had been working for almost 4 hours, carrying out various activities with considerable physical effort. After cardioversion, conventional coronary angiography revealed a suspected single coronary artery (SCA) arising from the right sinus of Valsalva. The patient underwent multislice computed tomography, which showed a SCA arising from the right sinus of Valsalva and dividing into the right coronary artery (RCA) and the left main coronary artery (LM). The finding of a posterior course of the LM without atherosclerotic lesions proved crucial for the expression of an opinion on working capacity, even with limitations.
Oxygen transport properties estimation by DSMC-CT simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
Coupling DSMC simulations with classical trajectory calculations is emerging as a powerful tool to improve the predictive capabilities of computational rarefied gas dynamics. The considerable increase in computational effort noted in early applications of the method (Koura, 1997) can be compensated by running simulations on massively parallel computers. In particular, GPU acceleration has been found quite effective in reducing the computing time of DSMC-CT simulations (Ferrigni, 2012; Norman et al., 2013). The aim of the present work is to study rarefied oxygen flows by modeling binary collisions through an accurate potential energy surface obtained from molecular beam scattering (Aquilanti et al., 1999). The accuracy of the method is assessed by calculating molecular oxygen shear viscosity and heat conductivity following three different DSMC-CT simulation methods. In the first, transport properties are obtained from DSMC-CT simulations of spontaneous fluctuations of an equilibrium state (Bruno et al., Phys. Fluids 23, 093104, 2011). In the second method, the collision trajectory calculation is incorporated in a Monte Carlo integration procedure to evaluate Taxman's expressions for the transport properties of polyatomic gases (Taxman, 1959). In the third, non-equilibrium zero- and one-dimensional rarefied gas dynamics simulations are adopted and the transport properties are computed from the non-equilibrium fluxes of momentum and energy. The three methods provide close values of the transport properties, with estimated statistical errors not exceeding 3%. The experimental values are slightly underestimated, the percentage deviation being, again, a few percent.
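The statistical error quoted above (under 3%) is the standard Monte Carlo sampling error of a simulation average. As a generic sketch of how such a percentage figure is attached to an estimate (illustrative only, not the DSMC-CT codes' internals):

```python
import math

def mc_estimate(samples):
    """Sample mean and relative standard error of a Monte Carlo
    average; rel_err * 100 gives the percentage statistical error."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # unbiased variance
    rel_err = math.sqrt(var / n) / abs(mean)
    return mean, rel_err
```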
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development, a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor applications. 47 refs., 94 figs., 6 tabs.
42 CFR 441.182 - Maintenance of effort: Computation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES Inpatient Psychiatric Services for Individuals Under Age 21 in Psychiatric Facilities or Programs § 441.182 Maintenance of effort: Computation. (a) For expenditures for inpatient psychiatric services... total State Medicaid expenditures in the current quarter for inpatient psychiatric services and...
Turbulence modeling of free shear layers for high performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas
1993-01-01
In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.
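The k-epsilon model named in the AIAA paper title closes the mean-flow equations with an eddy viscosity computed from the turbulent kinetic energy k and its dissipation rate epsilon. As a minimal illustration of the closure (not the paper's parallel implementation), with the standard model constant C_mu = 0.09:

```python
def eddy_viscosity(rho, k, eps, c_mu=0.09):
    """Standard k-epsilon closure: mu_t = rho * C_mu * k^2 / eps."""
    return rho * c_mu * k * k / eps
```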
NASA Technical Reports Server (NTRS)
Schmidt, H.; Tango, G. J.; Werby, M. F.
1985-01-01
A new matrix method for rapid wave propagation modeling in generalized stratified media, which has recently been applied to numerical simulations in diverse areas of underwater acoustics, solid earth seismology, and nondestructive ultrasonic scattering, is explained and illustrated. A portion of recent efforts jointly undertaken by the NATO SACLANT and NORDA numerical modeling groups in developing, implementing, and testing a new fast general-applications wave propagation algorithm, SAFARI, formulated at SACLANT, is summarized. The present general-applications SAFARI program uses a Direct Global Matrix (DGM) approach to multilayer Green's function calculation. A rapid and unconditionally stable solution is readily obtained via simple Gaussian elimination on the resulting sparsely banded block system, precisely analogous to that arising in the Finite Element Method. The resulting gains in accuracy and computational speed allow consideration of much larger multilayered air/ocean/Earth/engineering-material media models, for many more source-receiver configurations than previously possible. The validity and versatility of the SAFARI-DGM method is demonstrated by reviewing three practical examples of engineering interest, drawn from ocean acoustics, engineering seismology, and ultrasonic scattering.
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L sub r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L sub r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
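The flat-ocean assumption discussed above treats the sea surface as a level Fresnel reflector. Purely as an illustration of that ingredient (not the CZCS correction algorithm itself, which requires full radiative transfer), the unpolarized Fresnel reflectance of a flat air-water interface can be computed as:

```python
import math

def fresnel_reflectance(theta_i_deg, n_water=1.34):
    """Unpolarized Fresnel reflectance of a flat air-water interface
    for incidence angle theta_i_deg (degrees); n_water is an assumed
    refractive index for seawater."""
    ti = math.radians(theta_i_deg)
    if ti == 0.0:
        # normal incidence limit
        return ((n_water - 1.0) / (n_water + 1.0)) ** 2
    tt = math.asin(math.sin(ti) / n_water)  # Snell's law
    rs = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2  # s-polarized
    rp = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2  # p-polarized
    return 0.5 * (rs + rp)
```

Reflectance is small (~2%) near normal incidence and approaches unity at grazing angles, which is why viewing geometry matters for the surface term.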
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
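The truncation idea above is simple to state in one dimension: clip the Gaussian PDF of the unconstrained estimate at the constraint boundaries and take the mean of what remains. A minimal sketch of that step (scalar case only; the paper treats the full multivariate filter):

```python
import math

def _phi(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_mean(mu, sigma, lo, hi):
    """Mean of a N(mu, sigma^2) density truncated to [lo, hi]:
    the constrained estimate replacing the unconstrained mean mu."""
    a = (lo - mu) / sigma
    b = (hi - mu) / sigma
    mass = _Phi(b) - _Phi(a)  # probability inside the constraints
    return mu + sigma * (_phi(a) - _phi(b)) / mass
```

For constraints symmetric about mu the estimate is unchanged; a one-sided constraint pulls the mean into the feasible region.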
Navigation Ground Data System Engineering for the Cassini/Huygens Mission
NASA Technical Reports Server (NTRS)
Beswick, R. M.; Antreasian, P. G.; Gillam, S. D.; Hahn, Y.; Roth, D. C.; Jones, J. B.
2008-01-01
The launch of the Cassini/Huygens mission on October 15, 1997, began a seven-year journey across the solar system that culminated in the entry of the spacecraft into Saturnian orbit on June 30, 2004. Cassini/Huygens Spacecraft Navigation is the result of a complex interplay between several teams within the Cassini Project, performed on the Ground Data System. The work of Spacecraft Navigation involves rigorous requirements for accuracy and completeness, often carried out under uncompromising, critical time pressures. To support the Navigation function, a fault-tolerant, high-reliability/high-availability computational environment was necessary to support data processing. Configuration Management (CM) was integrated with fault-tolerant design and security engineering, according to the cornerstone principles of Confidentiality, Integrity, and Availability. Integrated with this approach are security benchmarks and validation to meet strict confidence levels. In addition, similar approaches to CM were applied to the staffing and training of the system administration team supporting this effort. As a result, the current configuration of this computational environment incorporates a secure, modular system that provides for almost no downtime during tour operations.
Computational Studies of Snake Venom Toxins
Ojeda, Paola G.; Caballero, Julio; Kaas, Quentin; González, Wendy
2017-01-01
Most snake venom toxins are proteins, and contribute to envenomation through a diverse array of bioactivities, such as bleeding, inflammation, pain, and cytotoxic, cardiotoxic, or neurotoxic effects. The venom of a single snake species contains hundreds of toxins, and the venoms of the 725 species of venomous snakes represent a large pool of potentially bioactive proteins. Despite considerable discovery efforts, most snake venom toxins are still uncharacterized. Modern bioinformatics tools have recently been developed to mine snake venoms, helping focus experimental research on the most potentially interesting toxins. Some computational techniques predict toxin molecular targets and the binding mode to these targets. This review gives an overview of current knowledge on the ~2200 sequences and more than 400 three-dimensional structures of snake toxins deposited in public repositories, as well as of molecular modeling studies of the interaction between these toxins and their molecular targets. We also describe how modern bioinformatics has been used to study the snake venom protein phospholipase A2, the small basic myotoxin Crotamine, and the three-finger peptide Mambalgin. PMID:29271884
Statistical exchange-coupling errors and the practicality of scalable silicon donor qubits
NASA Astrophysics Data System (ADS)
Song, Yang; Das Sarma, S.
2016-12-01
Recent experimental efforts have led to considerable interest in donor-based localized electron spins in Si as viable qubits for a scalable silicon quantum computer. With the use of isotopically purified 28Si and the realization of extremely long spin coherence times in single-donor electrons, the recent experimental focus is on two coupled donors, with the eventual goal of a scaled-up quantum circuit. Motivated by this development, we simulate the statistical distribution of the exchange coupling J between a pair of donors under realistic donor placement straggles, and quantify the errors relative to the intended J value. With J values in a broad range of donor-pair separation (5 < |R| < 60 nm), we work out various cases systematically, for a target donor separation R0 along the [001], [110] and [111] Si crystallographic directions, with |R0| = 10, 20, or 30 nm and standard deviation σR = 1, 2, 5, or 10 nm. Our extensive theoretical results demonstrate the great challenge of realizing a prescribed J gate even with just a donor pair, a first step for any scalable Si-donor-based quantum computer.
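A toy version of such a straggle study can be sketched by sampling donor positions with Gaussian placement error and evaluating a model J(R). The smooth exponential form below is a deliberate simplification (the real Si exchange coupling also oscillates with lattice position due to valley interference, which is central to the paper's results), so the sketch illustrates only the statistical procedure:

```python
import math
import random

def exchange_J(r_nm, J0=1.0, a_B=1.8):
    """Hypothetical smooth model: J decays exponentially with donor
    separation on an assumed effective Bohr radius scale a_B (nm)."""
    return J0 * math.exp(-2.0 * r_nm / a_B)

def coupling_error_stats(R0=20.0, sigma=2.0, n=10000, seed=1):
    """Median of J/J(R0) when each component of the relative donor
    position straggles with standard deviation sigma (nm)."""
    rng = random.Random(seed)
    target = exchange_J(R0)
    ratios = []
    for _ in range(n):
        dx = R0 + rng.gauss(0.0, sigma)  # along the target axis
        dy = rng.gauss(0.0, sigma)       # transverse straggle
        dz = rng.gauss(0.0, sigma)
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        ratios.append(exchange_J(r) / target)
    ratios.sort()
    return ratios[n // 2]
```

Even this oversimplified model shows the point of the paper: because J is exponentially sensitive to separation, nanometer-scale straggle produces order-of-magnitude coupling errors.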
An efficient and scalable deformable model for virtual reality-based medical applications.
Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann
2004-09-01
Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process which is governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters by using the reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environment.
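The abstract does not give the model's equations, but the scalability mechanism it describes - breadth-first force transmittal limited by a penetration depth - can be sketched generically: expand outward from the contact node and stop at the chosen depth (node and edge names here are illustrative).

```python
from collections import deque

def affected_region(adjacency, contact_node, depth):
    """BFS from the contact point, stopping at the given penetration
    depth - the knob that trades deformation extent for speed."""
    dist = {contact_node: 0}
    queue = deque([contact_node])
    while queue:
        node = queue.popleft()
        if dist[node] == depth:
            continue  # do not expand past the penetration depth
        for nbr in adjacency[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist  # node -> graph distance, usable for force attenuation
```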
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense is involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created that the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing imaging processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.
Efficient generation of connectivity in neuronal networks from simulator-independent descriptions
Djurfeldt, Mikael; Davison, Andrew P.; Eppler, Jochen M.
2014-01-01
Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeler's needs. We have used the connection generator interface to connect C++ and Python implementations of the previously described connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modeling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface. PMID:24795620
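The paper defines its own interface; purely as an illustration of the idea (hypothetical names throughout), a connection generator can be reduced to "anything the simulator can iterate to obtain (source, target) pairs", so connectivity libraries and simulators stay decoupled:

```python
class ConnectionGenerator:
    """Hypothetical stand-in for a connection generator: wraps any
    source of (source, target) pairs behind a common iteration API."""
    def __init__(self, pairs):
        self._pairs = list(pairs)

    def __iter__(self):
        return iter(self._pairs)

def all_to_all(sources, targets, allow_self=False):
    """A toy connectivity-generating 'library': all-to-all wiring."""
    for s in sources:
        for t in targets:
            if allow_self or s != t:
                yield (s, t)

def build_network(generator, connect):
    """Simulator side: consume any generator, delegating each pair to
    the simulator's own low-level connect routine."""
    count = 0
    for src, tgt in generator:
        connect(src, tgt)
        count += 1
    return count
```

Swapping one connectivity library for another then means swapping what is passed to `ConnectionGenerator`, with no change on the simulator side.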
Comparisons of Calculations with PARTRAC and NOREC: Transport of Electrons in Liquid Water
Dingfelder, M.; Ritchie, R. H.; Turner, J. E.; Friedland, W.; Paretzke, H. G.; Hamm, R. N.
2013-01-01
Monte Carlo computer models that simulate the detailed, event-by-event transport of electrons in liquid water are valuable for the interpretation and understanding of findings in radiation chemistry and radiation biology. Because of the paucity of experimental data, such efforts must rely on theoretical principles and considerable judgment in their development. Experimental verification of numerical input is possible to only a limited extent. Indirect support for model validity can be gained from a comparison of details between two independently developed computer codes as well as the observable results calculated with them. In this study, we compare the transport properties of electrons in liquid water using two such models, PARTRAC and NOREC. Both use interaction cross sections based on plane-wave Born approximations and a numerical parameterization of the complex dielectric response function for the liquid. The models are described and compared, and their similarities and differences are highlighted. Recent developments in the field are discussed and taken into account. The calculated stopping powers, W values, and slab penetration characteristics are in good agreement with one another and with other independent sources. PMID:18439039
Design Considerations of Polishing Lap for Computer-Controlled Cylindrical Polishing Process
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
Future X-ray observatory missions, such as the International X-ray Observatory, require grazing-incidence replicated optics of extremely large collecting area (3 m^2) in combination with an angular resolution of less than 5 arcsec half-power diameter. The resolution of a mirror shell depends ultimately on the quality of the cylindrical mandrels from which it is replicated. Mid-spatial-frequency axial figure error is a dominant contributor to the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep mid-spatial-frequency axial figure errors to a minimum. Simulation studies have been performed to optimize the operational parameters as well as the polishing lap configuration. Furthermore, depending upon the surface error profile, a model for localized polishing based on a dwell-time approach is developed. Using the inputs from the mathematical model, a mandrel having a conical-approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. We report our first experimental results and discuss plans for further improvements in the polishing process.
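In its simplest proportional form, a dwell-time schedule polishes each axial zone long enough to remove its local figure error. This is an illustration only, not the authors' model, which would also account for the tool influence function and lap kinematics:

```python
def dwell_times(error_nm, rate_nm_per_s):
    """Seconds to dwell on each axial zone so that material removed
    (rate * time) matches the local figure error; zones already at or
    below the target surface get zero dwell."""
    return [max(e, 0.0) / rate_nm_per_s for e in error_nm]

def residual_error(error_nm, times, rate_nm_per_s):
    """Figure error left after polishing with the schedule above."""
    return [e - t * rate_nm_per_s for e, t in zip(error_nm, times)]
```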
Computer simulation and laboratory work in the teaching of mechanics
NASA Astrophysics Data System (ADS)
Borghi, L.; DeAmbrosis, A.; Mascheretti, P.; Massara, C. I.
1987-03-01
Analysis of measures of student success in learning the fundamentals of physics, in conjunction with the research reported in the literature, leads to the conclusion that it is difficult for undergraduates as well as high-school students to gain a reasonable understanding of elementary mechanics. Considerable effort has been devoted to identifying those factors which might prevent mechanics being successfully learnt and also to developing instructional methods which could improve its teaching (Champagne et al. 1984, Hewson 1985, McDermott 1983, Saltiel and Malgrange 1980, Whitaker 1983, White 1983). Starting from these research results and drawing from their own experience (Borghi et al. 1984, 1985), the authors arrived at the following conclusion: a strategy based on experimental activity, performed by the students themselves, together with a proper use of computer simulations, could well improve the learning of mechanics and enhance interest in, and understanding of, topics which are difficult to treat in a traditional way. The authors describe the strategy they have designed to help high-school students learn mechanics and report how they have applied it to the particular topic of projectile motion.
A multiscale climate emulator for long-term morphodynamics (MUSCLE-morpho)
NASA Astrophysics Data System (ADS)
Antolínez, José Antonio A.; Méndez, Fernando J.; Camus, Paula; Vitousek, Sean; González, E. Mauricio; Ruggiero, Peter; Barnard, Patrick
2016-01-01
Interest in understanding long-term coastal morphodynamics has recently increased as climate change impacts become perceptible and accelerated. Multiscale, behavior-oriented and process-based models, or hybrids of the two, are typically applied with deterministic approaches which require considerable computational effort. In order to reduce the computational cost of modeling large spatial and temporal scales, input reduction and morphological acceleration techniques have been developed. Here we introduce a general framework for reducing dimensionality of wave-driver inputs to morphodynamic models. The proposed framework seeks to account for dependencies with global atmospheric circulation fields and deals simultaneously with seasonality, interannual variability, long-term trends, and autocorrelation of wave height, wave period, and wave direction. The model is also able to reproduce future wave climate time series accounting for possible changes in the global climate system. An application of long-term shoreline evolution is presented by comparing the performance of the real and the simulated wave climate using a one-line model. This article was corrected on 2 FEB 2016. See the end of the full text for details.
Virtual terrain: a security-based representation of a computer network
NASA Astrophysics Data System (ADS)
Holsopple, Jared; Yang, Shanchieh; Argauer, Brian
2008-03-01
Much research has been put forth towards the detection, correlation, and prediction of cyber attacks in recent years. As this body of research progresses, there is an increasing need for contextual information about a computer network to provide an accurate situational assessment. Typical approaches adopt contextual information as needed; yet such ad hoc effort may lead to unnecessary or even conflicting features. The concept of virtual terrain is, therefore, developed and investigated in this work. Virtual terrain is a common representation of crucial information about network vulnerabilities, accessibilities, and criticalities. A virtual terrain model encompasses operating systems, firewall rules, running services, missions, user accounts, and network connectivity. It is defined as connected graphs with arc attributes defining dynamic relationships among vertices modeling network entities, such as services, users, and machines. The virtual terrain representation is designed to allow feasible development and maintenance of the model, as well as efficacy in its use. This paper will describe the considerations in developing the virtual terrain schema, exemplary virtual terrain models, and algorithms utilizing the virtual terrain model for situation and threat assessment.
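The attributed-graph structure described in the abstract can be illustrated with a minimal sketch. All entity names, attributes, and the reachability check below are invented for illustration; they are not taken from the paper's actual schema:

```python
# Minimal sketch of a "virtual terrain": a directed graph whose vertices
# model network entities (machines, services, users) and whose arcs carry
# attributes describing the relationship (accessibility, privilege, etc.).
terrain = {
    "web_server": {"type": "machine", "os": "linux"},
    "http_svc":   {"type": "service", "port": 80},
    "db_server":  {"type": "machine", "os": "linux"},
    "admin_user": {"type": "user"},
}

# Arcs: (source, target) -> attribute dict describing the relationship.
arcs = {
    ("http_svc", "web_server"):  {"relation": "runs_on"},
    ("web_server", "db_server"): {"relation": "connects_to", "port": 5432},
    ("admin_user", "db_server"): {"relation": "account_on", "privilege": "root"},
}

def reachable(src, dst):
    """Simple reachability search over the arc set."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for (a, b) in arcs:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

# A threat-assessment query: can an exposed service reach the database host?
print(reachable("http_svc", "db_server"))
```

Queries of this kind (what an attacker at one vertex can reach, and through which arc attributes) are the sort of algorithmic use of the terrain model the paper alludes to.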
Single crystal substrates for surface acoustic wave devices
NASA Astrophysics Data System (ADS)
Barsch, G. R.; Spear, K. E.
1981-01-01
In order to search for new temperature-compensated materials for surface acoustic wave (SAW) devices with low ultrasonic attenuation and high electromechanical coupling, the following experimental and theoretical investigations were carried out: (1) Crystal growth research centered on designing, constructing, and writing the software for a computer-controlled constant-diameter attachment for our Czochralski crystal pullers; a major experimental effort on the growth of lead potassium niobate (PKN), Pb2KNb5O15, and lead bismuth niobate (PBN), PbBi2Nb2O9; and a minor experimental effort on the growth of lithium metasilicate, Li2SiO3, and bismuth molybdate, Bi2MoO6. (2) The dielectric constants and the associated loss tangents of alpha-berlinite were measured at eleven frequencies from 100 to 10,000 Hz between -150 and 200 C. The temperature dependence of the dielectric constants and the relaxation behavior are similar to the results obtained earlier, but the absolute values are 20 to 30 percent smaller than reported previously. (3) The temperature dependence of the two shear modes propagating in (001) was measured from 10 to 315 K for Bi4Ti3O12; a monotonic decrease of the associated shear moduli was found. (4) Considerable effort was devoted to specimen preparation of lead bismuth niobate, which was hampered by the easy cleavage of this material perpendicular to (001).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations, including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
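As a hedged illustration of the Monte Carlo approach to generating capacity reliability (this is a generic two-state outage sketch, not the GENESIS model; the unit data and load level are invented and none of the operating considerations listed above are modeled):

```python
import random

def lolp_monte_carlo(units, load, n_trials=100_000, seed=1):
    """Estimate loss-of-load probability for a fleet of two-state units.

    units: list of (capacity_MW, forced_outage_rate) pairs.
    In each trial a unit is unavailable with probability equal to its
    forced outage rate; loss of load occurs when available capacity
    falls below the load.
    """
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(n_trials):
        available = sum(cap for cap, foro in units if rng.random() > foro)
        if available < load:
            shortfalls += 1
    return shortfalls / n_trials

units = [(200, 0.05), (200, 0.05), (100, 0.02)]  # hypothetical fleet
print(lolp_monte_carlo(units, load=350))
```

A simulation of this shape is what allows operating considerations (start-up failures, duty cycles, postponable outages) to be layered in, at the cost of the computational effort the abstract notes.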
Buchanan, Elizabeth; Aycock, John; Dexter, Scott; Dittrich, David; Hvizdak, Erin
2011-06-01
This paper explores the growing concerns with computer science research, and in particular, computer security research and its relationship with the committees that review human subjects research. It offers cases that review boards are likely to confront, and provides a context for appropriate consideration of such research, as issues of bots, clouds, and worms enter the discourse of human subjects review.
Teaching Professionalism: Passing the Torch.
ERIC Educational Resources Information Center
Hensel, William A.; Dickey, Nancy W.
1998-01-01
Medical faculty must ensure that students understand the appropriate balance between financial and professional considerations. Faculty should place financial considerations in proper perspective and should teach the basic components of professionalism, how current cost-containment efforts may threaten medicine's professional status, appropriate…
Organizational strategies for protection against back contamination
NASA Technical Reports Server (NTRS)
Mahoney, T. A.
1976-01-01
The organizational issues pertaining to the prevention of inbound contamination associated with possible Viking missions to Mars are considered. The completed Apollo missions, which returned samples from the moon, provide a convenient base for analysis of inbound contamination issues. Despite concern over the threat of inbound contamination from the moon, the back-contamination efforts in the Apollo missions are judged to have been likely ineffective had those missions encountered living organisms. Several alternatives for the design of future programs dealing with back contamination are examined and proposed for special consideration.
Montague, P. Read; Dolan, Raymond J.; Friston, Karl J.; Dayan, Peter
2013-01-01
Computational ideas pervade many areas of science and have an integrative explanatory role in neuroscience and cognitive science. However, computational depictions of cognitive function have had surprisingly little impact on the way we assess mental illness because diseases of the mind have not been systematically conceptualized in computational terms. Here, we outline goals and nascent efforts in the new field of computational psychiatry, which seeks to characterize mental dysfunction in terms of aberrant computations over multiple scales. We highlight early efforts in this area that employ reinforcement learning and game theoretic frameworks to elucidate decision-making in health and disease. Looking forwards, we emphasize a need for theory development and large-scale computational phenotyping in human subjects. PMID:22177032
Edge-diffraction effects in RCS predictions and their importance in systems analysis
NASA Astrophysics Data System (ADS)
Friess, W. F.; Klement, D.; Ruppel, M.; Stein, Volker
1996-06-01
In developing RCS prediction codes, a variety of physical effects, such as edge diffraction, have to be considered, with the consequence that the computational effort increases considerably. This fact limits the field of application of such codes, especially if the RCS data serve as input parameters for system simulators, which very often need these data for a high number of observation angles and/or frequencies. Conversely, the results of a system analysis can be used to estimate the relevance of physical effects from a systems viewpoint and to rank them according to their magnitude. This paper evaluates the importance, for systems analysis, of RCS predictions that include an edge-diffracted field. A double dihedral with strongly depolarizing behavior and a generic airplane design containing many arbitrarily oriented edges are used as test structures. Data for the scattered field are generated by the RCS computer code SIGMA with and without edge diffraction effects. These data are submitted to the code DORA to determine radar range and radar detectability, and to a SAR simulator code to generate SAR imagery. In both cases special scenarios are assumed. The essential features of the computer codes in their current state are described, and the results are presented and discussed from a systems viewpoint.
NASA Astrophysics Data System (ADS)
Casartelli, E.; Mangani, L.; Ryan, O.; Schmid, A.
2016-11-01
CFD has been part of the product development process for hydraulic machines for more than three decades. Beside the actual design process, in which the most appropriate geometry for a certain task is iteratively sought, several steady-state simulations and related analyses are performed with the help of CFD. Basic transient CFD analysis is becoming more and more routine for rotor-stator interaction assessment, but in general unsteady CFD is still not standard due to the large computational effort. Especially for FSI simulations, where mesh motion is involved, a considerable amount of computational time is necessary for mesh handling and deformation as well as for resolving the related unsteady flow field. Therefore, this kind of CFD computation is still unusual and mostly performed during trouble-shooting analysis rather than in the standard development process, i.e., in order to understand what went wrong instead of preventing failure or, even better, increasing the available knowledge. In this paper, the application of an efficient and particularly robust algorithm for fast computations with moving meshes is presented for the analysis of transient effects encountered during highly dynamic procedures in the operation of a pump-turbine, such as runaway at fixed GV position and load rejection with GV motion imposed as one-way FSI. In both cases the computations extend through the S-shape of the machine into the turbine-brake and reverse-pump domains, showing that such exotic computations can be performed on a more regular basis, even if quite time-consuming. Beside the presentation of the procedure and global results, some highlights of the encountered flow physics are also given.
NASA Astrophysics Data System (ADS)
Stout, Jane G.; Blaney, Jennifer M.
2017-10-01
Research suggests growth mindset, or the belief that knowledge is acquired through effort, may enhance women's sense of belonging in male-dominated disciplines, like computing. However, other research indicates women who spend a great deal of time and energy in technical fields experience a low sense of belonging. The current study assessed the benefits of a growth mindset on women's (and men's) sense of intellectual belonging in computing, accounting for the amount of time and effort dedicated to academics. We define "intellectual belonging" as the sense that one is believed to be a competent member of the community. Whereas a stronger growth mindset was associated with stronger intellectual belonging for men, a growth mindset only boosted women's intellectual belonging when they did not work hard on academics. Our findings suggest, paradoxically, women may not benefit from a growth mindset in computing when they exert a lot of effort.
Vassena, Eliana; Deraeve, James; Alexander, William H
2017-10-01
Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. 
Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO model based on hierarchical error prediction, developed to explain MPFC-DLPFC interactions. We derive behavioral predictions that describe how effort and reward information is coded in PFC and how changing the configuration of such environmental information might affect decision-making and task performance involving motivation.
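The prediction-and-prediction-error principle underlying the PRO framework can be sketched in a few lines. This is a generic delta-rule update for illustration only, not the published model's equations; the learning rate and the effort-cost framing are invented:

```python
def update_prediction(v, outcome, alpha=0.1):
    """Delta-rule update: the prediction error (outcome - v) drives learning."""
    delta = outcome - v          # discrepancy between outcome and expectation
    return v + alpha * delta, delta

# Learning the (normalized) effort cost of a repeatedly experienced task:
# the prediction converges toward the experienced outcome, and the
# prediction error shrinks with each repetition.
v = 0.0
for _ in range(50):
    v, delta = update_prediction(v, outcome=1.0)
print(round(v, 3), round(delta, 3))
```

In the translation the authors propose, MPFC activity would track quantities like `delta` here, computed over motivationally relevant variables such as effort and reward rather than a single scalar outcome.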
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
Human factors considerations for integrating traffic information on airport moving maps.
DOT National Transportation Integrated Search
2011-05-01
The purpose of this research effort was to identify human factors considerations in the integration of traffic information and surface indications and alerts for runway status on airport moving maps for flight deck displays. The information is primar...
Cost Considerations in Nonlinear Finite-Element Computing
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.
1985-01-01
This conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architecture types in terms of relative cost and computing time.
New Mexico district work-effort analysis computer program
Hiss, W.L.; Trantolo, A.P.; Sparks, J.L.
1972-01-01
The computer program (CAN 2) described in this report is one of several related programs used in the New Mexico District cost-analysis system. The work-effort information used in these programs is accumulated and entered to the nearest hour on forms completed by each employee. Tabulating cards are punched directly from these forms after visual examinations for errors are made. Reports containing detailed work-effort data itemized by employee within each project and account and by account and project for each employee are prepared for both current-month and year-to-date periods by the CAN 2 computer program. An option allowing preparation of reports for a specified 3-month period is provided. The total number of hours worked on each account and project and a grand total of hours worked in the New Mexico District is computed and presented in a summary report for each period. Work effort not chargeable directly to individual projects or accounts is considered as overhead and can be apportioned to the individual accounts and projects on the basis of the ratio of the total hours of work effort for the individual accounts or projects to the total New Mexico District work effort at the option of the user. The hours of work performed by a particular section, such as General Investigations or Surface Water, are prorated and charged to the projects or accounts within the particular section. A number of surveillance or buffer accounts are employed to account for the hours worked on special events or on those parts of large projects or accounts that require a more detailed analysis. Any part of the New Mexico District operation can be separated and analyzed in detail by establishing an appropriate buffer account. With the exception of statements associated with word size, the computer program is written in FORTRAN IV in a relatively low and standard language level to facilitate its use on different digital computers. 
The program has been run only on a Control Data Corporation 6600 computer system. Central processing computer time has seldom exceeded 5 minutes on the longest year-to-date runs.
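The overhead-apportionment rule described above (overhead distributed to projects in proportion to each project's share of total direct hours) can be sketched as follows. The project names and hour counts are invented; this is an illustration of the stated ratio rule, not the CAN 2 program itself:

```python
def apportion_overhead(direct_hours, overhead_hours):
    """Charge overhead to each project in proportion to its share of
    total direct work-effort hours."""
    total = sum(direct_hours.values())
    return {project: hours + overhead_hours * hours / total
            for project, hours in direct_hours.items()}

# Hypothetical month: two projects plus 40 hours of overhead.
direct = {"surface_water": 300, "general_investigations": 100}
charged = apportion_overhead(direct, overhead_hours=40)
print(charged)  # {'surface_water': 330.0, 'general_investigations': 110.0}
```

Note that the apportioned totals sum to direct plus overhead hours, so no work effort is lost or double-counted.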
Design Considerations For Imaging Charge-Coupled Device (ICCD) Star Sensors
NASA Astrophysics Data System (ADS)
McAloon, K. J.
1981-04-01
A development program is currently underway to produce a precision star sensor using imaging charge coupled device (ICCD) technology. The effort is the critical component development phase for the Air Force Multi-Mission Attitude Determination and Autonomous Navigation System (MADAN). A number of unique considerations have evolved in designing an arcsecond accuracy sensor around an ICCD detector. Three tiers of performance criteria are involved: at the spacecraft attitude determination system level, at the star sensor level, and at the detector level. Optimum attitude determination system performance involves a tradeoff between Kalman filter iteration time and sensor ICCD integration time. The ICCD star sensor lends itself to the use of a new approach in the functional interface between the attitude determination system and the sensor. At the sensor level image data processing tradeoffs are important for optimum sensor performance. These tradeoffs involve the sensor optic configuration, the optical point spread function (PSF) size and shape, the PSF position locator, and the microprocessor locator algorithm. Performance modelling of the sensor mandates the use of computer simulation programs. Five key performance parameters at the ICCD detector level are defined. ICCD error characteristics have also been isolated to five key parameters.
Case for real-time systems development - Quo vadis?
NASA Technical Reports Server (NTRS)
Erb, Dona M.
1991-01-01
The paper focuses on the distinctive issues of computer-aided software engineering (CASE) products for the development of real-time systems. CASE technologies and associated standardization efforts are evolving from sets of conflicting interests. The majority of CASE products are intended for use in the development of management information systems; CASE products to support the development of large, complex real-time systems must provide additional capabilities. Generic concerns include the quality of the implementation of the required method for the phase of the system's development, and whether the vendor is stable and committed to evolving the products in parallel with nonproprietary standards. The CASE market is undergoing considerable consolidation. The paper describes the major forces, cooperating entities, and remaining uncertainties that need to be weighed in near-term CASE procurements to limit the risk of losing the investment in project time, training, and money.
Garza, Sergio
1982-01-01
Two-dimensional digital-computer models were developed for aquifer simulation of steady and transient conditions in which the density effects of salt water are considered. The models were used to project the effects of the 100-year impoundment of salt water in Kiowa Peak Lake and Croton Lake on the freshwater system. Rises in aquifer head of 10 to 50 feet are projected only for areas near each dam and along each lake shoreline. The maximum migration of salt water downstream from each dam is projected to be about 1 mile. The modeling efforts in this study did not include the effects of hydrodynamic dispersion or consideration of possible changes in the hydraulic conductivity of the aquifer due to physical and chemical interactions in the salt-water and fresh-water environments.
Automated Monitoring and Analysis of Social Behavior in Drosophila
Dankert, Heiko; Wang, Liming; Hoopfer, Eric D.; Anderson, David J.; Perona, Pietro
2009-01-01
We introduce a method based on machine vision for automatically measuring aggression and courtship in Drosophila melanogaster. The genetic and neural circuit bases of these innate social behaviors are poorly understood. High-throughput behavioral screening in this genetically tractable model organism is a potentially powerful approach, but it is currently very laborious. Our system monitors interacting pairs of flies, and computes their location, orientation and wing posture. These features are used for detecting behaviors exhibited during aggression and courtship. Among these, wing threat, lunging and tussling are specific to aggression; circling, wing extension (courtship “song”) and copulation are specific to courtship; locomotion and chasing are common to both. Ethograms may be constructed automatically from these measurements, saving considerable time and effort. This technology should enable large-scale screens for genes and neural circuits controlling courtship and aggression. PMID:19270697
A hierarchy for modeling high speed propulsion systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Deabreu, Alex
1991-01-01
General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery such as inlets, ramjets, and scramjets. The discussion is separated into four sections: (1) computational fluid dynamics model for the entire nonlinear system or high order nonlinear models; (2) high order linearized model derived from fundamental physics; (3) low order linear models obtained from other high order models; and (4) low order nonlinear models. Included are special considerations on any relevant control system designs. The methods discussed are for the quasi-one dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, moving normal shocks, hammershocks, subsonic combustion via heat addition, temperature dependent gases, detonation, and thermal choking.
Model Development for VDE Computations in NIMROD
NASA Astrophysics Data System (ADS)
Bunkers, K. J.; Sovinec, C. R.
2017-10-01
Vertical displacement events (VDEs) and the disruptions associated with them have potential for causing considerable physical damage to ITER and other tokamak experiments. We report on simulations of generic axisymmetric VDEs and a vertically unstable case from Alcator C-MOD using the NIMROD code. Previous calculations have been done with closures for heat flux and viscous stress. Initial calculations show that halo current width is dependent on temperature boundary conditions, and so transport together with plasma-surface interaction may play a role in determining halo currents in experiments. The behavior of VDEs with Braginskii thermal conductivity and viscosity closures and Spitzer-like resistivity are investigated for both the generic axisymmetric VDE case and the C-MOD case. This effort is supported by the U.S. Dept. of Energy, Award Numbers DE-FG02-06ER54850 and DE-FC02-08ER54975.
Using MODIS Terra 250 m Imagery to Map Concentrations of Total Suspended Matter in Coastal Waters
NASA Technical Reports Server (NTRS)
Miller, Richard L.; McKee, Brent A.
2004-01-01
High concentrations of suspended particulate matter in coastal waters directly affect or govern numerous water-column and benthic processes. The concentration of suspended sediments derived from bottom-sediment resuspension or the discharge of sediment-laden rivers is highly variable over a wide range of time and space scales. Although there has been considerable effort to use remotely sensed images to provide synoptic maps of suspended particulate matter, routine applications of this technology are limited, due in part to the low spatial resolution, long revisit period, or cost of most remotely sensed data. In contrast, near-daily coverage of medium-resolution data is available from the MODIS Terra instrument without charge from several data distribution gateways. Equally important, several display and processing programs are available that run on low-cost computers.
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
Earth Science Technology Office's Computational Technologies Project
NASA Technical Reports Server (NTRS)
Fischer, James (Technical Monitor); Merkey, Phillip
2005-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies (CT) Project and to engage the Beowulf cluster computing community as well as the high-performance computing research community, so that the applicability of these technologies to the scientific community represented by the CT Project could be predicted and long-term strategies formulated to provide the computational resources necessary to attain the Project's anticipated scientific objectives. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements, and the capabilities of high-performance computers to satisfy this anticipated need.
Racial Differences in the Prediction of Class ’A’ School Grades
1975-06-01
is the latest in a series of efforts to provide the educationally disadvantaged with an opportunity for technical training in...to find new ways to measure the talents of the educationally disadvantaged and train them in an appropriate rating. A recent effort looked at...has expended considerable research effort attempting to increase the number of educationally disadvantaged personnel selected for technical
Computational Methods for Stability and Control (COMSAC): The Time Has Come
NASA Technical Reports Server (NTRS)
Hall, Robert M.; Biedron, Robert T.; Ball, Douglas N.; Bogue, David R.; Chung, James; Green, Bradford E.; Grismer, Matthew J.; Brooks, Gregory P.; Chambers, Joseph R.
2005-01-01
Powerful computational fluid dynamics (CFD) tools have emerged that appear to offer significant benefits as an adjunct to the experimental methods used by the stability and control community to predict aerodynamic parameters. The decreasing cost and increasing availability of computing hours are making these applications increasingly viable. This paper summarizes the efforts of four organizations to utilize high-end CFD tools to address the challenges of the stability and control arena. The general motivation and backdrop for these efforts are summarized, along with examples of current applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
A compendium of computational fluid dynamics at the Langley Research Center
NASA Technical Reports Server (NTRS)
1980-01-01
Through numerous summary examples, the scope and general nature of the computational fluid dynamics (CFD) effort at Langley are identified. These summaries will help inform researchers in CFD and line management at Langley of the overall effort. In addition to the in-house efforts, out-of-house CFD work supported by Langley through industrial contracts and university grants is included. Researchers were encouraged to include summaries of work in preliminary and tentative states of development as well as current research approaching definitive results.
Chivers, Laura L.; Higgins, Stephen T.
2016-01-01
Background Behavioral economics research has revealed systematic biases in decision making that merit consideration in efforts to promote money management skills among those with substance use disorders (SUDs). Objectives The objective of this article was to briefly review the literature on five of those biases (i.e., hyperbolic delay discounting, defaults and preference for the status quo, loss aversion, mental accounting, and failure to account for opportunity cost) that may have particular relevance to the topic of money management. Methods Selected studies are reviewed to illustrate these biases and how they may relate to efforts to promote money management skills among those with substance use disorders. Studies were identified by searching PubMed using the terms “behavioral economics” and “substance use disorders”, reviewing bibliographies of published articles, and discussions with colleagues. Results Only one of these biases (i.e., hyperbolic delay discounting) has been investigated extensively among those with SUDs. Indeed, it has been found to be sufficiently prevalent among those with SUDs to be considered as a potential risk factor for those disorders and certainly merits careful consideration in efforts to improve money management skills in that population. There has been relatively little empirical research reported regarding the other biases among those with SUDs, although they appear to be sufficiently fundamental to human behavior and relevant to the topic of money management (e.g., loss aversion) to also merit consideration. There is precedent of effective leveraging of behavioral economics principles in treatment development for SUDs (e.g., contingency management), including at least one intervention that explicitly focuses on money management (i.e., advisor–teller money management therapy). 
Conclusions and Scientific Significance The consideration of the systematic biases in human decision making that have been revealed in behavioral economics research has the potential to enhance efforts to devise effective strategies for improving money management skills among those with SUDs. PMID:22211484
Chivers, Laura L; Higgins, Stephen T
2012-01-01
Behavioral economics research has revealed systematic biases in decision making that merit consideration in efforts to promote money management skills among those with substance use disorders (SUDs). The objective of this article was to briefly review the literature on five of those biases (i.e., hyperbolic delay discounting, defaults and preference for the status quo, loss aversion, mental accounting, and failure to account for opportunity cost) that may have particular relevance to the topic of money management. Selected studies are reviewed to illustrate these biases and how they may relate to efforts to promote money management skills among those with substance use disorders. Studies were identified by searching PubMed using the terms "behavioral economics" and "substance use disorders", reviewing bibliographies of published articles, and discussions with colleagues. Only one of these biases (i.e., hyperbolic delay discounting) has been investigated extensively among those with SUDs. Indeed, it has been found to be sufficiently prevalent among those with SUDs to be considered as a potential risk factor for those disorders and certainly merits careful consideration in efforts to improve money management skills in that population. There has been relatively little empirical research reported regarding the other biases among those with SUDs, although they appear to be sufficiently fundamental to human behavior and relevant to the topic of money management (e.g., loss aversion) to also merit consideration. There is precedent of effective leveraging of behavioral economics principles in treatment development for SUDs (e.g., contingency management), including at least one intervention that explicitly focuses on money management (i.e., advisor-teller money management therapy). 
The consideration of the systematic biases in human decision making that have been revealed in behavioral economics research has the potential to enhance efforts to devise effective strategies for improving money management skills among those with SUDs.
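The hyperbolic delay discounting discussed above has a standard one-parameter form, V = A / (1 + kD): the subjective value V of an amount A falls with delay D at a rate governed by k. A minimal sketch (the dollar amounts and k values below are illustrative, not data from the article) shows how a steeper discounting rate, of the kind reported among those with SUDs, flips preference toward the immediate reward:

```python
def hyperbolic_value(amount: float, delay: float, k: float) -> float:
    """Subjective present value of `amount` delivered after `delay` time units,
    per the hyperbolic model V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

# Compare a shallow and a steep discounter choosing between $40 now
# and $100 in 30 days (all values hypothetical).
for k in (0.01, 0.25):
    delayed = hyperbolic_value(100.0, 30.0, k)
    choice = "waits for $100" if delayed > 40.0 else "takes $40 now"
    print(f"k={k}: delayed reward worth ${delayed:.2f} today -> {choice}")
```

The steep discounter devalues the delayed $100 to well under $40, illustrating why high k is treated as a risk factor for impulsive money-management choices.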
New opportunities for quality enhancing of images captured by passive THz camera
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2014-10-01
As is well known, a passive THz camera can reveal concealed objects without contact and poses no danger to the person being screened. The efficiency of a passive THz camera clearly depends on its temperature resolution, which determines the limits of concealed-object detection: the minimal detectable object size, the maximal detection distance, and the image quality. Computer processing of THz images can improve image quality many times over without any additional engineering effort, so the development of modern computer codes for THz image processing is an urgent problem. With appropriate new methods, one may expect a temperature resolution that allows a banknote in a person's pocket to be seen without any physical contact. Modern algorithms for computer processing of THz images also make it possible to see an object inside the human body through its temperature trace on the skin. This circumstance substantially enhances the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the currently achievable detection of both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for the computer processing of the THz images considered in this paper were developed by the Russian members of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.
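The abstract does not specify its processing algorithms, but a generic illustration of how post-processing can raise apparent temperature contrast is a percentile-based contrast stretch. The sketch below (toy data, plain 2-D lists standing in for a thermal image) is an assumption-laden stand-in, not the authors' method:

```python
def contrast_stretch(img, lo_pct=2.0, hi_pct=98.0):
    """Clip pixel 'temperatures' to the [lo_pct, hi_pct] percentile range
    and rescale to [0, 1], boosting contrast of small warm anomalies."""
    flat = sorted(v for row in img for v in row)
    lo = flat[int(len(flat) * lo_pct / 100.0)]
    hi = flat[min(len(flat) - 1, int(len(flat) * hi_pct / 100.0))]
    span = (hi - lo) or 1.0  # avoid division by zero on flat images
    return [[min(max((v - lo) / span, 0.0), 1.0) for v in row] for row in img]

img = [[20.0, 20.1, 20.2],
       [20.1, 25.0, 20.2],   # a slightly warmer concealed object
       [20.0, 20.1, 20.3]]
print(contrast_stretch(img))
```

After the stretch, the warm pixel sits at the top of the output range while the near-uniform background stays near zero, which is the qualitative effect such processing aims for.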
Motivational Beliefs, Student Effort, and Feedback Behaviour in Computer-Based Formative Assessment
ERIC Educational Resources Information Center
Timmers, Caroline F.; Braber-van den Broek, Jannie; van den Berg, Stephanie M.
2013-01-01
Feedback can only be effective when students seek feedback and process it. This study examines the relations between students' motivational beliefs, effort invested in a computer-based formative assessment, and feedback behaviour. Feedback behaviour is represented by whether a student seeks feedback and the time a student spends studying the…
Establishing a K-12 Circuit Design Program
ERIC Educational Resources Information Center
Inceoglu, Mustafa M.
2010-01-01
Outreach, as defined by Wikipedia, is an effort by an organization or group to connect its ideas or practices to the efforts of other organizations, groups, specific audiences, or the general public. This paper describes a computer engineering outreach project of the Department of Computer Engineering at Ege University, Izmir, Turkey, to a local…
ERIC Educational Resources Information Center
Sexton, Randall; Hignite, Michael; Margavio, Thomas M.; Margavio, Geanie W.
2009-01-01
Information Literacy is a concept that evolved as a result of efforts to move technology-based instructional and research efforts beyond the concepts previously associated with "computer literacy." While computer literacy was largely a topic devoted to knowledge of hardware and software, information literacy is concerned with students' abilities…
AlaScan: A Graphical User Interface for Alanine Scanning Free-Energy Calculations.
Ramadoss, Vijayaraj; Dehez, François; Chipot, Christophe
2016-06-27
Computation of the free-energy changes that underlie molecular recognition and association has gained significant importance due to its considerable potential in drug discovery. The massive increase of computational power in recent years substantiates the application of more accurate theoretical methods for the calculation of binding free energies. One impact of such advances is the application of derivative approaches, like computational alanine scanning, to investigate in silico the effect of amino-acid replacement in protein-ligand and protein-protein complexes, or to probe the thermostability of individual proteins. Because human effort represents a significant cost that precludes the routine use of this form of free-energy calculation, minimizing manual intervention constitutes a stringent prerequisite for any such systematic computation. With this objective in mind, we propose a new plug-in, referred to as AlaScan, developed within the popular visualization program VMD to automate the major steps in alanine-scanning calculations, employing free-energy perturbation as implemented in the widely used molecular dynamics code NAMD. The AlaScan plug-in can be utilized upstream, to prepare input files for selected alanine mutations. It can also be utilized downstream to perform the analysis of different alanine-scanning calculations and to report the free-energy estimates in a user-friendly graphical user interface, allowing favorable mutations to be identified at a glance. The plug-in also assists the end-user in assessing the reliability of the calculation through rapid visual inspection.
Oligomerization of G protein-coupled receptors: computational methods.
Selent, J; Kaczor, A A
2011-01-01
Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role for predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies as well as discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not always yield consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method especially when guided by experimental constraints. Some disadvantages like limited receptor flexibility and non-consideration of the membrane environment have to be taken into account. Molecular dynamics simulation can overcome these drawbacks giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes for fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug targets for diverse diseases, unveiling molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.
ELECTROFISHING DISTANCE AND NUMBER OF SPECIES COLLECTED FROM THREE RAFTABLE WESTERN RIVERS
A key issue in assessing a fish assemblage at a site is determining a sufficient sampling effort to adequately represent the species in an assemblage. Inadequate effort produces considerable noise in multiple samples at the site or under-represents the species present. Excessiv...
Statistical considerations in monitoring birds over large areas
Johnson, D.H.
2000-01-01
The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.
Systematic assignment of thermodynamic constraints in metabolic network models
Kümmel, Anne; Panke, Sven; Heinemann, Matthias
2006-01-01
Background The availability of genome sequences for many organisms enabled the reconstruction of several genome-scale metabolic network models. Currently, significant efforts are put into the automated reconstruction of such models. For this, several computational tools have been developed that particularly assist in identifying and compiling the organism-specific lists of metabolic reactions. In contrast, the last step of the model reconstruction process, which is the definition of the thermodynamic constraints in terms of reaction directionalities, still needs to be done manually. No computational method exists that allows for an automated and systematic assignment of reaction directions in genome-scale models. Results We present an algorithm that – based on thermodynamics, network topology and heuristic rules – automatically assigns reaction directions in metabolic models such that the reaction network is thermodynamically feasible with respect to the production of energy equivalents. It first exploits all available experimentally derived Gibbs energies of formation to identify irreversible reactions. As these thermodynamic data are not available for all metabolites, in a next step, further reaction directions are assigned on the basis of network topology considerations and thermodynamics-based heuristic rules. Briefly, the algorithm identifies reaction subsets from the metabolic network that are able to convert low-energy co-substrates into their high-energy counterparts and thus net produce energy. Our algorithm aims at disabling such thermodynamically infeasible cyclic operation of reaction subnetworks by assigning reaction directions based on a set of thermodynamics-derived heuristic rules. We demonstrate our algorithm on a genome-scale metabolic model of E. coli. 
The introduced systematic direction assignment yielded 130 irreversible reactions (out of 920 total reactions), which corresponds to about 70% of all irreversible reactions that are required to disable thermodynamically infeasible energy production. Conclusion Although not being fully comprehensive, our algorithm for systematic reaction direction assignment could define a significant number of irreversible reactions automatically with low computational effort. We envision that the presented algorithm is a valuable part of a computational framework that assists the automated reconstruction of genome-scale metabolic models. PMID:17123434
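The first step of the algorithm described above, flagging reactions as irreversible from experimentally derived Gibbs energies of formation, can be sketched as follows. The metabolite names, formation energies (kJ/mol), and the 20 kJ/mol decision threshold below are illustrative assumptions, not values from the paper:

```python
# Hypothetical transformed Gibbs energies of formation, kJ/mol.
DFG = {"glucose": -426.7, "g6p": -1318.9, "atp": -2292.6, "adp": -1425.8}

def reaction_gibbs(stoich, dfg):
    """Delta_r G from stoichiometric coefficients (negative = substrate)."""
    return sum(coef * dfg[met] for met, coef in stoich.items())

def assign_direction(stoich, dfg, threshold=20.0):
    """Flag a reaction irreversible when Delta_r G is decisively signed;
    borderline cases are left to the topology-based heuristics that the
    algorithm applies in later steps."""
    dg = reaction_gibbs(stoich, dfg)
    if dg < -threshold:
        return "irreversible forward"
    if dg > threshold:
        return "irreversible backward"
    return "reversible"

# Hexokinase-like reaction: glucose + atp -> g6p + adp
hk = {"glucose": -1, "atp": -1, "g6p": 1, "adp": 1}
print(reaction_gibbs(hk, DFG), assign_direction(hk, DFG))
```

The heuristic later steps (disabling subnetworks that net-produce energy equivalents) operate on the directions this first pass leaves unresolved.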
Impact of remote sensing upon the planning, management, and development of water resources
NASA Technical Reports Server (NTRS)
Loats, H. L.; Fowler, T. R.; Frech, S. L.
1974-01-01
A survey of the principal water resource users was conducted to determine the impact of new remote data streams on hydrologic computer models. The analysis of the responses and direct contact demonstrated that: (1) the majority of water resource effort of the type suitable to remote sensing inputs is conducted by major federal water resources agencies or through federally stimulated research, (2) the federal government develops most of the hydrologic models used in this effort; and (3) federal computer power is extensive. The computers, computer power, and hydrologic models in current use were determined.
An opportunity cost model of subjective effort and task performance
Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus
2013-01-01
Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating a hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least a factor of eight. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method requires only a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
Computational Models of Anterior Cingulate Cortex: At the Crossroads between Prediction and Effort.
Vassena, Eliana; Holroyd, Clay B; Alexander, William H
2017-01-01
In the last two decades the anterior cingulate cortex (ACC) has become one of the most investigated areas of the brain. Extensive neuroimaging evidence suggests countless functions for this region, ranging from conflict and error coding, to social cognition, pain and effortful control. In response to this burgeoning amount of data, a proliferation of computational models has tried to characterize the neurocognitive architecture of ACC. Early seminal models provided a computational explanation for a relatively circumscribed set of empirical findings, mainly accounting for EEG and fMRI evidence. More recent models have focused on ACC's contribution to effortful control. In parallel to these developments, several proposals attempted to explain within a single computational framework a wider variety of empirical findings that span different cognitive processes and experimental modalities. Here we critically evaluate these modeling attempts, highlighting the continued need to reconcile the array of disparate ACC observations within a coherent, unifying framework.
The use of the analytic hierarchy process to aid decision making in acquired equinovarus deformity.
van Til, Janine A; Renzenbrink, Gerbert J; Dolan, James G; Ijzerman, Maarten J
2008-03-01
To increase the transparency of decision making about treatment in patients with equinovarus deformity poststroke. The analytic hierarchy process (AHP) was used as a structured methodology to study the subjective rationale behind choice of treatment. An 8-hour meeting at a centrally located rehabilitation center in The Netherlands, during which a patient video was shown to all participants (using a personal computer and a large screen) and the patient details were provided on paper. A panel of 10 health professionals from different backgrounds. Not applicable. The performance of the applicable treatments on outcome, impact, comfort, cosmetics, daily effort, and risks and side effects of treatment, as well as the relative importance of criteria in the choice of treatment. According to the model, soft-tissue surgery (.413) ranked first as the preferred treatment, followed by orthopedic footwear (.181), ankle-foot orthosis (.147), surface electrostimulation (.137), and finally implanted electrostimulation (.123). Outcome was the most influential consideration affecting treatment choice (.509), followed by risk and side effects (.194), comfort (.104), daily effort (.098), cosmetics (.065), and impact of treatment (.030). Soft-tissue surgery was judged best on outcome, daily effort, comfortable shoe wear, and cosmetically acceptable result and was thereby preferred as a treatment alternative by the panel in this study. In contrast, orthosis and orthopedic footwear are usually preferred in daily practice. The AHP method was found to be suitable methodology for eliciting subjective opinions and quantitatively comparing treatments in the absence of scientific evidence.
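The AHP priorities reported above (e.g., outcome .509, risks .194) are derived from pairwise comparisons of criteria on Saaty's 1-9 scale. A common closed-form approximation normalizes the geometric means of the comparison-matrix rows. The 3x3 matrix below is a hypothetical illustration, not the panel's actual data:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights: normalized row geometric means
    of a reciprocal pairwise-comparison matrix."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# a[i][j] = how much more important criterion i is than criterion j
# (hypothetical judgments for three of the study's six criteria).
pairwise = [
    [1.0, 3.0, 5.0],    # outcome
    [1/3, 1.0, 2.0],    # risks and side effects
    [1/5, 1/2, 1.0],    # comfort
]
print([round(w, 3) for w in ahp_weights(pairwise)])
```

As in the study, the criterion judged dominant in every pairwise comparison (here, outcome) receives by far the largest weight, which then multiplies through to rank the treatment alternatives.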
Earth and Space Sciences Project Services for NASA HPCC
NASA Technical Reports Server (NTRS)
Merkey, Phillip
2002-01-01
This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.
Automated Estimation Of Software-Development Costs
NASA Technical Reports Server (NTRS)
Roush, George B.; Reini, William
1993-01-01
COSTMODL is an automated software-development cost-estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.
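The abstract does not give COSTMODL's internal model, but tools of this family descend from Boehm's COCOMO, which estimates effort and schedule as power laws of delivered source size. The sketch below uses Boehm's published basic "organic mode" coefficients; treating them as representative of COSTMODL is an assumption:

```python
def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO (organic mode defaults): effort in person-months,
    schedule in calendar months, and implied average staffing."""
    effort = a * kloc ** b
    schedule = c * effort ** d
    staffing = effort / schedule
    return effort, schedule, staffing

effort, schedule, staff = cocomo_basic(32.0)  # a hypothetical 32 KLOC product
print(f"{effort:.1f} person-months over {schedule:.1f} months, "
      f"avg staff {staff:.1f}")
```

The superlinear exponent b > 1 is what drives the "risk of cost overruns" the abstract mentions: doubling product size more than doubles estimated effort.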
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-15
... perchlorate and carcinogenic volatile organic compounds (VOCs). While the Agency is in the very preliminary stages of developing the regulatory efforts for perchlorate and carcinogenic VOCs, EPA plans to discuss..., Regulatory Determinations 3, perchlorate, and carcinogenic VOCs rulemaking efforts. Date and Location: The...
Content Area Textbooks. Reading Education Report No. 23.
ERIC Educational Resources Information Center
Armbruster, Bonnie B.; Anderson, Thomas H.
Focusing on what authors can do to facilitate learning from content area textbooks, this report labels authors as "considerate," providing text that readers can understand with a minimum of cognitive effort, or as "inconsiderate," creating text that requires a conscientious, highly skilled effort if readers are to comprehend…
Toward understanding the ecological impact of transportation corridors
Victoria J. Bennett; Winston P. Smith; Matthew G. Betts
2011-01-01
Transportation corridors (notably roads) affect wildlife habitat, populations, and entire ecosystems. Considerable effort has been expended to quantify direct effects of roads on wildlife populations and ecological communities and processes. Much less effort has been expended toward quantifying indirect effects. In this report, we provide a comprehensive review of road...
The President's Role in Advancing Civic Engagement: The Widener-Chester Partnership
ERIC Educational Resources Information Center
Harris, James T., III
2009-01-01
Efforts by metropolitan universities to engage in meaningful and democratic partnerships with community organizations require much time, effort, and considerable resources from the university and its various constituents. Widener University is located in a distressed urban environment. This study, presented from the perspective of the university's…
Public Schools as Partners in Rural Development: Considerations for Policymakers.
ERIC Educational Resources Information Center
Harmon, Hobart L.
This paper describes four considerations for policymakers who wish to have public schools serve as viable partners in the rural development efforts of their communities. First, schools are a community resource. When rural students are given opportunities to engage in community-based learning, they develop responsible citizenship and leadership…
Rocky Mountain Research Station USDA Forest Service
2004-01-01
Effective public education and communication campaigns about wildland fire and fuels management should have clear objectives, and use the right techniques to achieve these objectives. This fact sheet lists seven important considerations for planning or implementing a hazard communication effort.
Brian R. Lockhart; Emile S. Gardiner; Theodore D. Leininger; Kristina F. Connor; Paul B. Hamel; Nathan M. Schiff; A. Dan Wilson; Margaret S. Devall
2006-01-01
Bottomland hardwood ecosystems, important for their unique functions and values, have experienced considerable degradation since European settlement through deforestation, development, and drainage. Currently, considerable effort is underway to restore ecological functions on degraded bottomland sites. Restoration requires a better understanding of the biological...
Practical Considerations in Evaluating Patient/Consumer Health Education Programs.
ERIC Educational Resources Information Center
Bryant, Nancy H.
This report contains brief descriptions of seven evaluative efforts and outcomes of health education programs, some considerations of problems encountered in evaluating the programs, and detailed descriptions of two case studies: (1) a process evaluation of preoperative teaching and (2) a retrospective study of visiting nurse association use by…
2013-01-01
Background Information and communication technologies (ICTs) are often proposed as ‘technological fixes’ for problems facing healthcare. They promise to deliver services more quickly and cheaply. Yet research on the implementation of ICTs reveals a litany of delays, compromises and failures. Case studies have established that these technologies are difficult to embed in everyday healthcare. Methods We undertook an ethnographic comparative analysis of a single computer decision support system in three different settings to understand the implementation and everyday use of this technology which is designed to deal with calls to emergency and urgent care services. We examined the deployment of this technology in an established 999 ambulance call-handling service, a new single point of access for urgent care and an established general practice out-of-hours service. We used Normalization Process Theory as a framework to enable systematic cross-case analysis. Results Our data comprise nearly 500 hours of observation, interviews with 64 call-handlers, and stakeholders and documents about the technology and settings. The technology has been implemented and is used distinctively in each setting reflecting important differences between work and contexts. Using Normalisation Process Theory we show how the work (collective action) of implementing the system and maintaining its routine use was enabled by a range of actors who established coherence for the technology, secured buy-in (cognitive participation) and engaged in on-going appraisal and adjustment (reflexive monitoring). Conclusions Huge effort was expended and continues to be required to implement and keep this technology in use. This innovation must be understood both as a computer technology and as a set of practices related to that technology, kept in place by a network of actors in particular contexts. 
While technologies can be ‘made to work’ in different settings, successful implementation has been achieved, and will only be maintained, through the efforts of those involved in the specific settings and if the wider context continues to support the coherence, cognitive participation, and reflective monitoring processes that surround this collective action. Implementation is more than simply putting technologies in place – it requires new resources and considerable effort, perhaps on an on-going basis. PMID:23522021
Thille, Arnaud W.; Lyazidi, Aissam; Richard, Jean-Christophe M.; Galia, Fabrice; Brochard, Laurent
2009-01-01
Objective To compare 13 commercially available, new-generation, intensive-care-unit (ICU) ventilators regarding trigger function, pressurization capacity during pressure-support ventilation (PSV), accuracy of pressure measurements and expiratory resistance. Design and Setting Bench study at a research laboratory in a university hospital. Material Four turbine-based ventilators and nine conventional servo-valve compressed-gas ventilators were tested using a two-compartment lung model. Results Three levels of effort were simulated. Each ventilator was evaluated at four PSV levels (5, 10, 15, and 20 cm H2O), with and without positive end-expiratory pressure (5 cm H2O). Trigger function was assessed as the time from effort onset to detectable pressurization. Pressurization capacity was evaluated using the airway pressure-time product computed as the net area under the pressure-time curve over the first 0.3 s after inspiratory effort onset. Expiratory resistance was evaluated by measuring trapped volume in controlled ventilation. Significant differences were found across the ventilators, with triggering delays ranging from 42 ms to 88 ms for all conditions averaged (P<.001). Under difficult conditions, the triggering delay was longer than 100 ms and the pressurization was poor with five ventilators at PSV5 and three at PSV10, suggesting an inability to unload the patient's effort. On average, turbine-based ventilators performed better than conventional ventilators, which showed no improvement compared to a 2000 bench comparison. Conclusion Technical performances of trigger function, pressurization capacity and expiratory resistance vary considerably across new-generation ICU ventilators. ICU ventilators seem to have reached a technical ceiling in recent years, and some ventilators still perform inadequately. PMID:19352622
Thille, Arnaud W; Lyazidi, Aissam; Richard, Jean-Christophe M; Galia, Fabrice; Brochard, Laurent
2009-08-01
To compare 13 commercially available, new-generation, intensive-care-unit (ICU) ventilators in terms of trigger function, pressurization capacity during pressure-support ventilation (PSV), accuracy of pressure measurements, and expiratory resistance. Bench study at a research laboratory in a university hospital. Four turbine-based ventilators and nine conventional servo-valve compressed-gas ventilators were tested using a two-compartment lung model. Three levels of effort were simulated. Each ventilator was evaluated at four PSV levels (5, 10, 15, and 20 cm H2O), with and without positive end-expiratory pressure (5 cm H2O). Trigger function was assessed as the time from effort onset to detectable pressurization. Pressurization capacity was evaluated using the airway pressure-time product computed as the net area under the pressure-time curve over the first 0.3 s after inspiratory effort onset. Expiratory resistance was evaluated by measuring trapped volume in controlled ventilation. Significant differences were found across the ventilators, with a range of triggering delays from 42 to 88 ms for all conditions averaged (P < 0.001). Under difficult conditions, the triggering delay was longer than 100 ms and the pressurization was poor for five ventilators at PSV5 and three at PSV10, suggesting an inability to unload patient's effort. On average, turbine-based ventilators performed better than conventional ventilators, which showed no improvement compared to a bench comparison in 2000. Technical performance of trigger function, pressurization capacity, and expiratory resistance differs considerably across new-generation ICU ventilators. ICU ventilators seem to have reached a technical ceiling in recent years, and some ventilators still perform inadequately.
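The pressurization index used in both records above is the net area under the airway pressure-time curve over the first 0.3 s after inspiratory effort onset, referenced to baseline (PEEP). A trapezoidal sum over sampled pressures is a minimal sketch of that computation; the waveform values below are invented for illustration:

```python
def pressure_time_product(times, pressures, baseline, window=0.3):
    """Net area of (P - baseline) dt from t=0 to t=window, trapezoid rule.
    Returns cm H2O * s; negative early area (the trigger dip below PEEP)
    subtracts from the positive pressurization area, as in the study."""
    ptp = 0.0
    for i in range(1, len(times)):
        if times[i] > window:
            break
        dt = times[i] - times[i - 1]
        ptp += 0.5 * ((pressures[i] - baseline)
                      + (pressures[i - 1] - baseline)) * dt
    return ptp

t = [0.0, 0.1, 0.2, 0.3]      # s, from inspiratory effort onset
p = [5.0, 4.0, 9.0, 12.0]     # cm H2O: brief dip below PEEP, then rise
print(pressure_time_product(t, p, baseline=5.0))
```

A ventilator with a long triggering delay spends more of the 0.3 s window below baseline, shrinking this net area, which is why the index captures both trigger speed and pressurization capacity.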
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate, and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
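The Grid Convergence Index used in this mesh-sensitivity study follows the standard three-grid formulation. The sketch below is a generic illustration with made-up solution values, not data from the rod-bundle simulations; the function name and safety factor are conventional assumptions.

```python
import math

def gci_fine_grid(f_fine, f_med, f_coarse, r, fs=1.25):
    """Grid Convergence Index for three solutions on systematically
    refined grids with constant refinement ratio r.  Returns the GCI
    of the fine grid and the observed order of convergence p."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    rel_err = abs((f_med - f_fine) / f_fine)
    return fs * rel_err / (r ** p - 1.0), p

# Illustrative, monotonically converging quantity of interest
gci, p = gci_fine_grid(0.971, 0.970, 0.968, r=2.0)
# error halves with each refinement, so the observed order is p = 1
```

Note that this estimator assumes the quantity of interest is tied to a fixed location, which is exactly why, as the abstract observes, it breaks down for roaming global extrema such as peak fuel surface temperature.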
Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy
2012-11-01
Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries were found by outgoing and incoming nurses to be understandable (90%), and a majority were found to be accurate (70%) and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible to automatically generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that were intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.
Development and verification of local/global analysis techniques for laminated composites
NASA Technical Reports Server (NTRS)
Griffin, O. Hayden, Jr.
1989-01-01
Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, either from a resource or time standpoint, are often required. There are several approaches which can permit such analyses, including substructuring, use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique to global/local analysis, where a global analysis is run, with the results of that analysis applied to a smaller region as boundary conditions, in as many iterations as is required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement formulation elements which have well known behavior when used for analysis of laminated composites.
Treatment Planning for Accelerator-Based Boron Neutron Capture Therapy
NASA Astrophysics Data System (ADS)
Herrera, María S.; González, Sara J.; Minsky, Daniel M.; Kreiner, Andrés J.
2010-08-01
Glioblastoma multiforme and metastatic melanoma are frequent brain tumors in adults and presently still incurable diseases. Boron Neutron Capture Therapy (BNCT) is a promising alternative for this kind of pathology. Accelerators have been proposed for BNCT as a way to circumvent the problem of siting reactors in hospitals, and for their relative simplicity and lower cost, among other advantages. Considerable effort is going into the development of accelerator-based BNCT neutron sources in Argentina. Epithermal neutron beams will be produced through appropriate proton-induced nuclear reactions and optimized beam shaping assemblies. Using these sources, computational dose distributions were evaluated in a real patient with diagnosed glioblastoma treated with BNCT. The simulated irradiation was delivered in order to optimize dose to the tumors within the normal tissue constraints. Using Monte Carlo radiation transport calculations, dose distributions were generated for brain, skin, and tumor. Also, the dosimetry was studied by computing cumulative dose-volume histograms for volumes of interest. The results suggest acceptable skin average dose and a significant dose delivered to tumor with low average whole brain dose for irradiation times less than 60 minutes, indicating a good performance of an accelerator-based BNCT treatment.
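The cumulative dose-volume histograms mentioned in this abstract have a simple definition: for each threshold dose, the fraction of a structure's volume receiving at least that dose. A minimal sketch, assuming equal-volume voxels and entirely hypothetical dose values:

```python
import numpy as np

def cumulative_dvh(voxel_doses, thresholds):
    """Cumulative dose-volume histogram: the fraction of a structure's
    volume receiving at least each threshold dose (equal-volume voxels)."""
    d = np.asarray(voxel_doses, dtype=float)
    return np.array([(d >= thr).mean() for thr in thresholds])

# Four hypothetical tumor voxels (doses in arbitrary units)
dvh = cumulative_dvh([10.0, 20.0, 30.0, 40.0],
                     thresholds=[0.0, 15.0, 25.0, 35.0, 45.0])
# [1.0, 0.75, 0.5, 0.25, 0.0]: all of the volume receives at least
# zero dose, none of it receives 45 or more
```

In a real treatment-planning workflow the voxel doses would come from the Monte Carlo transport calculation, and separate DVHs would be computed for tumor, brain, and skin.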
Hypersonic Shock/Boundary-Layer Interaction Database
NASA Technical Reports Server (NTRS)
Settles, G. S.; Dodson, L. J.
1991-01-01
Turbulence modeling is generally recognized as the major problem obstructing further advances in computational fluid dynamics (CFD). A closed solution of the governing Navier-Stokes equations for turbulent flows of practical consequence is still far beyond grasp. At the same time, the simplified models of turbulence which are used to achieve closure of the Navier-Stokes equations are known to be rigorously incorrect. While these models serve a definite purpose, they are inadequate for the general prediction of hypersonic viscous/inviscid interactions, mixing problems, chemical nonequilibria, and a range of other phenomena which must be predicted in order to design a hypersonic vehicle computationally. Due to the complexity of turbulence, useful new turbulence models are synthesized only when great expertise is brought to bear and considerable intellectual energy is expended. Although this process is fundamentally theoretical, crucial guidance may be gained from carefully-executed basic experiments. Following the birth of a new model, its testing and validation once again demand comparisons with data of unimpeachable quality. This report concerns these issues which arise from the experimental aspects of hypersonic modeling and represents the results of the first phase of an effort to develop compressible turbulence models.
Treatment Planning for Accelerator-Based Boron Neutron Capture Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrera, Maria S.; Gonzalez, Sara J.; Minsky, Daniel M.
2010-08-04
Glioblastoma multiforme and metastatic melanoma are frequent brain tumors in adults and presently still incurable diseases. Boron Neutron Capture Therapy (BNCT) is a promising alternative for this kind of pathology. Accelerators have been proposed for BNCT as a way to circumvent the problem of siting reactors in hospitals, and for their relative simplicity and lower cost, among other advantages. Considerable effort is going into the development of accelerator-based BNCT neutron sources in Argentina. Epithermal neutron beams will be produced through appropriate proton-induced nuclear reactions and optimized beam shaping assemblies. Using these sources, computational dose distributions were evaluated in a real patient with diagnosed glioblastoma treated with BNCT. The simulated irradiation was delivered in order to optimize dose to the tumors within the normal tissue constraints. Using Monte Carlo radiation transport calculations, dose distributions were generated for brain, skin, and tumor. Also, the dosimetry was studied by computing cumulative dose-volume histograms for volumes of interest. The results suggest acceptable skin average dose and a significant dose delivered to tumor with low average whole brain dose for irradiation times less than 60 minutes, indicating a good performance of an accelerator-based BNCT treatment.
Determination of acoustical transfer functions using an impulse method
NASA Astrophysics Data System (ADS)
MacPherson, J.
1985-02-01
The Transfer Function of a system may be defined as the relationship of the output response to the input of a system. Whilst recent advances in digital processing systems have enabled Impulse Transfer Functions to be determined by computation of the Fast Fourier Transform, there has been little work done in applying these techniques to room acoustics. Acoustical Transfer Functions have been determined for auditoria, using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The Impulse Transfer Function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear frequency scale plots and also as synthesized one-third octave band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required for the analysis is readily available commercially.
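The computation described above, dividing the FFT at the receiver position by the FFT of the source, can be sketched directly. This is an illustrative reconstruction, not the author's analysis chain; the regularization term is an assumption added to make the spectral division numerically safe.

```python
import numpy as np

def impulse_transfer_function(source, receiver):
    """ITF = FFT(receiver) / FFT(source).  A tiny regularization
    term guards against near-zero bins in the source spectrum."""
    S = np.fft.rfft(source)
    R = np.fft.rfft(receiver)
    eps = 1e-12 * np.max(np.abs(S))
    return R / (S + eps)

# Toy check: a 'room' that only delays the impulse by 10 samples
# and halves its amplitude should give |ITF| ~ 0.5 at every frequency.
n = 256
src = np.zeros(n); src[0] = 1.0
rec = np.zeros(n); rec[10] = 0.5
H = impulse_transfer_function(src, rec)
```

Because the source spectrum is divided out, the result is insensitive to the exact shape of the impulsive source, which is the property the abstract highlights as making the choice of source non-critical.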
Mulugeta, Lealem; Drach, Andrew; Erdemir, Ahmet; Hunt, C. A.; Horner, Marc; Ku, Joy P.; Myers Jr., Jerry G.; Vadigepalli, Rajanikanth; Lytton, William W.
2018-01-01
Modeling and simulation in computational neuroscience is currently a research enterprise to better understand neural systems. It is not yet directly applicable to the problems of patients with brain disease. To be used for clinical applications, there must not only be considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in the area of Simulation-Based Medical Education, an area which to date has been primarily reliant on physical models (mannequins) and scenario-based simulations rather than on numerical simulations. PMID:29713272
Mulugeta, Lealem; Drach, Andrew; Erdemir, Ahmet; Hunt, C A; Horner, Marc; Ku, Joy P; Myers, Jerry G; Vadigepalli, Rajanikanth; Lytton, William W
2018-01-01
Modeling and simulation in computational neuroscience is currently a research enterprise to better understand neural systems. It is not yet directly applicable to the problems of patients with brain disease. To be used for clinical applications, there must not only be considerable progress in the field but also a concerted effort to use best practices in order to demonstrate model credibility to regulatory bodies, to clinics and hospitals, to doctors, and to patients. In doing this for neuroscience, we can learn lessons from long-standing practices in other areas of simulation (aircraft, computer chips), from software engineering, and from other biomedical disciplines. In this manuscript, we introduce some basic concepts that will be important in the development of credible clinical neuroscience models: reproducibility and replicability; verification and validation; model configuration; and procedures and processes for credible mechanistic multiscale modeling. We also discuss how garnering strong community involvement can promote model credibility. Finally, in addition to direct usage with patients, we note the potential for simulation usage in the area of Simulation-Based Medical Education, an area which to date has been primarily reliant on physical models (mannequins) and scenario-based simulations rather than on numerical simulations.
NASA Astrophysics Data System (ADS)
Bates, David; Pettitt, B. Montgomery; Buck, Gregory R.; Zechiedrich, Lynn
2016-09-01
In the Vologodskii review [19], the accompanying comments, and many other publications, there has been considerable effort to analyze the actions of type II topoisomerases, especially with regard to 'topological simplification' [4]. Whereas these efforts could be characterized as a battle of the models, with each research team arguing for their version of how it might work, each specific kinetic concept adds important considerations to the fundamental question of how these enzymes function. The basic tenet, however, of what is called the 'hooked juxtaposition model' [1], is not a modeling aspect, but is simply a geometric mathematical fact.
NASA Astrophysics Data System (ADS)
Mel, Riccardo; Viero, Daniele Pietro; Carniello, Luca; Defina, Andrea; D'Alpaos, Luigi
2014-09-01
Providing reliable and accurate storm surge forecasts is important for a wide range of problems related to coastal environments. In order to adequately support decision-making processes, it has also become increasingly important to be able to estimate the uncertainty associated with the storm surge forecast. The procedure commonly adopted to do this uses the results of a hydrodynamic model forced by a set of different meteorological forecasts; however, this approach entails a considerable, if not prohibitive, computational cost for real-time application. In the present paper we present two simplified methods for estimating the uncertainty affecting storm surge prediction with moderate computational effort. In the first approach we use a computationally fast, statistical tidal model instead of a hydrodynamic numerical model to estimate storm surge uncertainty. The second approach is based on the observation that the uncertainty in the sea level forecast mainly stems from the uncertainty affecting the meteorological fields; this has led to the idea of estimating forecast uncertainty via a linear combination of suitable meteorological variances, directly extracted from the meteorological fields. The proposed methods were applied to estimate the uncertainty in the storm surge forecast in the Venice Lagoon. The results clearly show that the uncertainty estimated through a linear combination of suitable meteorological variances closely matches the one obtained using the deterministic approach and overcomes some intrinsic limitations of the use of a statistical tidal model.
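The second approach above reduces to a weighted sum of meteorological variances. A minimal sketch of that idea follows; the field choices, variance values, and weights are all illustrative assumptions, since the actual coefficients would be fitted against reference forecasts.

```python
import numpy as np

def surge_uncertainty(met_variances, weights):
    """Forecast standard deviation estimated as a weighted linear
    combination of meteorological variances.  The weights would be
    fitted offline against ensemble or deterministic references."""
    return float(np.sqrt(np.dot(weights, met_variances)))

# Hypothetical variances for two meteorological fields (illustrative)
variances = [0.04, 0.09]
weights = [0.6, 0.4]    # assumed regression coefficients
sigma = surge_uncertainty(variances, weights)
```

The appeal of this formulation is cost: once the weights are fixed, evaluating the uncertainty is a dot product, with no hydrodynamic ensemble run required at forecast time.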
Design Considerations of a Virtual Laboratory for Advanced X-ray Sources
NASA Astrophysics Data System (ADS)
Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.
2004-11-01
The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area to further push the state-of-the-art in high fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework where multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the various physics algorithms' fidelity will be presented.
Data inversion algorithm development for the halogen occultation experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurements requires not only the ability to do ideal radiometric calculations, but also a detailed understanding of instrument characteristics. Therefore, a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time constants, and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be both highly complex and fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
Indicators for the automated analysis of drug prescribing quality.
Coste, J; Séné, B; Milstein, C; Bouée, S; Venot, A
1998-01-01
Irrational and inconsistent drug prescription has considerable impact on morbidity, mortality, health service utilization, and community burden. However, few studies have addressed the methodology of processing the information contained in drug orders in order to study the quality of drug prescriptions and prescriber behavior. We present a comprehensive set of quantitative indicators for the quality of drug prescriptions which can be derived from a drug order. These indicators were constructed using explicit a priori criteria which were previously validated on the basis of scientific data. Automatic computation is straightforward, using a relational database system, such that large sets of prescriptions can be processed with minimal human effort. We illustrate the feasibility and value of this approach by using a large set of 23,000 prescriptions for several diseases, selected from a nationally representative prescriptions database. Our study may find direct and wide applications in the epidemiology of medical practice and in quality control procedures.
Basic and applied research related to the technology of space energy conversion systems
NASA Technical Reports Server (NTRS)
Hertzberg, A.; Mattick, A. T.; Bruckner, A. P.
1988-01-01
The first six months' research effort on the Liquid Droplet Radiator (LDR) focused on experimental and theoretical studies of radiation by an LDR droplet cloud. Improvements in the diagnostics for the radiation facility have been made which have permitted an accurate experimental test of theoretical predictions of LDR radiation over a wide range of optical depths, using a cloud of Dow silicone oil droplets. In conjunction with these measurements, an analysis was made of the evolution of the cylindrical droplet cloud generated by a 2300-hole orifice plate. This analysis indicates that a considerable degree of agglomeration of droplets occurs over the first meter of travel. Theoretical studies have centered on the development of an efficient means of computing the angular scattering distribution from droplets in an LDR droplet cloud, so that a parameter study can be carried out for LDR radiative performance versus fluid optical properties and cloud geometry.
CEMCAN Software Enhanced for Predicting the Properties of Woven Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.
2000-01-01
Major advancements are needed in current high-temperature materials to meet the requirements of future space and aeropropulsion structural components. Ceramic matrix composites (CMC's) are one class of materials that are being evaluated as candidate materials for many high-temperature applications. Past efforts to improve the performance of CMC's focused primarily on improving the properties of the fiber, interfacial coatings, and matrix constituents as individual phases. Design and analysis tools must take into consideration the complex geometries, microstructures, and fabrication processes involved in these composites and must allow the composite properties to be tailored for optimum performance. Major accomplishments during the past year include the development and inclusion of woven CMC micromechanics methodology into the CEMCAN (Ceramic Matrix Composites Analyzer) computer code. The code enables one to calibrate a consistent set of constituent properties as a function of temperature with the aid of experimentally measured data.
Thermal quantum time-correlation functions from classical-like dynamics
NASA Astrophysics Data System (ADS)
Hele, Timothy J. H.
2017-07-01
Thermal quantum time-correlation functions are of fundamental importance in quantum dynamics, allowing experimentally measurable properties such as reaction rates, diffusion constants and vibrational spectra to be computed from first principles. Since the exact quantum solution scales exponentially with system size, there has been considerable effort in formulating reliable linear-scaling methods involving exact quantum statistics and approximate quantum dynamics modelled with classical-like trajectories. Here, we review recent progress in the field with the development of methods including centroid molecular dynamics, ring polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). We show how these methods have recently been obtained from 'Matsubara dynamics', a form of semiclassical dynamics which conserves the quantum Boltzmann distribution. We also apply the Matsubara formalism to reaction rate theory, rederiving t → 0+ quantum transition-state theory (QTST) and showing that Matsubara-TST, like RPMD-TST, is equivalent to QTST. We end by surveying areas for future progress.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.
Rasti, Behnam; Heravi, Yeganeh Entezari
2018-06-01
Isoform diversity, critical physiological roles, and involvement in major diseases/disorders such as glaucoma, epilepsy, Alzheimer's disease, obesity, and cancers have made carbonic anhydrase (CA) one of the most interesting case studies in the field of computer aided drug design. Since applying non-selective inhibitors can result in major side effects, there have been considerable efforts so far to achieve selective inhibitors for different isoforms of CA. Using a proteochemometrics approach, the chemical interaction space governed by a group of 4-amino-substituted benzenesulfonamides and human CAs has been explored in the present study. Several validation methods have been utilized to assess the validity, robustness, and predictive power of the proposed proteochemometric model. Our model has offered major structural information that can be applied to design new selective inhibitors for distinct isoforms of CA. To prove the applicability of the proposed model, new compounds have been designed based on the offered discriminative structural features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sato, M.; Kamide, Y.; Richmond, A.D.
A new technique is presented to estimate electric fields and currents in a localized region of the high-latitude ionosphere by combining two magnetogram-inversion algorithms. This paper describes the concept and practical procedures of the method, as well as the first results of our efforts in which this new scheme is applied to northern Scandinavia, computing the ionospheric parameters on a small scale. Examining latitudinal profiles of these parameters and precipitating particles, it is found that the region of the most intense precipitation in the morning sector is located equatorward of the region of the strongest electric field. To evaluate the relative importance of ionospheric and magnetospheric effects, the field-aligned current is divided into two components: (del Sigma) dot E and Sigma del dot E. These two components often give opposite directions in the resultant field-aligned currents. The relative strength of the two components appears to vary considerably with latitude.
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended; as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality; techniques for using AC in different processing units (e.g., CPU, GPU, and FPGA), processor components, memory technologies, etc.; and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
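One widely cited AC transform covered by surveys of this kind is loop perforation: skipping a fraction of loop iterations to trade accuracy for effort. A minimal sketch (illustrative only, with assumed names and an assumed skip factor):

```python
def perforated_mean(values, skip=2):
    """Loop perforation, a classic approximate-computing transform:
    visit only every `skip`-th element, trading accuracy for effort."""
    sampled = values[::skip]
    return sum(sampled) / len(sampled)

data = list(range(1000))
exact = sum(data) / len(data)      # 499.5
approx = perforated_mean(data)     # 499.0 -- half the work, ~0.1% error
```

Output-quality monitoring, another theme of such surveys, would compare `approx` against a tolerance and fall back to the exact loop when the error bound is exceeded.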
MODELING THE AMBIENT CONDITION EFFECTS OF AN AIR-COOLED NATURAL CIRCULATION SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Lisowski, Darius D.; Bucknor, Matthew
The Reactor Cavity Cooling System (RCCS) is a passive safety concept under consideration for the overall safety strategy of advanced reactors such as the High Temperature Gas-Cooled Reactor (HTGR). One such variant, the air-cooled RCCS, uses natural convection to drive the flow of air from outside the reactor building to remove decay heat during normal operation and accident scenarios. The Natural convection Shutdown heat removal Test Facility (NSTF) at Argonne National Laboratory (“Argonne”) is a half-scale model of the primary features of one conceptual air-cooled RCCS design. The facility was constructed to carry out highly instrumented experiments to study the performance of the RCCS concept for reactor decay heat removal that relies on natural convection cooling. Parallel modeling and simulation efforts were performed to support the design, operation, and analysis of the natural convection system. Throughout the testing program, strong influences of ambient conditions were observed in the experimental data when baseline tests were repeated under the same test procedures. Thus, significant analysis efforts were devoted to gaining a better understanding of these influences and the subsequent response of the NSTF to ambient conditions. It was determined that air humidity had negligible impacts on NSTF system performance and therefore did not warrant consideration in the models. However, temperature differences between the building exterior and interior air, along with the outside wind speed, were shown to be dominant factors. Combining the stack and wind effects together, an empirical model was developed based on theoretical considerations and using experimental data to correlate zero-power system flow rates with ambient meteorological conditions. Some coefficients in the model were obtained by best fitting the experimental data. The predictive capability of the empirical model was demonstrated by applying it to a new set of experimental data.
The empirical model was also implemented in the computational models of the NSTF using both the RELAP5-3D and STAR-CCM+ codes. Accounting for the effects of ambient conditions, simulations from both codes predicted the natural circulation flow rates very well.
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2012-07-01 2012-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2013-07-01 2013-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2011-07-01 2011-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2014-07-01 2014-07-01 false How does the Secretary compute maintenance of effort in...
34 CFR 461.45 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Education (Continued) OFFICE OF VOCATIONAL AND ADULT EDUCATION, DEPARTMENT OF EDUCATION ADULT EDUCATION... awarded for the year after the year of the waiver by comparing the amount spent for adult education from... 34 Education 3 2010-07-01 2010-07-01 false How does the Secretary compute maintenance of effort in...
NASA Astrophysics Data System (ADS)
Chonacky, Norman; Winch, David
2008-04-01
There is substantial evidence of a need to make computation an integral part of the undergraduate physics curriculum. This need is consistent with data from surveys in both the academy and the workplace, and has been reinforced by two years of exploratory efforts by a group of physics faculty for whom computation is a special interest. We have examined past and current efforts at reform and a variety of strategic, organizational, and institutional issues involved in any attempt to broadly transform existing practice. We propose a set of guidelines for development based on this past work and discuss our vision of computationally integrated physics.
Micro-video display with ocular tracking and interactive voice control
NASA Technical Reports Server (NTRS)
Miller, James E.
1993-01-01
In certain space-restricted environments, many of the benefits resulting from computer technology have been foregone because of the size, weight, inconvenience, and lack of mobility associated with existing computer interface devices. Accordingly, an effort to develop a highly miniaturized and 'wearable' computer display and control interface device, referred to as the Sensory Integrated Data Interface (SIDI), is underway. The system incorporates a micro-video display that provides data display and ocular tracking on a lightweight headset. Software commands are implemented by conjunctive eye movement and voice commands of the operator. In this initial prototyping effort, various 'off-the-shelf' components have been integrated with a desktop computer and a customized menu-tree software application to demonstrate feasibility and conceptual capabilities. When fully developed as a customized system, the interface device will allow mobile, 'hands-free' operation of portable computer equipment. It will thus allow integration of information technology applications into those restrictive environments, both military and industrial, that have not yet taken advantage of the computer revolution. This effort is Phase 1 of Small Business Innovative Research (SBIR) Topic number N90-331 sponsored by the Naval Undersea Warfare Center Division, Newport. The prime contractor is Foster-Miller, Inc. of Waltham, MA.
Measuring and Modeling Change in Examinee Effort on Low-Stakes Tests across Testing Occasions
ERIC Educational Resources Information Center
Sessoms, John; Finney, Sara J.
2015-01-01
Because schools worldwide use low-stakes tests to make important decisions, value-added indices computed from test scores must accurately reflect student learning, which requires equal test-taking effort across testing occasions. Evaluating change in effort assumes effort is measured equivalently across occasions. We evaluated the longitudinal…
NASA Technical Reports Server (NTRS)
Vickers, John
2015-01-01
The Materials Genome Initiative (MGI) project element is a cross-Center effort that is focused on the integration of computational tools to simulate manufacturing processes and materials behavior. These computational simulations will be utilized to gain understanding of processes and materials behavior to accelerate process development and certification, to more efficiently integrate new materials in existing NASA projects, and to lead to the design of new materials for improved performance. This NASA effort looks to collaborate with efforts at other government agencies and universities working under the national MGI. MGI plans to develop integrated computational/experimental/processing methodologies for accelerating discovery and insertion of materials to satisfy NASA's unique mission demands. The challenges include validated design tools that incorporate materials properties, processes, and design requirements; and materials process control to rapidly mature emerging manufacturing methods and develop certified manufacturing processes.
Seventh-Grade Social Studies versus Social Meliorism
ERIC Educational Resources Information Center
Greiner, Jeff A.
2016-01-01
The Wake County Public School System (WCPSS), in the state of North Carolina, has gone through considerable recent effort to revise, support, and assess their seventh-grade social studies curriculum in an effort to serve three goals: comply with the Common Core State Standards (Common Core), comply with the North Carolina Essential Standards…
Systems Models and Programs for Higher Education. A Catalogue.
ERIC Educational Resources Information Center
Shoemaker, William A.
In recent years there has been considerable effort devoted to the development of systems models and programs that would assist college and university administrators in obtaining and analyzing data about internal operations. Such management data presumably would be helpful in decisionmaking. In this document an effort has been made to provide a…
The potential influence of rain on airfoil performance
NASA Technical Reports Server (NTRS)
Dunham, R. Earl, Jr.
1987-01-01
The potential influence of heavy rain on airfoil performance is discussed. Experimental methods for evaluating rain effects are reviewed. Important scaling considerations for extrapolating model data are presented. It is shown that considerable additional effort, both analytical and experimental, is necessary to understand the degree of hazard associated with flight operations in rain.
ERIC Educational Resources Information Center
Höhne, Jan Karem; Schlosser, Stephan; Krebs, Dagmar
2017-01-01
Measuring attitudes and opinions employing agree/disagree (A/D) questions is a common method in social research because it appears to be possible to measure different constructs with identical response scales. However, theoretical considerations suggest that A/D questions require a considerable cognitive processing. Item-specific (IS) questions,…
Stochastic evolutionary dynamics in minimum-effort coordination games
NASA Astrophysics Data System (ADS)
Li, Kun; Cong, Rui; Wang, Long
2016-08-01
The minimum-effort coordination game has recently drawn increased attention because human behavior in this social dilemma is often inconsistent with the predictions of classical game theory. Here, we combine evolutionary game theory and coalescence theory to investigate this game in finite populations. Both analytic results and individual-based simulations show that effort costs play a key role in the evolution of contribution levels, in good agreement with experimental observations. Besides well-mixed populations, set-structured populations have also been taken into consideration. There we find that a large number of sets and a moderate migration rate greatly promote effort levels, especially for high effort costs.
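For readers unfamiliar with the game, a minimal payoff function in the Van Huyck style illustrates why effort cost matters (the parameter values below are illustrative, not taken from the paper): each player earns a benefit from the group's minimum effort but privately pays for their own effort.

```python
def min_effort_payoff(efforts, i, a=2.0, c=1.0, b=1.0):
    """Payoff to player i in a minimum-effort coordination game:
    benefit a times the group's minimum effort, minus private cost
    c times player i's own effort, plus a constant b (a > c > 0).
    Parameter values are illustrative placeholders."""
    return a * min(efforts) - c * efforts[i] + b

# Coordinating on high effort pays; unilateral high effort is costly:
coordinated = min_effort_payoff([7, 7, 7], i=0)  # 2*7 - 1*7 + 1 = 8.0
mismatched = min_effort_payoff([7, 1, 1], i=0)   # 2*1 - 1*7 + 1 = -4.0
```

The tension the paper studies is visible even in this toy form: raising one's own effort is individually costly unless everyone else matches it.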
Peel, Sean; Bhatia, Satyajeet; Eggbeer, Dominic; Morris, Daniel S; Hayhurst, Caroline
2017-06-01
Previously published evidence has established major clinical benefits from using computer-aided design, computer-aided manufacturing, and additive manufacturing to produce patient-specific devices. These include cutting guides, drilling guides, positioning guides, and implants. However, custom devices produced using these methods are still not in routine use, particularly by the UK National Health Service. Oft-cited reasons for this slow uptake include the following: a higher up-front cost than conventionally fabricated devices, material-choice uncertainty, and a lack of long-term follow-up due to their relatively recent introduction. This article identifies a further gap in current knowledge - that of design rules, or key specification considerations for complex computer-aided design/computer-aided manufacturing/additive manufacturing devices. This research begins to address the gap by combining a detailed review of the literature with first-hand experience of interdisciplinary collaboration on five craniofacial patient case studies. In each patient case, bony lesions in the orbito-temporal region were segmented, excised, and reconstructed in the virtual environment. Three cases translated these digital plans into theatre via polymer surgical guides. Four cases utilised additive manufacturing to fabricate titanium implants. One implant was machined from polyether ether ketone. From the literature, articles with relevant abstracts were analysed to extract design considerations. In all, 19 frequently recurring design considerations were extracted from previous publications. Nine new design considerations were extracted from the case studies - on the basis of subjective clinical evaluation. These were synthesised to produce a design considerations framework to assist clinicians with prescribing and design engineers with modelling. Promising avenues for further research are proposed.
Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.
Theory and algorithms to compute Helfrich bending forces: a review.
Guckenberger, Achim; Gekle, Stephan
2017-05-24
Cell membranes are vital to shield a cell's interior from the environment. At the same time they determine to a large extent the cell's mechanical resistance to external forces. In recent years there has been considerable interest in the accurate computational modeling of such membranes, driven mainly by the amazing variety of shapes that red blood cells and model systems such as vesicles can assume in external flows. Given that the typical height of a membrane is only a few nanometers while the surface of the cell extends over many micrometers, physical modeling approaches mostly consider the interface as a two-dimensional elastic continuum. Here we review recent modeling efforts focusing on one of the computationally most intricate components, namely the membrane's bending resistance. We start with a short background on the most widely used bending model due to Helfrich. While the Helfrich bending energy by itself is an extremely simple model equation, the computation of the resulting forces is far from trivial. At the heart of these difficulties lies the fact that the forces involve second order derivatives of the local surface curvature which by itself is the second derivative of the membrane geometry. We systematically derive and compare the different routes to obtain bending forces from the Helfrich energy, namely the variational approach and the thin-shell theory. While both routes lead to mathematically identical expressions, so-called linear bending models are shown to reproduce only the leading order term while higher orders differ. The main part of the review contains a description of various computational strategies which we classify into three categories: the force, the strong and the weak formulation. We finally give some examples for the application of these strategies in actual simulations.
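For reference, the Helfrich bending energy reviewed above is commonly written as follows (with bending modulus \(\kappa\), mean curvature \(H\), spontaneous curvature \(c_0\), Gaussian bending modulus \(\kappa_K\), and Gaussian curvature \(K\)):

```latex
E = \frac{\kappa}{2} \oint_{S} \left( 2H - c_0 \right)^2 \, \mathrm{d}A
  + \kappa_K \oint_{S} K \, \mathrm{d}A
```

The forces discussed in the review follow from the variation of this energy with respect to the membrane shape, which is where the fourth-order derivatives of the geometry enter.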
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Draxl, Caroline; Hopson, Thomas
Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
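The analog-ensemble idea compared above can be sketched in a few lines. This is a deliberately simplified scalar version (operational implementations match multivariate predictors over weighted time windows): for a coarse-model value at a target time, find the k most similar past model states and use their paired observations as the ensemble.

```python
def analog_ensemble(query, history, k=3):
    """Return the observations paired with the k past model values
    closest to `query`. `history` is a list of (model_value,
    observed_value) pairs. Toy scalar version of the analog-ensemble
    method; not the configuration used in the paper."""
    ranked = sorted(history, key=lambda pair: abs(pair[0] - query))
    return [obs for _, obs in ranked[:k]]

# Illustrative history of (coarse model value, observation) pairs:
hist = [(5.0, 5.4), (8.0, 7.1), (5.2, 5.0), (9.5, 9.9), (4.8, 5.6)]
ensemble = analog_ensemble(5.1, hist, k=3)  # obs from the 3 nearest analogs
```

The spread of the returned observations is what supplies the probabilistic information, at negligible cost compared with rerunning a fine-resolution model.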
State-of-the-art review of computational fluid dynamics modeling for fluid-solids systems
NASA Astrophysics Data System (ADS)
Lyczkowski, R. W.; Bouillard, J. X.; Ding, J.; Chang, S. L.; Burge, S. W.
1994-05-01
As a result of 15 years of research (50 staff-years of effort), Argonne National Laboratory (ANL), through its involvement in fluidized-bed combustion, magnetohydrodynamics, and a variety of environmental programs, has produced extensive computational fluid dynamics (CFD) software and models to predict the multiphase hydrodynamic and reactive behavior of fluid-solids motions and interactions in complex fluidized-bed reactors (FBRs) and slurry systems. This has resulted in the FLUFIX, IRF, and SLUFIX computer programs. These programs are based on fluid-solids hydrodynamic models and can predict information important to the designer of atmospheric or pressurized bubbling and circulating FBR, fluid catalytic cracking (FCC), and slurry units to guarantee optimum efficiency with minimum release of pollutants into the environment. This latter issue will become of paramount importance with the enactment of the Clean Air Act Amendment (CAAA) of 1995. Solids motion is also the key to understanding erosion processes. Erosion rates in FBRs and pneumatic and slurry components are computed by ANL's EROSION code to predict the potential metal wastage of FBR walls, internals, feed distributors, and cyclones. Because of length limitations, only the FLUFIX and IRF codes are reviewed in this paper, together with highlights of the validations. It is envisioned that one day these codes, with user-friendly pre- and post-processor software and tailored for massively parallel, shared-memory multiprocessor computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale, and biomass as energy sources; to retain energy security; and to remediate waste and ecological problems.
Summary of Pressure Gain Combustion Research at NASA
NASA Technical Reports Server (NTRS)
Perkins, H. Douglas; Paxson, Daniel E.
2018-01-01
NASA has undertaken a systematic exploration of many different facets of pressure gain combustion over the last 25 years in an effort to exploit the inherent thermodynamic advantage of pressure gain combustion over the constant pressure combustion process used in most aerospace propulsion systems. Applications as varied as small-scale UAVs, rotorcraft, subsonic transports, hypersonics and launch vehicles have been considered. In addition to studying pressure gain combustor concepts such as wave rotors, pulse detonation engines, pulsejets, and rotating detonation engines, NASA has studied inlets, nozzles, ejectors and turbines which must also process unsteady flow in an integrated propulsion system. Other design considerations such as acoustic signature, combustor material life and heat transfer that are unique to pressure gain combustors have also been addressed in NASA research projects. In addition to a wide range of experimental studies, a number of computer codes, from 0-D up through 3-D, have been developed or modified to specifically address the analysis of unsteady flow fields. Loss models have also been developed and incorporated into these codes that improve the accuracy of performance predictions and decrease computational time. These codes have been validated numerous times across a broad range of operating conditions, and it has been found that once validated for one particular pressure gain combustion configuration, these codes are readily adaptable to the others. All in all, the documentation of this work has encompassed approximately 170 NASA technical reports, conference papers and journal articles to date. These publications are very briefly summarized herein, providing a single point of reference for all of NASA's pressure gain combustion research efforts.
This documentation does not include the significant contributions made by NASA research staff to the programs of other agencies, universities, industrial partners and professional society committees through serving as technical advisors, technical reviewers and research consultants.
Modelling soil erosion at European scale: towards harmonization and reproducibility
NASA Astrophysics Data System (ADS)
Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.
2015-02-01
Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is here proposed. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
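The RUSLE structure underlying the extended model is a simple product of factors; a minimal sketch follows (the factor values below are illustrative, not the pan-European inputs used in the study):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A [t ha^-1 yr^-1] per the (R)USLE:
    R  rainfall erosivity [MJ mm ha^-1 h^-1 yr^-1]
    K  soil erodibility [t ha h ha^-1 MJ^-1 mm^-1]
    LS slope length and steepness factor [-]
    C  cover-management factor [-]
    P  support practice factor [-]"""
    return R * K * LS * C * P

# Illustrative values for a moderately erodible cultivated slope:
A = rusle_soil_loss(R=700.0, K=0.03, LS=1.2, C=0.2, P=1.0)
```

The extension described in the abstract replaces the single R factor with an ensemble of empirical rainfall-erosivity equations and adds a stoniness correction, but the multiplicative skeleton stays the same.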
New inverse synthetic aperture radar algorithm for translational motion compensation
NASA Astrophysics Data System (ADS)
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
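The traditional entropy measure mentioned above treats the normalized image intensity as a probability distribution: a well-focused image concentrates energy in few pixels and so has low entropy. A minimal sketch follows (the paper's burst-derivative measure is not reproduced here):

```python
import math

def image_entropy(pixels):
    """Shannon entropy of the normalized intensity (|amplitude|^2)
    distribution of an image, given as a flat list of pixel
    amplitudes. Lower entropy indicates a more concentrated,
    better-focused image."""
    power = [abs(x) ** 2 for x in pixels]
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log(p) for p in probs)

# A single bright point is perfectly focused (entropy 0);
# a uniform image is maximally unfocused (entropy log N):
point = [0.0] * 63 + [1.0]
flat = [1.0] * 64
```

Motion-compensation parameters can be searched to minimize such a focus metric; the burst derivative proposed in the paper aims at the same goal with less computation per trial.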
Overview of Materials Qualification Needs for Metal Additive Manufacturing
NASA Astrophysics Data System (ADS)
Seifi, Mohsen; Salem, Ayman; Beuth, Jack; Harrysson, Ola; Lewandowski, John J.
2016-03-01
This overview highlights some of the key aspects regarding materials qualification needs across the additive manufacturing (AM) spectrum. AM technology has experienced considerable publicity and growth in the past few years with many successful insertions for non-mission-critical applications. However, to meet the full potential that AM has to offer, especially for flight-critical components (e.g., rotating parts, fracture-critical parts, etc.), qualification and certification efforts are necessary. While development of qualification standards will address some of these needs, this overview outlines some of the other key areas that will need to be considered in the qualification path, including various process-, microstructure-, and fracture-modeling activities in addition to integrating these with lifing activities targeting specific components. Ongoing work in the Advanced Manufacturing and Mechanical Reliability Center at Case Western Reserve University is focusing on fracture and fatigue testing to rapidly assess critical mechanical properties of some titanium alloys before and after post-processing, in addition to conducting nondestructive testing/evaluation using micro-computed tomography at General Electric. Process mapping studies are being conducted at Carnegie Mellon University while large area microstructure characterization and informatics (EBSD and BSE) analyses are being conducted at Materials Resources LLC to enable future integration of these efforts via an Integrated Computational Materials Engineering approach to AM. Possible future pathways for materials qualification are provided.
An ecohydrological model of malaria outbreaks
NASA Astrophysics Data System (ADS)
Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.
2012-08-01
Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission driven by climatic time series. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear ecohydrological model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.
AOPs and Biomarkers: Bridging High Throughput Screening ...
As high throughput screening (HTS) plays a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models designed to quantify potential adverse effects based on HTS data will benefit from additional data sources that connect the magnitude of perturbation from the in vitro system to a level of concern at the organism or population level. The adverse outcome pathway (AOP) concept provides an ideal framework for combining these complementary data. Recent international efforts under the auspices of the Organization for Economic Co-operation and Development (OECD) have resulted in an AOP wiki designed to house formal descriptions of AOPs suitable for use in regulatory decision making. Recent efforts have built upon this to include an ontology describing the AOP with linkages to biological pathways, physiological terminology, and taxonomic applicability domains. Incorporation of an AOP network tool developed by the U.S. Army Corps of Engineers also allows consideration of cumulative risk from chemical and non-chemical stressors. Biomarkers are an important complement to formal AOP descriptions, particularly when dealing with susceptible subpopulations or lifestages in human health risk assessment. To address the issue of nonchemical stressors that may modify effects of criteria air pollutants, a novel method was used to integrate blood gene expression data with hema
Computer-assisted innovations in craniofacial surgery.
Rudman, Kelli; Hoekzema, Craig; Rhee, John
2011-08-01
Reconstructive surgery for complex craniofacial defects challenges even the most experienced surgeons. Preoperative reconstructive planning requires consideration of both functional and aesthetic properties of the mandible, orbit, and midface. Technological innovations allow for computer-assisted preoperative planning, computer-aided manufacturing of patient-specific implants (PSIs), and computer-assisted intraoperative navigation. Although many case reports discuss computer-assisted preoperative planning and creation of custom implants, a general overview of computer-assisted innovations is not readily available. This article reviews innovations in computer-assisted reconstructive surgery including anatomic considerations when using PSIs, technologies available for preoperative planning, work flow and process of obtaining a PSI, and implant materials available for PSIs. A case example follows illustrating the use of this technology in the reconstruction of an orbital-frontal-temporal defect with a PSI. Computer-assisted reconstruction of complex craniofacial defects provides the reconstructive surgeon with innovative options for challenging reconstructive cases. As technology advances, applications of computer-assisted reconstruction will continue to expand. © Thieme Medical Publishers.
34 CFR 403.185 - How does the Secretary compute maintenance of effort in the event of a waiver?
Code of Federal Regulations, 2010 CFR
2010-07-01
... VOCATIONAL AND APPLIED TECHNOLOGY EDUCATION PROGRAM What Financial Conditions Must Be Met by a State? § 403... 34 Education 3 2010-07-01 2010-07-01 false How does the Secretary compute maintenance of effort in the event of a waiver? 403.185 Section 403.185 Education Regulations of the Offices of the Department...
Computational Fluid Dynamics Technology for Hypersonic Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2003-01-01
Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.
ERIC Educational Resources Information Center
Kolata, Gina
1984-01-01
Examines social influences which discourage women from pursuing studies in computer science, including monopoly of computer time by boys at the high school level, sexual harassment in college, movies, and computer games. Describes some initial efforts to encourage females of all ages to study computer science. (JM)
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real-time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real-time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (these constitute the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
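The required-speed estimate described above is, at its core, simple arithmetic: equivalent operations per second equal the operation count of the scaled equations per integration frame times the frame rate. A sketch with purely illustrative numbers (not taken from the report):

```python
def required_ops_per_second(ops_per_frame, frame_rate_hz):
    """Back-of-envelope speed requirement for real-time digital
    simulation: operations evaluated each integration frame times
    the number of frames per second."""
    return ops_per_frame * frame_rate_hz

# e.g. 50,000 equivalent operations per frame at a 500 Hz frame rate:
ops = required_ops_per_second(50_000, 500)  # 25,000,000 ops/s
```

Rotor-spin dynamics force high frame rates, which is why this product exceeded what affordable general-purpose machines of the era could deliver.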
Combining Computational and Social Effort for Collaborative Problem Solving
Wagy, Mark D.; Bongard, Josh C.
2015-01-01
Rather than replacing human labor, there is growing evidence that networked computers create opportunities for collaborations of people and algorithms to solve problems beyond either of them. In this study, we demonstrate the conditions under which such synergy can arise. We show that, for a design task, three elements are sufficient: humans apply intuitions to the problem, algorithms automatically determine and report back on the quality of designs, and humans observe and innovate on others’ designs to focus creative and computational effort on good designs. This study suggests how such collaborations should be composed for other domains, as well as how social and computational dynamics mutually influence one another during collaborative problem solving. PMID:26544199
NASA Astrophysics Data System (ADS)
Lele, Sanjiva K.
2002-08-01
Funds were received in April 2001 under the Department of Defense DURIP program for construction of a 48 processor high performance computing cluster. This report details the hardware which was purchased and how it has been used to enable and enhance research activities directly supported by, and of interest to, the Air Force Office of Scientific Research and the Department of Defense. The report is divided into two major sections. The first section after this summary describes the computer cluster, its setup, and some cluster performance benchmark results. The second section explains ongoing research efforts which have benefited from the cluster hardware, and presents highlights of those efforts since installation of the cluster.
ERIC Educational Resources Information Center
Metcalf, Heather E.
2011-01-01
Considerable research, policy, and programmatic efforts have been dedicated to addressing the participation of particular populations in STEM for decades. Each of these efforts claims equity-related goals; yet, they heavily frame the problem, through pervasive STEM pipeline model discourse, in terms of national needs, workforce supply, and…
USDA-ARS?s Scientific Manuscript database
In the last several decades, there has been considerable effort to protect and restore wetlands throughout the USA. These efforts have required significant investment of both private and public funds. Accordingly, it has become important to document the effectiveness of this protection and restora...
Organizational Development: The Role of Communication in Diagnosis, Change, and Evaluation.
ERIC Educational Resources Information Center
Hain, Tony; Tubbs, Stewart L.
Three key considerations (What is organizational development (OD)? Why do organizations undertake OD efforts? What are the critical phases (and their pitfalls) that make up the OD effort?) are discussed in this paper. The sections include: "What is OD," which presents three definitions of OD and the goals of OD as identified by Bennis; "Why Do…
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Health, Education, and Human Services Div.
Currently, the extent of palliative care instruction varies considerably across and within the three major phases of the physician education and training process. This analysis of current educational efforts in palliative care is based on information obtained from a survey conducted of all United States medical schools, surveys conducted on United…
contributes to the research efforts for commercial buildings. This effort is dedicated to studying the commercial sector, whole-building energy simulation, scientific computing, and software configuration and
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-03
...] Guidances for Industry and Food and Drug Administration Staff: Computer-Assisted Detection Devices Applied... Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to... guidance, entitled ``Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device...
Study of the Use of Time-Mean Vortices to Generate Lift for MAV Applications
2011-05-31
A suspended microplate was fabricated via MEMS technology and driven to in-plane resonance via Lorentz force. Computational effort centers around optimization of a range of parameters (geometry, frequency, amplitude of oscillation, etc.).
A General Approach to Measuring Test-Taking Effort on Computer-Based Tests
ERIC Educational Resources Information Center
Wise, Steven L.; Gao, Lingyun
2017-01-01
There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
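The manager-worker communication strategy and task-queue load balancing described in the abstract above can be sketched in a few lines. This is a toy illustration, not the TEAM-PVM implementation: the thread-based "workers," the `solve_block` workload, and all names are hypothetical stand-ins for processes on distributed workstations.

```python
# Minimal manager-worker sketch with task-queue load balancing: idle workers
# pull the next block of work as soon as they finish their current one.
from queue import Queue
from threading import Thread

def solve_block(block_id: int) -> float:
    # Stand-in for one flow-solver sweep over a grid block.
    return float(block_id) ** 0.5

def worker(tasks: "Queue[int]", results: "Queue[tuple[int, float]]") -> None:
    while True:
        block = tasks.get()
        if block is None:          # sentinel: manager has no more work
            break
        results.put((block, solve_block(block)))

def manager(blocks: list, n_workers: int = 4) -> dict:
    tasks: Queue = Queue()
    results: Queue = Queue()
    threads = [Thread(target=worker, args=(tasks, results)) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for b in blocks:               # dynamic balancing: work is pulled, not assigned
        tasks.put(b)
    for _ in threads:
        tasks.put(None)            # one sentinel per worker
    for t in threads:
        t.join()
    out = {}
    while not results.empty():
        block, residual = results.get()
        out[block] = residual
    return out

print(sorted(manager([1, 4, 9, 16]).items()))
```

Static load balancing, by contrast, would partition `blocks` among workers up front, which is simpler but leaves fast workers idle when block costs are uneven.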
Fechner, Hanna B; Schooler, Lael J; Pachur, Thorsten
2018-01-01
Several theories of cognition distinguish between strategies that differ in the mental effort that their use requires. But how can the effort-or cognitive costs-associated with a strategy be conceptualized and measured? We propose an approach that decomposes the effort a strategy requires into the time costs associated with the demands for using specific cognitive resources. We refer to this approach as resource demand decomposition analysis (RDDA) and instantiate it in the cognitive architecture Adaptive Control of Thought-Rational (ACT-R). ACT-R provides the means to develop computer simulations of the strategies. These simulations take into account how strategies interact with quantitative implementations of cognitive resources and incorporate the possibility of parallel processing. Using this approach, we quantified, decomposed, and compared the time costs of two prominent strategies for decision making, take-the-best and tallying. Because take-the-best often ignores information and foregoes information integration, it has been considered simpler than strategies like tallying. However, in both ACT-R simulations and an empirical study we found that under increasing cognitive demands the response times (i.e., time costs) of take-the-best sometimes exceeded those of tallying. The RDDA suggested that this pattern is driven by greater requirements for working memory updates, memory retrievals, and the coordination of mental actions when using take-the-best compared to tallying. The results illustrate that assessing the relative simplicity of strategies requires consideration of the overall cognitive system in which the strategies are embedded. Copyright © 2017 Elsevier B.V. All rights reserved.
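The two decision strategies compared in the study above are well defined enough to sketch directly. The cue values and validity ordering below are toy data, not the study's materials; this only illustrates why take-the-best can ignore information that tallying must integrate.

```python
# Toy implementations of two decision strategies for choosing between options
# A and B described by binary cues.

def take_the_best(cues_a, cues_b, validity_order):
    """Search cues in order of validity; decide on the first cue that
    discriminates, ignoring all remaining cues."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"

def tallying(cues_a, cues_b):
    """Integrate all cues with equal weight: pick the option with the
    larger count of positive cues."""
    ta, tb = sum(cues_a), sum(cues_b)
    if ta == tb:
        return "guess"
    return "A" if ta > tb else "B"

a = [1, 0, 1]   # option A's cue values
b = [0, 1, 1]   # option B's cue values
print(take_the_best(a, b, validity_order=[0, 1, 2]))  # "A": cue 0 discriminates
print(tallying(a, b))                                  # "guess": tallies tie, 2 vs 2
```

Note that the line counts here say nothing about time costs: as the study argues, the effort of each strategy depends on the memory retrievals and updates its steps demand within the whole cognitive system.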
Participation in multilateral effort to develop high performance integrated CPC evacuated collectors
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J. J.
1992-05-01
The University of Chicago Solar Energy Group has had a continuing program and commitment to develop an advanced evacuated solar collector integrating nonimaging concentration into its design. During the period from 1985-1987, some of our efforts were directed toward designing and prototyping a manufacturable version of an Integrated Compound Parabolic Concentrator (ICPC) evacuated collector tube as part of an international cooperative effort involving six organizations in four different countries. This 'multilateral' project made considerable progress towards a commercially practical collector. One of two basic designs considered employed a heat pipe and an internal metal reflector CPC. We fabricated and tested two large diameter (125 mm) borosilicate glass collector tubes to explore this concept. The other design also used a large diameter (125 mm) glass tube but with a specially configured internal shaped mirror CPC coupled to a U-tube absorber. Performance projections in a variety of systems applications using the computer design tools developed by the International Energy Agency (IEA) task on evacuated collectors were used to optimize the optical and thermal design. The long-term goal of this work continues to be the development of a high efficiency, low cost solar collector to supply solar thermal energy at temperatures up to 250 C. Some experience and perspectives based on our work are presented and reviewed. Despite substantial progress, the stability of research support and the market for commercial solar thermal collectors were such that the project could not be continued. A cooperative path involving university, government, and industrial collaboration remains the most attractive near term option for developing a commercial ICPC.
Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron
2016-10-01
The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with an international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools-VTK, ITK, CMake, CDash, DCMTK-were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Bennett, Jerome (Technical Monitor)
2002-01-01
The NASA Center for Computational Sciences (NCCS) is a high-performance scientific computing facility operated, maintained and managed by the Earth and Space Data Computing Division (ESDCD) of NASA Goddard Space Flight Center's (GSFC) Earth Sciences Directorate. The mission of the NCCS is to advance leading-edge science by providing the best people, computers, and data storage systems to NASA's Earth and space sciences programs and those of other U.S. Government agencies, universities, and private institutions. Among the many computationally demanding Earth science research efforts supported by the NCCS in Fiscal Year 1999 (FY99) are the NASA Seasonal-to-Interannual Prediction Project, the NASA Search and Rescue Mission, Earth gravitational model development efforts, the National Weather Service's North American Observing System program, Data Assimilation Office studies, a NASA-sponsored project at the Center for Ocean-Land-Atmosphere Studies, a NASA-sponsored microgravity project conducted by researchers at the City University of New York and the University of Pennsylvania, the completion of a satellite-derived global climate data set, simulations of a new geodynamo model, and studies of Earth's torque. This document presents highlights of these research efforts and an overview of the NCCS, its facilities, and its people.
van den Berg, Yvonne H M; Gommans, Rob
2017-09-01
New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the questionnaire and asking respondents to fill it in on computers. In this chapter the advantages and challenges of computer-based assessments are discussed. In addition, a list of practical recommendations and considerations is provided to inform researchers on how computer-based methods can be applied to their own research. Although the focus is on the collection of peer nomination data in particular, many of the requirements, considerations, and implications are also relevant for those who consider the use of other sociometric assessment methods (e.g., paired comparisons, peer ratings, peer rankings) or computer-based assessments in general. © 2017 Wiley Periodicals, Inc.
Hardware Considerations for Computer Based Education in the 1980's.
ERIC Educational Resources Information Center
Hirschbuhl, John J.
1980-01-01
In the future, computers will be needed to sift through the vast proliferation of available information. Among new developments in computer technology are the videodisc microcomputers and holography. Predictions for future developments include laser libraries for the visually handicapped and Computer Assisted Dialogue. (JN)
Computerizing the Accounting Curriculum.
ERIC Educational Resources Information Center
Nash, John F.; England, Thomas G.
1986-01-01
Discusses the use of computers in college accounting courses. Argues that the success of new efforts in using computers in teaching accounting is dependent upon increasing instructors' computer skills, and choosing appropriate hardware and software, including commercially available business software packages. (TW)
Computers in Schools: White Boys Only?
ERIC Educational Resources Information Center
Hammett, Roberta F.
1997-01-01
Discusses the role of computers in today's world and the construction of computer use attitudes, such as gender gaps. Suggests how schools might close the gaps. Includes a brief explanation about how facility with computers is important for women in their efforts to gain equitable treatment in all aspects of their lives. (PA)
Computers and Instruction: Implications of the Rising Tide of Criticism for Reading Education.
ERIC Educational Resources Information Center
Balajthy, Ernest
1988-01-01
Examines two major reasons that schools have adopted computers without careful prior examination and planning. Surveys a variety of criticisms targeted toward some aspects of computer-based instruction in reading in an effort to direct attention to the beneficial implications of computers in the classroom. (MS)
Preparing Future Secondary Computer Science Educators
ERIC Educational Resources Information Center
Ajwa, Iyad
2007-01-01
Although nearly every college offers a major in computer science, many computer science teachers at the secondary level have received little formal training. This paper presents details of a project that could make a significant contribution to national efforts to improve computer science education by combining teacher education and professional…
A model for the optimization of the detection and eradication of isolated gypsy moth colonies
Tiffany L. Bogich; Andrew Liebhold; Katriona Shea
2007-01-01
Biological invasions of pest species pose a threat to the stability of ecosystems, both natural and managed (Liebhold et al. 1995, Shogren and Tschirhart 2005). Considerable effort is expended by national and local governments on excluding alien species via detection and eradication of invading populations, but these efforts are not necessarily designed in the most...
P.J. Radtke; D.M. Walker; A.R. Weiskittel; J. Frank; J.W. Coulston; J.A. Westfall
2015-01-01
Forest mensurationists in the United States have expended considerable effort over the past century making detailed observations of trees' dimensions. In recent decades efforts have focused increasingly on weights and physical properties. Work is underway to compile original measurements from past volume, taper, and weight or biomass studies for North American tree...
Computers for the Faculty: How on a Limited Budget.
ERIC Educational Resources Information Center
Arman, Hal; Kostoff, John
An informal investigation of the use of computers at Delta College (DC) in Michigan revealed reasonable use of computers by faculty in disciplines such as mathematics, business, and technology, but very limited use in the humanities and social sciences. In an effort to increase faculty computer usage, DC decided to make computers available to any…
Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming
Philip A. Araman
1990-01-01
This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...
Biomechanics of Head, Neck, and Chest Injury Prevention for Soldiers: Phase 2 and 3
2016-08-01
understanding of the biomechanics of the head and brain. Task 2.3 details the computational modeling efforts conducted to evaluate the response of the cervical spine and the effects of cervical arthrodesis and arthroplasty. The section also details the progress made on the development of a testing apparatus to evaluate cervical spine implants in survivable loading scenarios.
Limits on fundamental limits to computation.
Markov, Igor L
2014-08-14
An indispensable part of our personal and working lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the past fifty years. Such Moore scaling now requires ever-increasing efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and increase our understanding of integrated-circuit scaling, here I review fundamental limits to computation in the areas of manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, I recapitulate how some limits were circumvented, and compare loose and tight limits. Engineering difficulties encountered by emerging technologies may indicate yet unknown limits.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.
Research considerations when studying disasters.
Cox, Catherine Wilson
2008-03-01
Nurses play an integral role during disasters because they are called upon more than any other health care professional during disaster response efforts; consequently, nurse researchers are interested in studying the issues that impact nurses in the aftermath of a disaster. This article offers research considerations for nurse scientists when developing proposals related to disaster research and identifies resources and possible funding sources for their projects.
Office workers' computer use patterns are associated with workplace stressors.
Eijckelhof, Belinda H W; Huysmans, Maaike A; Blatter, Birgitte M; Leider, Priscilla C; Johnson, Peter W; van Dieën, Jaap H; Dennerlein, Jack T; van der Beek, Allard J
2014-11-01
This field study examined associations between workplace stressors and office workers' computer use patterns. We collected keyboard and mouse activities of 93 office workers (68F, 25M) for approximately two work weeks. Linear regression analyses examined the associations between self-reported effort, reward, overcommitment, and perceived stress and software-recorded computer use duration, number of short and long computer breaks, and pace of input device usage. Daily duration of computer use was, on average, 30 min longer for workers with high compared to low levels of overcommitment and perceived stress. The number of short computer breaks (30 s-5 min long) was approximately 20% lower for those with high compared to low effort and for those with low compared to high reward. These outcomes support the hypothesis that office workers' computer use patterns vary across individuals with different levels of workplace stressors. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Medical Information Processing by Computer.
ERIC Educational Resources Information Center
Kleinmuntz, Benjamin
The use of the computer for medical information processing was introduced about a decade ago. Considerable inroads have now been made toward its applications to problems in medicine. Present uses of the computer, both as a computational and noncomputational device include the following: automated search of patients' files; on-line clinical data…
Why CBI? An Examination of the Case for Computer-Based Instruction.
ERIC Educational Resources Information Center
Dean, Peter M.
1977-01-01
Discussion of the use of computers in instruction includes the relationship of theory to practice, the interactive nature of computer instruction, an overview of the Keller Plan, cost considerations, strategy for use of computers in instruction and training, and a look at examination procedure. (RAO)
The performance of low-cost commercial cloud computing as an alternative in computational chemistry.
Thackston, Russell; Fortenberry, Ryan C
2015-05-05
The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
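The cost comparison implicit in the abstract above amortizes a machine purchase over its lifetime of jobs and weighs that against per-hour cloud rates. A hedged sketch of that arithmetic, with all prices and runtimes as hypothetical placeholders (not figures from the paper):

```python
# Toy amortized-cost comparison: owned workstation vs. on-demand cloud instance.
# Every number here is illustrative, not taken from the study.

def cost_per_job_owned(purchase_price: float, lifetime_jobs: int) -> float:
    """Hardware cost amortized over the total jobs run before retirement."""
    return purchase_price / lifetime_jobs

def cost_per_job_cloud(hourly_rate: float, job_hours: float) -> float:
    """On-demand cost: pay only for the hours a job actually runs."""
    return hourly_rate * job_hours

owned = cost_per_job_owned(purchase_price=6000.0, lifetime_jobs=10_000)
cloud = cost_per_job_cloud(hourly_rate=0.10, job_hours=2.0)
print(owned, cloud)       # 0.6 0.2
print(cloud < owned)      # True: cloud wins for this small-job profile
```

The crossover the paper identifies falls out of the same comparison: long jobs multiply the hourly term until the amortized owned machine becomes cheaper again.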
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data and models, freeing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing cross-platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
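The calibration loop described above — run the model thousands of times over sampled parameter sets, score each run against multiple objectives, keep the best — can be sketched as follows. The toy model, the objective weights, and the observed values are all hypothetical; the point is only the structure that makes each run independent and therefore trivially parallelizable.

```python
# Sketch of an automated multi-objective calibration loop. The "model" and
# "observed" data are toy placeholders, not a real natural-resources model.
import random

def model(params):
    # Stand-in for one model run: returns (peak_flow, peak_day).
    k, lag = params
    return (100.0 * k, 30 + lag)

def objective(simulated, observed):
    # Combine two calibration objectives: relative errors in peak magnitude
    # and peak timing, weighted equally here for simplicity.
    (sim_peak, sim_day), (obs_peak, obs_day) = simulated, observed
    return abs(sim_peak - obs_peak) / obs_peak + abs(sim_day - obs_day) / obs_day

def auto_calibrate(observed, n_runs=1000, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_runs):                  # each run is independent, so this
        params = (rng.uniform(0.1, 2.0),     # loop distributes across idle
                  rng.randint(-10, 10))      # machines with no communication
        score = objective(model(params), observed)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = auto_calibrate(observed=(150.0, 33))
print(round(score, 3))
```

Because each run needs only its parameter vector in and a few statistics out, the same loop maps directly onto the null-cycle, SETI@Home-style distribution the abstract describes.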
Evidence-based pathology in its second decade: toward probabilistic cognitive computing.
Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R
2017-03-01
Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
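The exact calculation described above combines contributions over multiple causal earthquakes and GMPMs by total probability: the conditional mean is the deaggregation-weighted mean, and the conditional variance adds the between-contributor spread to the weighted within-contributor variance. The three (weight, mean, standard deviation) triples below are invented numbers for illustration only:

```python
# Hypothetical deaggregation: each causal (scenario, GMPM) pair carries a
# weight p (its contribution to hazard at the conditioning period) and a
# conditional mean / std dev of ln Sa at some other period T.
contributors = [
    # (weight p, conditional mean of lnSa(T), conditional std of lnSa(T))
    (0.5, -1.00, 0.30),
    (0.3, -1.20, 0.35),
    (0.2, -0.80, 0.25),
]

# Total probability: mean = sum p*m ; var = sum p*(s^2 + (m - mean)^2).
mean = sum(p * m for p, m, s in contributors)
var = sum(p * (s**2 + (m - mean)**2) for p, m, s in contributors)
std = var**0.5
print(f"exact CS at T: mean={mean:.3f}, std={std:.3f}")
```

The single-scenario approximation keeps only the largest-weight row and therefore ignores the `(m - mean)**2` term, which is why it tends to understate the conditional standard deviation when the deaggregation is spread over several scenarios.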
[Computer eyeglasses--aspects of a confusing topic].
Huber-Spitzy, V; Janeba, E
1997-01-01
With the coming into force of the new Austrian Employee Protection Act the issue of the so called "computer glasses" will also gain added importance in our country. Such glasses have been defined as vision aids to be exclusively used for the work on computer monitors and include single-vision glasses solely intended for reading the computer screen, glasses with bifocal lenses for reading the computer screen and hard-copy documents as well as those with varifocal lenses featuring a thickened central section. There is still considerable controversy among those concerned as to who will bear the costs for such glasses--most likely it will be the employer. Prescription of such vision aids will be exclusively restricted to ophthalmologists, based on a thorough ophthalmological examination under adequate consideration of the specific working environment and the workplace requirements of the individual employee concerned.
Nurse-computer performance. Considerations for the nurse administrator.
Mills, M E; Staggers, N
1994-11-01
Regulatory reporting requirements and economic pressures to create a unified healthcare database are leading to the development of a fully computerized patient record. Nursing staff members will be responsible increasingly for using this technology, yet little is known about the interaction effect of staff characteristics and computer screen design on on-line accuracy and speed. In examining these issues, new considerations are raised for nurse administrators interested in facilitating staff use of clinical information systems.
2010-07-01
Cloud computing, an emerging form of computing in which users have access to scalable, on-demand capabilities that are provided through Internet... cloud computing, (2) the information security implications of using cloud computing services in the Federal Government, and (3) federal guidance and... efforts to address information security when using cloud computing. The complete report is titled Information Security: Federal Guidance Needed to
Bethge, Anja; Schumacher, Udo; Wedemann, Gero
2015-10-01
Despite considerable research efforts, the process of metastasis formation is still a subject of intense discussion, and even established models differ considerably in basic details and in the conclusions drawn from them. Mathematical and computational models add a new perspective to the research as they can quantitatively investigate the processes of metastasis and the effects of treatment. However, existing models look at only one treatment option at a time. We enhanced a previously developed computer model (called CaTSiT) that enables quantitative comparison of different metastasis formation models with clinical and experimental data, to include the effects of chemotherapy, external beam radiation, radioimmunotherapy and radioembolization. CaTSiT is based on a discrete event simulation procedure. The growth of the primary tumor and its metastases is modeled by a piecewise-defined growth function that describes the growth behavior of the primary tumor and metastases during various time intervals. The piecewise-defined growth function is composed of analytical functions describing the growth behavior of the tumor based on characteristics of the tumor, such as dormancy, or the effects of various therapies. The spreading of malignant cells into the blood is modeled by intravasation events, which are generated according to a rate function. Further events in the model describe the behavior of the released malignant cells until the formation of a new metastasis. The model is published under the GNU General Public License version 3. To demonstrate the application of the computer model, a case of a patient with a hepatocellular carcinoma and multiple metastases in the liver was simulated. Besides the untreated case, different treatments were simulated at two time points: one directly after diagnosis of the primary tumor and the other several months later. Except for early applied radioimmunotherapy, no treatment strategy was able to eliminate all metastases. 
These results emphasize the importance of early diagnosis and of proceeding with treatment even if no clinically detectable metastases are present at the time of diagnosis of the primary tumor. CaTSiT could be a valuable tool for quantitative investigation of the process of tumor growth and metastasis formation, including the effects of various treatment options. Copyright © 2015 Elsevier Inc. All rights reserved.
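The piecewise-defined growth function described above can be sketched with a Gompertz growth law interrupted by a treatment event. The parameter values and the instantaneous "log-kill" therapy effect below are illustrative assumptions, not CaTSiT's actual functions or fitted constants:

```python
import math

def gompertz(t, n0=1.0, nmax=1e11, a=0.006):
    """Gompertz growth law: n(t) = nmax * (n0/nmax)**exp(-a*t)."""
    return nmax * (n0 / nmax) ** math.exp(-a * t)

def piecewise_size(t, therapy_start=500.0, kill_fraction=0.99):
    """Piecewise-defined growth: untreated Gompertz growth up to
    therapy_start, then a hypothetical instantaneous kill of 99% of cells,
    followed by regrowth of the survivors on the same growth law."""
    if t <= therapy_start:
        return gompertz(t)
    survivors = gompertz(therapy_start) * (1.0 - kill_fraction)
    return gompertz(t - therapy_start, n0=survivors)

# The tumour shrinks sharply at the treatment time and then regrows;
# treating earlier leaves fewer surviving cells to regrow from.
print(f"{piecewise_size(499):.3g} -> {piecewise_size(501):.3g}")
```

In the full model each metastasis gets its own such piecewise function, seeded by an intravasation event drawn from the rate function.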
Akhter, Nasrin; Shehu, Amarda
2018-01-19
Due to the essential role that the three-dimensional conformation of a protein plays in regulating interactions with molecular partners, wet and dry laboratories seek biologically-active conformations of a protein to decode its function. Computational approaches are gaining prominence due to the labor and cost demands of wet laboratory investigations. Template-free methods can now compute thousands of conformations known as decoys, but selecting native conformations from the generated decoys remains challenging. Repeatedly, research has shown that the protein energy functions whose minima are sought in the generation of decoys are unreliable indicators of nativeness. The prevalent approach ignores energy altogether and clusters decoys by conformational similarity. Complementary recent efforts design protein-specific scoring functions or train machine learning models on labeled decoys. In this paper, we show that an informative consideration of energy can be carried out under the energy landscape view. Specifically, we leverage local structures known as basins in the energy landscape probed by a template-free method. We propose and compare various strategies of basin-based decoy selection that we demonstrate are superior to clustering-based strategies. The presented results point to further directions of research for improving decoy selection, including the ability to properly consider the multiplicity of native conformations of proteins.
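One simple way to realize basin-based selection of the kind discussed above is a leader-style sweep from lowest to highest energy, so that each basin is represented by its lowest-energy member. This is a hedged illustrative sketch (the 1-D "conformations", the distance cutoff, and the two-well energy are all toy assumptions, not the authors' method):

```python
def basin_select(decoys, dist, cutoff=2.0, top=3):
    """Sweep decoys from lowest to highest energy; each decoy joins the
    basin of the first (lower-energy) seed within `cutoff`, or founds a
    new basin.  Return the seeds of the `top` most-populated basins."""
    basins = []
    for d in sorted(decoys, key=lambda d: d["energy"]):
        for b in basins:
            if dist(d, b["seed"]) < cutoff:
                b["members"].append(d)
                break
        else:
            basins.append({"seed": d, "members": [d]})
    basins.sort(key=lambda b: len(b["members"]), reverse=True)
    return [b["seed"] for b in basins[:top]]

# Toy 1-D "conformations": x stands in for structure, with two energy
# wells near x=0 and x=10 and a small per-decoy noise term.
decoys = [{"x": x, "energy": min((x - 0) ** 2, (x - 10) ** 2) + 0.1 * i}
          for i, x in enumerate([0.5, 1.0, 9.5, 10.2, 0.2, 9.9, 5.0])]
picks = basin_select(decoys, dist=lambda a, b: abs(a["x"] - b["x"]))
print([round(p["x"], 1) for p in picks])
```

Unlike pure clustering, the seed of each basin is chosen by energy, so the selection uses the landscape's local minima rather than population alone.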
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2017-08-01
A lumped-parameter model for vanadium redox flow batteries, which use metallic current collectors, is extended into a one-dimensional model using the plug flow reactor principle. Thus, the commonly used simplification of a perfectly mixed cell is no longer required. The resistances of the cell components are derived in the in-plane and through-plane directions. The copper current collector is the only component with a significant in-plane conductance, which allows for a simplified electrical network. The division of a full-scale flow cell into 10 layers in the direction of fluid flow represents a reasonable compromise between computational effort and accuracy. Due to the variations in the state of charge and thus the open circuit voltage of the electrolyte, the currents in the individual layers vary considerably. Hence, there are situations, in which the first layer, directly at the electrolyte input, carries a multiple of the last layer's current. The conventional model overestimates the cell performance. In the worst-case scenario, the more accurate 20-layer model yields a discharge capacity 9.4% smaller than that computed with the conventional model. The conductive current collector effectively eliminates the high over-potentials in the last layers of the plug flow reactor models that have been reported previously.
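The layer-current variation described above follows from the layers sharing one terminal voltage (through the conductive collector) while their local open-circuit voltages differ along the flow path. The sketch below uses an illustrative Nernst-type OCV and round-number resistances and SOC profile, not the paper's fitted cell parameters:

```python
import math

F, R, T = 96485.0, 8.314, 298.15   # Faraday const, gas const, temperature

def ocv(soc, e0=1.4):
    """Illustrative Nernst-type open-circuit voltage of a vanadium cell vs.
    state of charge (e0 and the factor 2 for the two half-cells are
    assumed round numbers, not fitted values)."""
    return e0 + 2 * (R * T / F) * math.log(soc / (1.0 - soc))

# Ten layers along the flow path during discharge: SOC (hence OCV) drops
# from inlet to outlet, while all layers see one terminal voltage.
socs = [0.80 - 0.05 * i for i in range(10)]   # hypothetical SOC profile
r_layer = 0.02                                 # ohm, per-layer resistance
v_term = 1.30                                  # common terminal voltage
currents = [(ocv(s) - v_term) / r_layer for s in socs]
print(f"inlet layer {currents[0]:.2f} A vs outlet layer {currents[-1]:.2f} A")
```

With these toy numbers the inlet layer already carries well over twice the outlet layer's current, which is the effect a perfectly mixed (lumped) model averages away.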
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
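Because individual rays need no inter-ray communication, the load-balancing problem described above is commonly handled by self-scheduling: workers pull the next ray when they finish the previous one, so uneven per-ray costs balance out. This is a generic sketch of that pattern, not the thesis's UNITY design (`trace_ray` is a hypothetical stand-in):

```python
from queue import Queue
from threading import Thread

def trace_ray(ray):
    """Hypothetical stand-in for tracing one ray and returning its
    contribution to the radar cross section."""
    return ray * 0.001  # toy result

def worker(tasks, results):
    # Self-scheduling loop: pull rays until the sentinel (None) arrives.
    while True:
        ray = tasks.get()
        if ray is None:
            break
        results.append(trace_ray(ray))

tasks, results = Queue(), []
for ray in range(1000):
    tasks.put(ray)
threads = [Thread(target=worker, args=(tasks, results)) for _ in range(4)]
for _ in threads:
    tasks.put(None)          # one shutdown sentinel per worker
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "rays traced")
```

A static decomposition would instead pre-assign fixed blocks of rays to each worker; it avoids the shared queue but loses efficiency when some rays (e.g., those with many bounces) cost far more than others.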
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.
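The cross-discipline equivalence exploited above can be illustrated with the simplest case: a series chain, where the same algebra gives a spring chain's tip displacement and a 1-D conductor's temperature rise. The Monte Carlo loop below is a crude generic stand-in for probabilistic reliability analysis, not NESSUS's fast probability integration; all numbers are invented:

```python
import random

def series_response(ks, load):
    """Series-chain response: u = f * sum(1/k_i).  With k = spring
    stiffness and f = force this is structural displacement; with k =
    thermal conductance and f = heat flow it is a temperature rise --
    one routine, two disciplines."""
    return load * sum(1.0 / k for k in ks)

def failure_probability(limit, trials=20000):
    """Toy Monte Carlo reliability: sample uncertain stiffnesses (or
    conductances) and count limit-state exceedances g = limit - u < 0."""
    random.seed(1)
    fails = 0
    for _ in range(trials):
        ks = [random.gauss(2000.0, 200.0), random.gauss(5000.0, 500.0)]
        if series_response(ks, 10.0) > limit:
            fails += 1
    return fails / trials

pf = failure_probability(limit=0.0075)
print(f"estimated failure probability: {pf:.3f}")
```

The point of the equivalence is that the probabilistic machinery (sampling, limit states, system reliability logic) never needs to know which discipline supplied `series_response`.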
Rohmer, Kai; Buschel, Wolfgang; Dachselt, Raimund; Grosch, Thorsten
2015-12-01
At present, photorealistic augmentation is not yet possible since the computational power of mobile devices is insufficient. Even streaming solutions from stationary PCs cause a latency that affects user interactions considerably. Therefore, we introduce a differential rendering method that allows for a consistent illumination of the inserted virtual objects on mobile devices, avoiding delays. The computation effort is shared between a stationary PC and the mobile devices to make use of the capacities available on both sides. The method is designed such that only a minimum amount of data has to be transferred asynchronously between the participants. This allows for an interactive illumination of virtual objects with a consistent appearance under both temporally and spatially varying real illumination conditions. To describe the complex near-field illumination in an indoor scenario, HDR video cameras are used to capture the illumination from multiple directions. In this way, sources of illumination can be considered that are not directly visible to the mobile device because of occlusions and the limited field of view. While our method focuses on Lambertian materials, we also provide some initial approaches to approximate non-diffuse virtual objects and thereby allow for a wider field of application at nearly the same cost.
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
2015-03-01
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front, ranging from issues of generativity to the replication of human experimental findings, by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach. Copyright © 2014 Cognitive Science Society, Inc.
De Novo Protein Structure Prediction
NASA Astrophysics Data System (ADS)
Hung, Ling-Hong; Ngan, Shing-Chung; Samudrala, Ram
An unparalleled amount of sequence data is being made available from large-scale genome sequencing efforts. The data provide a shortcut to the determination of the function of a gene of interest, as long as there is an existing sequenced gene with similar sequence and of known function. This has spurred structural genomic initiatives with the goal of determining as many protein folds as possible (Brenner and Levitt, 2000; Burley, 2000; Brenner, 2001; Heinemann et al., 2001). The purpose of this is twofold: First, the structure of a gene product can often lead to direct inference of its function. Second, since the function of a protein is dependent on its structure, direct comparison of the structures of gene products can be more sensitive than the comparison of sequences of genes for detecting homology. Presently, structural determination by crystallography and NMR techniques is still slow and expensive in terms of manpower and resources, despite attempts to automate the processes. Computer structure prediction algorithms, while not providing the accuracy of the traditional techniques, are extremely quick and inexpensive and can provide useful low-resolution data for structure comparisons (Bonneau and Baker, 2001). Given the immense number of structures which the structural genomic projects are attempting to solve, there would be a considerable gain even if the computer structure prediction approach were applicable to a subset of proteins.
Optimal Sampling of a Reaction Coordinate in Molecular Dynamics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2005-01-01
Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what are their probabilities and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods - those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates it is hardly surprising that efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy along the reaction coordinate. If the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.
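The non-uniform-sampling problem described above follows directly from F(x) = -kT ln P(x): regions where the free energy is high are visited exponentially rarely, so a plain histogram estimate is poorest exactly where F changes most. The toy double-well Metropolis run below (all parameters illustrative) makes this concrete:

```python
import math
import random

def metropolis_samples(u, n=100000, beta=1.0, step=0.5):
    """Plain Metropolis sampling of exp(-beta*U(x)) along a 1-D reaction
    coordinate, returning the visited x values."""
    random.seed(2)
    x, out = 0.0, []
    for _ in range(n):
        y = x + random.uniform(-step, step)
        if random.random() < math.exp(-beta * (u(y) - u(x))):
            x = y
        out.append(x)
    return out

# Toy double well with wells at x = +/-1 and a barrier of 2 kT at x = 0.
u = lambda x: 2.0 * (x * x - 1.0) ** 2
xs = metropolis_samples(u)

# F(x) = -kT ln P(x): count visits near the wells vs. on the barrier.
in_well = sum(1 for x in xs if abs(abs(x) - 1.0) < 0.1)
on_barrier = sum(1 for x in xs if abs(x) < 0.1)
print(f"well visits: {in_well}, barrier visits: {on_barrier}")
```

Methods such as umbrella sampling counter this by biasing the sampling toward the poorly visited values of the coordinate and correcting for the bias afterwards.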
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.
Alves, Frauke; Dullin, Christian; Napp, Joanna; Missbach-Guentner, Jeannine; Jannasch, Katharina; Mathejczyk, Julia; Pardo, Luis A; Stühmer, Walter; Tietze, Lutz-F
2009-05-01
Conventional chemotherapy of cancer has its limitations, especially in advanced and disseminated disease and suffers from lack of specificity. This results in a poor therapeutic index and considerable toxicity to normal organs. Therefore, many efforts are made to develop novel therapeutic tools against cancer with the aim of selectively targeting the drug to the tumour site. Drug delivery strategies fundamentally rely on the identification of good-quality biomarkers, allowing unequivocal discrimination between cancer and healthy tissue. At present, antibodies or antibody fragments have clearly proven their value as carrier molecules specific for a tumour-associated molecular marker. This present review draws attention to the use of near-infrared fluorescence (NIRF) imaging to investigate binding specificity and kinetics of carrier molecules such as monoclonal antibodies. In addition, flat-panel volume computed tomography (fpVCT) will be presented to monitor anatomical structures in tumour mouse models over time in a non-invasive manner. Each imaging device sheds light on a different aspect; functional imaging is applied to optimise the dose schedule and the concept of selective tumour therapies, whereas anatomical imaging assesses preclinically the efficacy of novel tumour therapies. Both imaging techniques in combination allow the visualisation of functional information obtained by NIRF imaging within an adequate anatomic framework.
ATPP: A Pipeline for Automatic Tractography-Based Brain Parcellation
Li, Hai; Fan, Lingzhong; Zhuo, Junjie; Wang, Jiaojian; Zhang, Yu; Yang, Zhengyi; Jiang, Tianzi
2017-01-01
There is a longstanding effort to parcellate the brain into areas based on micro-structural, macro-structural, or connectional features, forming various brain atlases. Among them, connectivity-based parcellation has gained much emphasis, especially with the considerable progress of multimodal magnetic resonance imaging in the past two decades. The Brainnetome Atlas published recently is such an atlas that follows the framework of connectivity-based parcellation. However, in the construction of the atlas, the deluge of high-resolution multimodal MRI data and time-consuming computation pose challenges, and publicly available tools dedicated to parcellation are still in short supply. In this paper, we present an integrated open source pipeline (https://www.nitrc.org/projects/atpp), named Automatic Tractography-based Parcellation Pipeline (ATPP), to realize the framework of parcellation with automatic processing and massive parallel computing. ATPP is developed to have a powerful and flexible command line version, taking multiple regions of interest as input, as well as a user-friendly graphical user interface version for parcellating a single region of interest. We demonstrate the two versions by parcellating two brain regions, the left precentral gyrus and middle frontal gyrus, on two independent datasets. In addition, ATPP has been successfully utilized and fully validated in a variety of brain regions and the human Brainnetome Atlas, showing the capacity to greatly facilitate brain parcellation. PMID:28611620
A kriging metamodel-assisted robust optimization method based on a reverse model
NASA Astrophysics Data System (ADS)
Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao
2018-02-01
The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.
Logistics in the Computer Lab.
ERIC Educational Resources Information Center
Cowles, Jim
1989-01-01
Discusses ways to provide good computer laboratory facilities for elementary and secondary schools. Topics discussed include establishing the computer lab and selecting hardware; types of software; physical layout of the room; printers; networking possibilities; considerations relating to the physical environment; and scheduling methods. (LRW)
Sinusitis: Special Considerations for Aging Patients
A Comprehensive Toolset for General-Purpose Private Computing and Outsourcing
2016-12-08
project and scientific advances made towards each of the research thrusts throughout the project duration. 1 Project Objectives Cloud computing enables... possibilities that the cloud enables is computation outsourcing, when the client can utilize any necessary computing resources for its computational task... Security considerations, however, stand in the way of harnessing the full benefits of cloud computing and prevent clients from
Future Computer Requirements for Computational Aerodynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Recent advances in computational aerodynamics are discussed as well as motivations for and potential benefits of a National Aerodynamic Simulation Facility having the capability to solve fluid dynamic equations at speeds two to three orders of magnitude faster than presently possible with general computers. Two contracted efforts to define processor architectures for such a facility are summarized.
2010-06-01
Dates covered: APR 2009 - JAN 2010. Title: Emerging Neuromorphic Computing Architectures and Enabling... Abstract: The highly cross-disciplinary emerging field of neuromorphic computing architectures for cognitive information processing applications... belief systems, software, computer engineering, etc. In our effort to develop cognitive systems atop a neuromorphic computing architecture, we explored
Overview 1993: Computational applications
NASA Technical Reports Server (NTRS)
Benek, John A.
1993-01-01
Computational applications include projects that apply or develop computationally intensive computer programs. Such programs typically require supercomputers to obtain solutions in a timely fashion. This report describes two CSTAR projects involving Computational Fluid Dynamics (CFD) technology. The first, the Parallel Processing Initiative, is a joint development effort and the second, the Chimera Technology Development, is a transfer of government developed technology to American industry.
NASA Technical Reports Server (NTRS)
Pavlock, Kate Maureen
2013-01-01
Although the scope of flight test engineering efforts may vary among organizations, all point to a common theme: flight test engineering is an interdisciplinary effort to test an asset in its operational flight environment. Upfront planning where design, implementation, and test efforts are clearly aligned with the flight test objective are keys to success. This chapter provides a top level perspective of flight test engineering for the non-expert. Additional research and reading on the topic is encouraged to develop a deeper understanding of specific considerations involved in each phase of flight test engineering.
Computing at the speed limit (supercomputers)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernhard, R.
1982-07-01
The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers-about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.
Computer Augmented Video Education.
ERIC Educational Resources Information Center
Sousa, M. B.
1979-01-01
Describes project CAVE (Computer Augmented Video Education), an ongoing effort at the U.S. Naval Academy to present lecture material on videocassette tape, reinforced by drill and practice through an interactive computer system supported by a 12 channel closed circuit television distribution and production facility. (RAO)
Computer Guided Instructional Design.
ERIC Educational Resources Information Center
Merrill, M. David; Wood, Larry E.
1984-01-01
Describes preliminary efforts to create the Lesson Design System, a computer-guided instructional design system written in Pascal for Apple microcomputers. Its content outline, strategy, display, and online lesson editors correspond roughly to instructional design phases of content and strategy analysis, display creation, and computer programming…
CAROLINA CENTER FOR COMPUTATIONAL TOXICOLOGY
The Center will advance the field of computational toxicology through the development of new methods and tools, as well as through collaborative efforts. In each Project, new computer-based models will be developed and published that represent the state-of-the-art. The tools p...
NASA Astrophysics Data System (ADS)
Boehm, R. F.
1985-09-01
A review of thermodynamic principles is given in an effort to see if these concepts may indicate possibilities for improvements in solar central receiver power plants. Aspects related to rate limitations in cycles, thermodynamic availability of solar radiation, and sink temperature considerations are noted. It appears that considerably higher instantaneous plant efficiencies are possible by raising the maximum temperature and lowering the minimum temperature of the cycles. Of course, many practical engineering problems will have to be solved to realize the promised benefits.
Phylotastic! Making tree-of-life knowledge accessible, reusable and convenient.
Stoltzfus, Arlin; Lapp, Hilmar; Matasci, Naim; Deus, Helena; Sidlauskas, Brian; Zmasek, Christian M; Vaidya, Gaurav; Pontelli, Enrico; Cranston, Karen; Vos, Rutger; Webb, Campbell O; Harmon, Luke J; Pirrung, Megan; O'Meara, Brian; Pennell, Matthew W; Mirarab, Siavash; Rosenberg, Michael S; Balhoff, James P; Bik, Holly M; Heath, Tracy A; Midford, Peter E; Brown, Joseph W; McTavish, Emily Jane; Sukumaran, Jeet; Westneat, Mark; Alfaro, Michael E; Steele, Aaron; Jordan, Greg
2013-05-13
Scientists rarely reuse expert knowledge of phylogeny, in spite of years of effort to assemble a great "Tree of Life" (ToL). A notable exception involves the use of Phylomatic, which provides tools to generate custom phylogenies from a large, pre-computed, expert phylogeny of plant taxa. This suggests great potential for a more generalized system that, starting with a query consisting of a list of any known species, would rectify non-standard names, identify expert phylogenies containing the implicated taxa, prune away unneeded parts, and supply branch lengths and annotations, resulting in a custom phylogeny suited to the user's needs. Such a system could become a sustainable community resource if implemented as a distributed system of loosely coupled parts that interact through clearly defined interfaces. With the aim of building such a "phylotastic" system, the NESCent Hackathons, Interoperability, Phylogenies (HIP) working group recruited 2 dozen scientist-programmers to a weeklong programming hackathon in June 2012. During the hackathon (and a three-month follow-up period), 5 teams produced designs, implementations, documentation, presentations, and tests including: (1) a generalized scheme for integrating components; (2) proof-of-concept pruners and controllers; (3) a meta-API for taxonomic name resolution services; (4) a system for storing, finding, and retrieving phylogenies using semantic web technologies for data exchange, storage, and querying; (5) an innovative new service, DateLife.org, which synthesizes pre-computed, time-calibrated phylogenies to assign ages to nodes; and (6) demonstration projects. These outcomes are accessible via a public code repository (GitHub.com), a website (http://www.phylotastic.org), and a server image. 
Approximately 9 person-months of effort (centered on a software development hackathon) resulted in the design and implementation of proof-of-concept software for 4 core phylotastic components, 3 controllers, and 3 end-user demonstration tools. While these products have substantial limitations, they suggest considerable potential for a distributed system that makes phylogenetic knowledge readily accessible in computable form. Widespread use of phylotastic systems will create an electronic marketplace for sharing phylogenetic knowledge that will spur innovation in other areas of the ToL enterprise, such as annotation of sources and methods and third-party methods of quality assessment.
10 CFR 473.30 - Standards and criteria.
Code of Federal Regulations, 2010 CFR
2010-01-01
... efforts previously abandoned by private researchers unless there has been an intervening technological advance, promising conceptual innovation, or justified by other special consideration; (c) Would not be...
10 CFR 473.30 - Standards and criteria.
Code of Federal Regulations, 2011 CFR
2011-01-01
... efforts previously abandoned by private researchers unless there has been an intervening technological advance, promising conceptual innovation, or justified by other special consideration; (c) Would not be...
Computers as an Instrument for Data Analysis. Technical Report No. 11.
ERIC Educational Resources Information Center
Muller, Mervin E.
A review of statistical data analysis involving computers as a multi-dimensional problem provides the perspective for consideration of the use of computers in statistical analysis and the problems associated with large data files. An overall description of STATJOB, a particular system for doing statistical data analysis on a digital computer,…
NASA Astrophysics Data System (ADS)
Postpischl, L.; Morelli, A.; Danecek, P.
2009-04-01
Formats used to represent (and distribute) tomographic earth models differ considerably and are rarely self-consistent. In fact, each earth scientist, or research group, uses specific conventions to encode the various parameterizations used to describe, e.g., seismic wave speed or density in three dimensions, and complete information is often found only in related documents or publications (if available at all). As a consequence, using tomographic models from different authors requires considerable effort, is more cumbersome than it should be, and prevents widespread exchange and circulation within the community. We propose a format, based on modern web standards, able to represent different (grid-based) model parameterizations within the same simple text-based environment, easy to write, to parse, and to visualise. The aim is the creation of self-describing data structures, both human and machine readable, that are automatically recognised by general-purpose software agents and easily imported into the scientific programming environment. We think that the adoption of such a representation as a standard for the exchange and distribution of earth models can greatly ease their usage and enhance their circulation, both among fellow seismologists and among a broader non-specialist community. The proposed solution uses semantic web technologies, fully fitting the current trends in data accessibility. It is based on JSON (JavaScript Object Notation), a plain-text, human-readable, lightweight computer data interchange format, which adopts a hierarchical name-value model for representing simple data structures and associative arrays (called objects). Our implementation allows integration of large datasets with metadata (authors, affiliations, bibliographic references, units of measure, etc.) into a single resource.
It is equally suited to represent other geo-referenced volumetric quantities, beyond tomographic models, as well as (structured and unstructured) computational meshes. This approach can exploit the capabilities of the web browser as a computing platform: a series of in-page quick tools for comparative analysis between models will be presented, as well as visualisation techniques for tomographic layers in Google Maps and Google Earth. We are working on tools for conversion into common scientific formats like netCDF, to allow easy visualisation in GEON-IDV or GMT.
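The abstract above describes a self-describing, JSON-based encoding for grid-based earth models that bundles values with metadata. A minimal sketch of the idea using Python's standard json module; every field name and value below is invented for illustration and is not the authors' actual schema:

```python
import json

# Hypothetical self-describing, grid-based tomographic model encoded as
# JSON; field names and values are illustrative assumptions only.
model = {
    "metadata": {
        "authors": ["A. Example"],
        "reference": "Example et al. (2009)",
        "units": {"vs": "km/s"},
    },
    "parameterization": {
        "type": "regular-grid",
        "lat": [0.0, 1.0], "lon": [10.0, 11.0], "depth_km": [50.0, 100.0],
    },
    # values indexed as [depth][lat][lon]
    "values": {"vs": [[[4.5, 4.6], [4.4, 4.7]], [[4.8, 4.9], [4.6, 5.0]]]},
}

text = json.dumps(model, indent=2)   # human-readable plain text
restored = json.loads(text)          # machine-readable round trip
print(restored["values"]["vs"][0][1][1])  # → 4.7
```

The round trip through plain text is the point: the same structure is readable by a person, parseable by any JSON library, and carries its own metadata.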
Human Factors Considerations in System Design
NASA Technical Reports Server (NTRS)
Mitchell, C. M. (Editor); Vanbalen, P. M. (Editor); Moe, K. L. (Editor)
1983-01-01
Human factors considerations in system design were examined. Human factors in automated command and control, the efficiency of the human-computer interface, and system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks, and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; and information display and interaction in real-time environments.
Teaching Computer Applications.
ERIC Educational Resources Information Center
Lundgren, Carol A.; And Others
This document, which is designed to provide classroom teachers at all levels with practical ideas for a computer applications course, examines curricular considerations, teaching strategies, delivery techniques, and assessment methods applicable to a course focusing on applications of computers in business. The guide is divided into three…
Notebook Computers Increase Communication.
ERIC Educational Resources Information Center
Carey, Doris M.; Sale, Paul
1994-01-01
Project FIT (Full Inclusion through Technology) provides notebook computers for children with severe disabilities. The computers offer many input and output options. Assessing the students' equipment needs is a complex process, requiring determination of communication goals and baseline abilities, and consideration of equipment features such as…
Langley's Computational Efforts in Sonic-Boom Softening of the Boeing HSCT
NASA Technical Reports Server (NTRS)
Fouladi, Kamran
1999-01-01
NASA Langley's computational efforts in the sonic-boom softening of the Boeing high-speed civil transport are discussed in this paper. In these efforts, an optimization process using a higher order Euler method for analysis was employed to reduce the sonic boom of a baseline configuration through fuselage camber and wing dihedral modifications. Fuselage modifications did not provide any improvements, but the dihedral modifications were shown to be an important tool for the softening process. The study also included aerodynamic and sonic-boom analyses of the baseline and some of the proposed "softened" configurations. Comparisons of two Euler methodologies and two propagation programs for sonic-boom predictions are also discussed in the present paper.
Efficient Reverse-Engineering of a Developmental Gene Regulatory Network
Cicin-Sain, Damjan; Ashyraliyev, Maksat; Jaeger, Johannes
2012-01-01
Understanding the complex regulatory networks underlying development and evolution of multi-cellular organisms is a major problem in biology. Computational models can be used as tools to extract the regulatory structure and dynamics of such networks from gene expression data. This approach is called reverse engineering. It has been successfully applied to many gene networks in various biological systems. However, to reconstitute the structure and non-linear dynamics of a developmental gene network in its spatial context remains a considerable challenge. Here, we address this challenge using a case study: the gap gene network involved in segment determination during early development of Drosophila melanogaster. A major problem for reverse-engineering pattern-forming networks is the significant amount of time and effort required to acquire and quantify spatial gene expression data. We have developed a simplified data processing pipeline that considerably increases the throughput of the method, but results in data of reduced accuracy compared to those previously used for gap gene network inference. We demonstrate that we can infer the correct network structure using our reduced data set, and investigate minimal data requirements for successful reverse engineering. Our results show that timing and position of expression domain boundaries are the crucial features for determining regulatory network structure from data, while it is less important to precisely measure expression levels. Based on this, we define minimal data requirements for gap gene network inference. Our results demonstrate the feasibility of reverse-engineering with much reduced experimental effort. This enables more widespread use of the method in different developmental contexts and organisms. Such systematic application of data-driven models to real-world networks has enormous potential. 
Only the quantitative investigation of a large number of developmental gene regulatory networks will allow us to discover whether there are rules or regularities governing development and evolution of complex multi-cellular organisms. PMID:22807664
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.
A Tractable Disequilibrium Framework for Integrating Computational Thermodynamics and Geodynamics
NASA Astrophysics Data System (ADS)
Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.
2017-12-01
The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of De Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases, and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔGr) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return chemical potentials of all components and phases (as well as thermodynamic quantities for each phase, e.g., densities, heat capacities, entropies), but is not constrained to be in thermodynamic equilibrium. Allowing meta-stable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, for closed systems, these problems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well-understood Fo-Fa binary phase loop.
Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as other rheologies.
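The reaction affinity used above as a scalar measure of disequilibrium follows directly from chemical potentials: A = -ΔGr = -Σ νi μi, with A = 0 at equilibrium. A minimal sketch under assumed numbers; the species names and chemical potentials below are invented for illustration, not ENKI model output:

```python
# Affinity of a stoichiometric reaction, A = -ΔGr = -Σ ν_i μ_i.
# Convention: ν < 0 for reactants, ν > 0 for products.
def affinity(stoich, mu):
    """stoich: {species: ν}; mu: {species: chemical potential, J/mol}."""
    dG_r = sum(nu * mu[s] for s, nu in stoich.items())
    return -dG_r

# Hypothetical solid -> melt exchange with made-up potentials (J/mol);
# the product's lower potential makes the forward reaction favorable.
mu = {"Fo_solid": -2.050e6, "Fo_melt": -2.053e6}
stoich = {"Fo_solid": -1, "Fo_melt": +1}

A = affinity(stoich, mu)
print(A)  # → 3000.0; positive affinity drives the forward reaction
```

A thermodynamic model that returns chemical potentials per phase is all such a calculation needs, which is the decoupling the abstract emphasizes.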
Overview of NASA/OAST efforts related to manufacturing technology
NASA Technical Reports Server (NTRS)
Saunders, N. T.
1976-01-01
An overview of some of NASA's current efforts related to manufacturing technology and some possible directions for the future are presented. The topics discussed are: computer-aided design, composite structures, and turbine engine components.
Data Network Weather Service Reporting - Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Frey
2012-08-30
A final report is made of a three-year effort to develop a new forecasting paradigm for computer network performance. This effort was made in coordination with Fermilab's construction of the e-Weather Center.
Defining Computational Thinking for Mathematics and Science Classrooms
NASA Astrophysics Data System (ADS)
Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri
2016-02-01
Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new urgency has come to the challenge of defining computational thinking and providing a theoretical grounding for what form it should take in school science and mathematics classrooms. This paper presents a response to this challenge by proposing a definition of computational thinking for mathematics and science in the form of a taxonomy consisting of four main categories: data practices, modeling and simulation practices, computational problem solving practices, and systems thinking practices. In formulating this taxonomy, we draw on the existing computational thinking literature, interviews with mathematicians and scientists, and exemplary computational thinking instructional materials. This work was undertaken as part of a larger effort to infuse computational thinking into high school science and mathematics curricular materials. In this paper, we argue for the approach of embedding computational thinking in mathematics and science contexts, present the taxonomy, and discuss how we envision the taxonomy being used to bring current educational efforts in line with the increasingly computational nature of modern science and mathematics.
Frenning, Göran
2015-01-01
When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
• A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
• A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
• The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost. PMID:26150975
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
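The embarrassingly parallel pattern the tutorial describes amounts to farming identical, independent simulation chunks out to a worker pool and combining the results. The tutorial's examples use MATLAB and R; the Python sketch below is an assumption-laden stand-in, using pi estimation in place of a real risk model and a thread pool for compactness (CPU-bound Python code would normally use ProcessPoolExecutor with the same structure):

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def run_chunk(args):
    """One independent simulation chunk: count random points inside the
    unit quarter-circle. Each chunk gets its own reproducible RNG stream."""
    seed, n = args
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        hits += x * x + y * y <= 1.0
    return hits

def parallel_pi(n_chunks=4, n_per_chunk=50_000):
    # Chunks share no state, so they can run in any order on any worker.
    tasks = [(seed, n_per_chunk) for seed in range(n_chunks)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        hits = sum(pool.map(run_chunk, tasks))
    return 4.0 * hits / (n_chunks * n_per_chunk)

print(f"pi estimate: {parallel_pi():.3f} (math.pi = {math.pi:.3f})")
```

Per-chunk seeding is the key design choice: it keeps runs reproducible even though the chunks execute concurrently.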
Implementing Equal Access Computer Labs.
ERIC Educational Resources Information Center
Clinton, Janeen; And Others
This paper discusses the philosophy followed in Palm Beach County to adapt computer literacy curriculum, hardware, and software to meet the needs of all children. The Department of Exceptional Student Education and the Department of Instructional Computing Services cooperated in planning strategies and coordinating efforts to implement equal…
The Effort Paradox: Effort Is Both Costly and Valued.
Inzlicht, Michael; Shenhav, Amitai; Olivola, Christopher Y
2018-04-01
According to prominent models in cognitive psychology, neuroscience, and economics, effort (be it physical or mental) is costly: when given a choice, humans and non-human animals alike tend to avoid effort. Here, we suggest that the opposite is also true and review extensive evidence that effort can also add value. Not only can the same outcomes be more rewarding if we apply more (not less) effort, sometimes we select options precisely because they require effort. Given the increasing recognition of effort's role in motivation, cognitive control, and value-based decision-making, considering this neglected side of effort will not only improve formal computational models, but also provide clues about how to promote sustained mental effort across time. Copyright © 2018 Elsevier Ltd. All rights reserved.
76 FR 28443 - President's National Security Telecommunications Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-17
... Government's use of cloud computing; the Federal Emergency Management Agency's NS/EP communications... Commercial Satellite Mission Assurance; and the way forward for the committee's cloud computing effort. The...
System Award for developing a tool that has had a lasting influence on computing. Project Jupyter evolved from IPython, an effort pioneered by Fernando Pérez.
Range contraction in large pelagic predators
Worm, Boris; Tittensor, Derek P.
2011-01-01
Large reductions in the abundance of exploited land predators have led to significant range contractions for those species. This pattern can be formalized as the range–abundance relationship, a general macroecological pattern that has important implications for the conservation of threatened species. Here we ask whether similar responses may have occurred in highly mobile pelagic predators, specifically 13 species of tuna and billfish. We analyzed two multidecadal global data sets on the spatial distribution of catches and fishing effort targeting these species and compared these with available abundance time series from stock assessments. We calculated the effort needed to reliably detect the presence of a species and then computed observed range sizes in each decade from 1960 to 2000. Results suggest significant range contractions in 9 of the 13 species considered here (between 2% and 46% loss of observed range) and significant range expansions in two species (11–29% increase). Species that have undergone the largest declines in abundance and are of particular conservation concern tended to show the largest range contractions. These include all three species of bluefin tuna and several marlin species. In contrast, skipjack tuna, which may have increased its abundance in the Pacific, has also expanded its range size. These results mirror patterns described for many land predators, despite considerable differences in habitat, mobility, and dispersal, and imply ecological extirpation of heavily exploited species across parts of their range. PMID:21693644
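The detection logic described above (an effort level needed to reliably detect a species, then observed range computed per decade) can be sketched as follows; the effort threshold and cell data are invented for illustration and are not the study's values:

```python
# Sketch: a species' observed range in a decade is counted only over grid
# cells with enough fishing effort to reliably detect it. The threshold
# and the per-cell data below are hypothetical.
DETECTION_EFFORT = 100.0  # assumed effort units needed per cell

def observed_range(cells):
    """cells: list of (effort, caught) tuples, one per grid cell.
    Returns (occupied cells, reliably surveyed cells)."""
    surveyed = [c for c in cells if c[0] >= DETECTION_EFFORT]
    occupied = [c for c in surveyed if c[1]]
    return len(occupied), len(surveyed)

decade_1960 = [(150.0, True), (200.0, True), (90.0, True), (120.0, False)]
decade_2000 = [(300.0, True), (250.0, False), (180.0, False), (400.0, True)]
print(observed_range(decade_1960))  # → (2, 3)
print(observed_range(decade_2000))  # → (2, 4)
```

Restricting the count to reliably surveyed cells is what makes range sizes comparable across decades with very different effort distributions.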
Quantifying the Risk Posed by Potential Earth Impacts
NASA Astrophysics Data System (ADS)
Chesley, Steven R.; Chodas, Paul W.; Milani, Andrea; Valsecchi, Giovanni B.; Yeomans, Donald K.
2002-10-01
Predictions of future potential Earth impacts by near-Earth objects (NEOs) have become commonplace in recent years, and the rate of these detections is likely to accelerate as asteroid survey efforts continue to mature. In order to conveniently compare and categorize the numerous potential impact solutions being discovered we propose a new hazard scale that will describe the risk posed by a particular potential impact in both absolute and relative terms. To this end, we measure each event in two ways, first without any consideration of the event's time proximity or its significance relative to the so-called background threat, and then in the context of the expected risk from other objects over the intervening years until the impact. This approach is designed principally to facilitate communication among astronomers, and it is not intended for public communication of impact risks. The scale characterizes impacts across all impact energies, probabilities and dates, and it is useful, in particular, when dealing with those cases which fall below the threshold of public interest. The scale also reflects the urgency of the situation in a natural way and thus can guide specialists in assessing the computational and observational effort appropriate for a given situation. In this paper we describe the metrics introduced, and we give numerous examples of their application. This enables us to establish in rough terms the levels at which events become interesting to various parties.
The influence of inspiratory effort and emphysema on pulmonary nodule volumetry reproducibility.
Moser, J B; Mak, S M; McNulty, W H; Padley, S; Nair, A; Shah, P L; Devaraj, A
2017-11-01
To evaluate the impact of inspiratory effort and emphysema on reproducibility of pulmonary nodule volumetry. Eighty-eight nodules in 24 patients with emphysema were studied retrospectively. All patients had undergone volumetric inspiratory and end-expiratory thoracic computed tomography (CT) for consideration of bronchoscopic lung volume reduction. Inspiratory and expiratory nodule volumes were measured using commercially available software. Local emphysema extent was established by analysing a segmentation area extended circumferentially around each nodule (quantified as percent of lung with density of -950 HU or less). Lung volumes were established using the same software. Differences in inspiratory and expiratory nodule volumes were illustrated using the Bland-Altman test. The influences of percentage reduction in lung volume at expiration, local emphysema extent, and nodule size on nodule volume variability were tested with multiple linear regression. The majority of nodules (59/88 [67%]) showed an increased volume at expiration. Mean difference in nodule volume between expiration and inspiration was +7.5% (95% confidence interval: -24.1, 39.1%). No relationships were demonstrated between nodule volume variability and emphysema extent, degree of expiration, or nodule size. Expiration causes a modest increase in volumetry-derived nodule volumes; however, the effect is unpredictable. Local emphysema extent had no significant effect on volume variability in the present cohort. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
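The Bland-Altman summary reported above (mean percent volume difference with a 95% interval) can be sketched as follows; the nodule volumes are invented example data, not the study's measurements:

```python
import math

# Sketch of a Bland-Altman-style summary: percent difference between
# expiratory and inspiratory nodule volumes, with 95% limits of
# agreement (mean ± 1.96 SD). Volumes (mm^3) are hypothetical.
def bland_altman_percent(insp, expi):
    diffs = [100.0 * (e - i) / i for i, e in zip(insp, expi)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

insp = [100.0, 250.0, 80.0, 400.0]
expi = [108.0, 260.0, 86.0, 396.0]
mean, lo, hi = bland_altman_percent(insp, expi)
print(f"mean {mean:+.1f}% (95% LoA {lo:+.1f}% to {hi:+.1f}%)")
```

The wide limits of agreement relative to the mean difference are what the abstract means by a "modest" but "unpredictable" effect.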
Non-LTE modeling for the National Ignition Facility (and beyond)
NASA Astrophysics Data System (ADS)
Scott, H. A.; Hammel, B. A.; Hansen, S. B.
2012-05-01
Considerable progress has been made in the last year in the study of laser-driven inertial confinement fusion at the National Ignition Facility (NIF). Experiments have demonstrated symmetric capsule implosions with plasma conditions approaching those required for ignition. Improvements in computational models - in large part due to advances in non-LTE modeling - have resulted in simulations that match experimental results quite well for the X-ray drive, implosion symmetry and total wall emission [1]. Non-LTE modeling is a key part of the NIF simulation effort, affecting several aspects of experimental design and diagnostics. The X-rays that drive the capsule arise from high-Z material ablated off the hohlraum wall. Current capsule designs avoid excessive preheat from high-energy X-rays by shielding the fuel with a mid-Z dopant, which affects the capsule dynamics. The dopant also mixes into the hot spot through hydrodynamic instabilities, providing diagnostic possibilities but potentially impacting the energy balance of the capsule [2]. Looking beyond the NIF, a proposed design for a fusion reactor chamber depends on low-density high-Z gas absorbing X-rays and particles to protect the first wall [3]. These situations encompass a large range of temperatures, densities and spatial scales. They each emphasize different aspects of atomic physics and present a variety of challenges for non-LTE modeling. We discuss the relevant issues and summarize the current state of the modeling effort for these applications.
The Human-Computer Interface and Information Literacy: Some Basics and Beyond.
ERIC Educational Resources Information Center
Church, Gary M.
1999-01-01
Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…
Hirayama, Denise; Saron, Clodoaldo
2015-06-01
Polymeric materials constitute a considerable fraction of waste computer equipment, and the polymers acrylonitrile-butadiene-styrene and high-impact polystyrene are the main thermoplastic polymeric components found in waste computer equipment. Identification, separation, and characterisation of additives present in acrylonitrile-butadiene-styrene and high-impact polystyrene are fundamental procedures for mechanical recycling of these polymers. The aim of this study was to evaluate methods for identification of acrylonitrile-butadiene-styrene and high-impact polystyrene from waste computer equipment in Brazil, as well as their potential for mechanical recycling. The imprecise use of symbols for identification of the polymers and the presence of additives containing toxic elements in certain computer devices are some of the difficulties found in recycling acrylonitrile-butadiene-styrene and high-impact polystyrene from waste computer equipment. However, the good mechanical performance of the recycled acrylonitrile-butadiene-styrene and high-impact polystyrene when compared with the virgin materials confirms the potential for mechanical recycling of these polymers. © The Author(s) 2015.
Computational complexity of the landscape II-Cosmological considerations
NASA Astrophysics Data System (ADS)
Denef, Frederik; Douglas, Michael R.; Greene, Brian; Zukowski, Claire
2018-05-01
We propose a new approach for multiverse analysis based on computational complexity, which leads to a new family of "computational" measure factors. By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it. We argue for a principle of "limited computational complexity" governing early universe dynamics as simulated by this supercomputer, which translates to a global measure for regulating the infinities of eternal inflation. The rules for time evolution can be thought of as a search algorithm, whose details should be constrained by a stronger principle of "minimal computational complexity". Unlike previously studied global measures, ours avoids standard equilibrium considerations and the well-known problems of Boltzmann Brains and the youngness paradox. We also give various definitions of the computational complexity of a cosmology, and argue that there are only a few natural complexity classes.
Numerical simulations of merging black holes for gravitational-wave astronomy
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey
2014-03-01
Gravitational waves from merging binary black holes (BBHs) are among the most promising sources for current and future gravitational-wave detectors. Accurate models of these waves are necessary to maximize the number of detections and our knowledge of the waves' sources; near the time of merger, the waves can only be computed using numerical-relativity simulations. For optimal application to gravitational-wave astronomy, BBH simulations must achieve sufficient accuracy and length, and all relevant regions of the BBH parameter space must be covered. While great progress toward these goals has been made in the almost nine years since BBH simulations became possible, considerable challenges remain. In this talk, I will discuss current efforts to meet these challenges, and I will present recent BBH simulations produced using the Spectral Einstein Code, including a catalog of publicly available gravitational waveforms [black-holes.org/waveforms]. I will also discuss simulations of merging black holes with high mass ratios and with spins nearly as fast as possible, the most challenging regions of the BBH parameter space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potchen, E.J.; Harris, G.I.; Gift, D.A.; Reinhard, D.K.
Results are reported of the final phase of the study effort generally titled Evaluative Studies in Nuclear Medicine Research. The previous work is reviewed and extended to an assessment providing perspectives on medical applications of positron emission tomographic (PET) systems, their technological context, and the related economic and marketing environment. Methodologies developed and used in earlier phases of the study were continued, but specifically extended to include solicitation of opinion from commercial organizations deemed to be potential developers, manufacturers, and marketers of PET systems. Several factors which influence the demand for clinical uses of PET are evaluated and discussed. The recent Federal funding of applied research with PET systems is found to be a necessary and encouraging step toward determining whether PET is a powerful research tool limited to research, or whether it also presents major clinical utility. A comprehensive, updated bibliography of current literature related to the development, applications, and economic considerations of PET technology is appended.
Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree-Fock.
Tamayo-Mendoza, Teresa; Kreisbeck, Christoph; Lindh, Roland; Aspuru-Guzik, Alán
2018-05-23
Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree-Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
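The AD principle the abstract relies on can be sketched in a few lines of plain Python using forward-mode dual numbers. This is only an illustrative sketch of how AD yields machine-precision derivatives without hand-derived formulas; the toy "energy" function and its Gaussian-exponent parameter are made up, not DiffiQult's actual code.

```python
import math

# Minimal forward-mode automatic differentiation via dual numbers.
# Each Dual carries a value and its derivative part; arithmetic on Duals
# propagates derivatives exactly (to machine precision), which is the core
# idea behind AD frameworks used in fully variational quantum chemistry.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def exp(x):
    # Chain rule for exp, applied to the derivative part.
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

def grad(f):
    # Derivative of f with respect to its scalar argument.
    return lambda x: f(Dual(x, 1.0)).der

# Toy "energy" depending on a Gaussian basis exponent alpha (illustrative):
# E(alpha) = alpha * exp(-alpha), whose gradient vanishes at alpha = 1.
energy = lambda a: a * exp(-1.0 * a)
dE = grad(energy)
print(dE(1.0))  # → 0.0 at the optimum
```

A basis-set optimizer like the one described would follow such gradients (in practice over many parameters at once, with reverse-mode AD) rather than hand-coded analytical derivatives.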
NASA Astrophysics Data System (ADS)
Craney, Chris; Mazzeo, April; Lord, Kaye
1996-07-01
During the past five years the nation's concern for science education has expanded from a discussion about the future supply of Ph.D. scientists and its impact on the nation's scientific competitiveness to the broader consideration of the science education available to all students. Efforts to improve science education have led many authors to suggest greater collaboration between high school science teachers and their college/university colleagues. This article reviews the experience and outcomes of the Teachers + Occidental = Partnership in Science (TOPS) van program operating in the Los Angeles Metropolitan area. The program emphasizes an extensive ongoing staff development, responsiveness to teachers' concerns, technical and on-site support, and sustained interaction between participants and program staff. Access to modern technology, including computer-driven instruments and commercial data analysis software, coupled with increased teacher content knowledge has led to empowerment of teachers and changes in student interest in science. Results of student and teacher questionnaires are reviewed.
NASA Technical Reports Server (NTRS)
Bates, Harry
1990-01-01
A number of optical communication lines are now in use at the Kennedy Space Center (KSC) for the transmission of voice, computer data, and video signals. Presently, all of these channels utilize a single carrier wavelength centered near 1300 nm. The theoretical bandwidth of the fiber far exceeds the utilized capacity, although practical considerations limit the usable bandwidth. The fibers have the capability of transmitting a multiplicity of signals simultaneously in each of two separate bands (1300 and 1550 nm). Thus, in principle, the number of transmission channels can be increased without installing new cable if some means of wavelength division multiplexing (WDM) can be utilized. The main goal of these experiments was to demonstrate a factor of 2 increase in bandwidth utilization by having two channels share the same fiber, in both a unidirectional configuration and a bidirectional mode of operation. Both single-mode and multimode fiber are installed at KSC; the great majority is multimode, so this effort concentrated on multimode systems.
Aircraft Turbofan Engine Health Estimation Using Constrained Kalman Filtering
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
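The structure described above, a standard Kalman filter followed by a quadratic program that enforces the constraints, can be sketched for the simplest case. For a single state with a lower bound and identity weighting, the constrained QP solution reduces to projecting (clipping) the unconstrained estimate onto the feasible set. The scalar model and its parameters below are illustrative, not the 16-state turbofan model of the paper.

```python
# Scalar Kalman filter with a state inequality constraint x >= x_min.
# Illustrative sketch: with one state and identity weighting, the quadratic
# program that enforces the constraint reduces to a projection (clipping)
# of the unconstrained posterior estimate.
def constrained_kf(zs, x0=0.0, p0=1.0, a=1.0, q=0.01, r=0.25, x_min=0.0):
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict step
        x, p = a * x, a * p * a + q
        # Update step (standard Kalman gain)
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        # Constrain: project onto the feasible set {x : x >= x_min}
        x = max(x, x_min)
        estimates.append(x)
    return estimates

# Noisy measurements of a nonnegative quantity (e.g., a health parameter):
est = constrained_kf([0.3, -0.2, 0.1, -0.4])
print(est)  # every estimate respects the x >= 0 constraint
```

With a full state vector and general linear inequality constraints, the projection step becomes a genuine QP solved at each update, which is the source of the extra computational effort the abstract mentions.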
NASA Astrophysics Data System (ADS)
Hontani, Hidekata; Higuchi, Yuya
In this article, we propose a vehicle positioning method that can estimate the positions of cars even in areas where GPS is not available. For the estimation, each car measures the relative distance to the car running in front of it, communicates the measurement to other cars, and uses the received measurements to estimate its own position. In order to estimate the position even when measurements arrive with time delay, we employ time-delay-tolerant Kalman filtering. A car-to-car communication system is assumed for sharing the measurements, so measurements sent from farther cars are received with larger time delay, and the accuracy of the estimates for farther cars becomes worse. Hence, the proposed method manages only the states of nearby cars to reduce computing effort. The authors simulated the proposed filtering method and found that it estimates the positions of nearby cars as accurately as distributed Kalman filtering.
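One standard way to make a Kalman filter tolerant of delayed (out-of-sequence) measurements is to keep a short history, slot a late measurement into its original time step, and re-run the filter from there. The scalar sketch below illustrates that idea only; the model, parameters, and class names are invented for illustration and are not the authors' implementation.

```python
# Handling an out-of-sequence (delayed) measurement by slotting it into the
# filter history at its timestamp and re-running the stored steps.
# Minimal scalar sketch; a real system would replay only from the stamped
# step forward rather than from the start.
def kf_step(x, p, z, a=1.0, q=0.01, r=0.25):
    x, p = a * x, a * p * a + q            # predict
    if z is not None:                       # update only if a measurement exists
        k = p / (p + r)
        x, p = x + k * (z - x), (1 - k) * p
    return x, p

class DelayTolerantKF:
    def __init__(self, x0=0.0, p0=1.0):
        self.x0, self.p0 = x0, p0
        self.log = []                       # measurement (or None) per time step

    def step(self, z=None):
        self.log.append(z)
        return self._replay()

    def late_measurement(self, t, z):
        self.log[t] = z                     # insert into the past...
        return self._replay()               # ...and re-filter the history

    def _replay(self):
        x, p = self.x0, self.p0
        for z in self.log:
            x, p = kf_step(x, p, z)
        return x, p

kf = DelayTolerantKF()
kf.step(0.5); kf.step(None); kf.step(0.4)   # step 1's measurement was delayed
x_late, _ = kf.late_measurement(1, 0.45)    # it arrives later and is absorbed
```

Because replay cost grows with the history length, limiting the filter to nearby cars (whose delays, and hence histories, are short) directly reduces computing effort, as the abstract argues.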
Engineering Software Suite Validates System Design
NASA Technical Reports Server (NTRS)
2007-01-01
EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDAstar-created models. Initial commercialization for EDAstar included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.
A hierarchy for modeling high speed propulsion systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Deabreu, Alex
1991-01-01
General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery, such as inlets, ramjets, and scramjets. The discussion is separated into four areas: (1) computational fluid dynamics models for the entire nonlinear system or high order nonlinear models; (2) high order linearized models derived from fundamental physics; (3) low order linear models obtained from the other high order models; and (4) low order nonlinear models (order here refers to the number of dynamic states). Included in the discussion are any special considerations based on the relevant control system designs. The methods discussed are for the quasi-one-dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, including moving normal shocks, hammershocks, simple subsonic combustion via heat addition, temperature dependent gases, detonations, and thermal choking. The report also contains a comprehensive list of papers and theses generated by this grant.
Bell, Andrew S; Bradley, Joseph; Everett, Jeremy R; Knight, Michelle; Loesel, Jens; Mathias, John; McLoughlin, David; Mills, James; Sharp, Robert E; Williams, Christine; Wood, Terence P
2013-05-01
The screening files of many large companies, including Pfizer, have grown considerably due to internal chemistry efforts, company mergers and acquisitions, external contracted synthesis, or compound purchase schemes. In order to screen the targets of interest in a cost-effective fashion, we devised an easy-to-assemble, plate-based diversity subset (PBDS) that represents almost the entire computed chemical space of the screening file whilst comprising only a fraction of the plates in the collection. In order to create this file, we developed new design principles for the quality assessment of screening plates: the Rule of 40 (Ro40) and a plate selection process that ensured excellent coverage of both library chemistry and legacy chemistry space. This paper describes the rationale, design, construction, and performance of the PBDS, which has evolved into the standard paradigm for singleton (one compound per well) high-throughput screening in Pfizer since its introduction in 2006.
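Plate-based subset design of this kind is, at its core, a coverage problem: choose whole plates until the selected subset spans as much of the computed chemical space as possible. The greedy sketch below illustrates that generic idea with invented plate IDs and cluster labels; it is not Pfizer's actual Rule-of-40 procedure, whose quality criteria are described in the paper.

```python
# Greedy plate selection: repeatedly pick the plate that adds the most
# not-yet-covered chemistry clusters, until the chosen plates reach a
# target fraction of the whole computed chemical space.
# Generic sketch of plate-based diversity subset design, with made-up data.
def select_plates(plates, target_coverage=0.95):
    plates = dict(plates)                      # don't mutate the caller's dict
    all_clusters = set().union(*plates.values())
    covered, chosen = set(), []
    while len(covered) / len(all_clusters) < target_coverage:
        # Plate contributing the most new clusters wins this round.
        best = max(plates, key=lambda p: len(plates[p] - covered))
        if not plates[best] - covered:
            break                              # no remaining plate adds anything
        chosen.append(best)
        covered |= plates.pop(best)
    return chosen, covered

plates = {                                     # plate id -> chemistry clusters on it
    "P1": {1, 2, 3}, "P2": {3, 4}, "P3": {5, 6, 7, 8}, "P4": {1, 8},
}
chosen, covered = select_plates(plates, target_coverage=1.0)
print(chosen)  # → ['P3', 'P1', 'P2']
```

Selecting whole plates rather than individual wells is what keeps the subset "easy to assemble": no cherry-picking of compounds from across the collection is required.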
Model reduction of multiscale chemical langevin equations: a numerical case study.
Sotiropoulos, Vassilios; Contou-Carrere, Marie-Nathalie; Daoutidis, Prodromos; Kaznessis, Yiannis N
2009-01-01
Two very important characteristics of biological reaction networks need to be considered carefully when modeling these systems. First, models must account for the inherent probabilistic nature of systems far from the thermodynamic limit. Often, biological systems cannot be modeled with traditional continuous-deterministic models. Second, models must take into consideration the disparate spectrum of time scales observed in biological phenomena, such as slow transcription events and fast dimerization reactions. In the last decade, significant efforts have been expended on the development of stochastic chemical kinetics models to capture the dynamics of biomolecular systems, and on the development of robust multiscale algorithms, able to handle stiffness. In this paper, the focus is on the dynamics of reaction sets governed by stiff chemical Langevin equations, i.e., stiff stochastic differential equations. These are particularly challenging systems to model, requiring prohibitively small integration step sizes. We describe and illustrate the application of a semianalytical reduction framework for chemical Langevin equations that results in significant gains in computational cost.
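The step-size restriction the abstract refers to can be seen on the simplest stiff stochastic differential equation. The sketch below integrates dX = -lam*X dt + sigma dW with explicit Euler-Maruyama: the scheme is stable only for dt < 2/lam, so a fast (stiff) rate forces prohibitively small steps. The equation and parameters are illustrative, not the reaction networks studied in the paper.

```python
import random, math

# Explicit Euler-Maruyama on a stiff linear SDE, dX = -lam*X dt + sigma dW.
# Illustrates why stiff chemical Langevin equations force tiny explicit
# time steps: the update x <- x*(1 - lam*dt) + noise diverges when
# |1 - lam*dt| > 1, i.e., when dt > 2/lam.
def euler_maruyama(lam, sigma, x0, dt, n, rng):
    x = x0
    for _ in range(n):
        x += -lam * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(0)
lam = 1000.0                                                # fast (stiff) rate
stable   = euler_maruyama(lam, 0.1, 1.0, 0.001, 5000, rng)  # dt < 2/lam: ok
unstable = euler_maruyama(lam, 0.1, 1.0, 0.01, 50, rng)     # dt > 2/lam: blows up
```

Reduction frameworks like the one described sidestep this by treating the fast components semianalytically, leaving only the slow dynamics to be integrated with a reasonable step.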
Plasmonic hot carrier dynamics in solid-state and chemical systems for energy conversion
Narang, Prineha; Sundararaman, Ravishankar; Atwater, Harry A.
2016-06-11
Surface plasmons provide a pathway to efficiently absorb and confine light in metallic nanostructures, thereby bridging photonics to the nano scale. The decay of surface plasmons generates energetic 'hot' carriers, which can drive chemical reactions or be injected into semiconductors for nano-scale photochemical or photovoltaic energy conversion. Novel plasmonic hot carrier devices and architectures continue to be demonstrated, but the complexity of the underlying processes makes a complete microscopic understanding of all the mechanisms and design considerations for such devices extremely challenging. Here, we review the theoretical and computational efforts to understand and model plasmonic hot carrier devices. We split the problem into three steps: hot carrier generation, transport, and collection, and review theoretical approaches with the appropriate level of detail for each step along with their predictions. As a result, we identify the key advances necessary to complete the microscopic mechanistic picture and facilitate the design of the next generation of devices and materials for plasmonic energy conversion.
Efficient tomography of a quantum many-body system
NASA Astrophysics Data System (ADS)
Lanyon, B. P.; Maier, C.; Holzäpfel, M.; Baumgratz, T.; Hempel, C.; Jurcevic, P.; Dhand, I.; Buyskikh, A. S.; Daley, A. J.; Cramer, M.; Plenio, M. B.; Blatt, R.; Roos, C. F.
2017-12-01
Quantum state tomography is the standard technique for estimating the quantum state of small systems. But its application to larger systems soon becomes impractical as the required resources scale exponentially with the size. Therefore, considerable effort is dedicated to the development of new characterization tools for quantum many-body states. Here we demonstrate matrix product state tomography, which is theoretically proven to allow for the efficient and accurate estimation of a broad class of quantum states. We use this technique to reconstruct the dynamical state of a trapped-ion quantum simulator comprising up to 14 entangled and individually controlled spins: a size far beyond the practical limits of quantum state tomography. Our results reveal the dynamical growth of entanglement and describe its complexity as correlations spread out during a quench: a necessary condition for future demonstrations of better-than-classical performance. Matrix product state tomography should therefore find widespread use in the study of large quantum many-body systems and the benchmarking and verification of quantum simulators and computers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, John M.; Coffin, Peter; Robbins, Brian A.
Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
NASA Astrophysics Data System (ADS)
Englund, C. E.; Reeves, D. L.; Shingledecker, C. A.; Thorne, D. R.; Wilson, K. P.
1987-02-01
The Unified Tri-Service Cognitive Performance Assessment Battery (UTC-PAB) represents the primary metric for a Level 2 evaluation of cognitive performance in the JWGD3 MILPERF chemical defense biomedical drug screening program. Emphasis for UTC-PAB development has been on the standardization of test batteries across participating laboratories with respect to content, computer-based administration, test scoring, and data formatting. This effort has produced a 25-test UTC-PAB that represents the consolidation and unification of independent developments by the Tri-service membership. Test selection was based on established test validity and relevance to military performance. Sensitivity to the effects of hostile environments and sustained operations was also a consideration in test selection. Information processing, decision making, perception, and mental workload capacity are among the processes and abilities addressed in the battery. The UTC-PAB represents a dynamic approach to battery development. The nature of the biomedical drugs screened and information from performance-centered task analyses will direct the form of future versions of the battery.
GMPLS-based control plane for optical networks: early implementation experience
NASA Astrophysics Data System (ADS)
Liu, Hang; Pendarakis, Dimitrios; Komaee, Nooshin; Saha, Debanjan
2002-07-01
Generalized Multi-Protocol Label Switching (GMPLS) extends MPLS signaling and Internet routing protocols to provide a scalable, interoperable, distributed control plane, which is applicable to multiple network technologies such as optical cross connects (OXCs), photonic switches, IP routers, ATM switches, SONET and DWDM systems. It is intended to facilitate automatic service provisioning and dynamic neighbor and topology discovery across multi-vendor intelligent transport networks, as well as their clients. Efforts to standardize such a distributed common control plane have reached various stages in several bodies such as the IETF, ITU and OIF. This paper describes the design considerations and architecture of a GMPLS-based control plane that we have prototyped for core optical networks. Functional components of GMPLS signaling and routing are integrated in this architecture with an application layer controller module. Various requirements, including bandwidth, network protection and survivability, traffic engineering, and optimal utilization of network resources, are taken into consideration during path computation and provisioning. Initial experiments with our prototype demonstrate the feasibility and main benefits of GMPLS as a distributed control plane for core optical networks. In addition to such feasibility results, actual adoption and deployment of GMPLS as a common control plane for intelligent transport networks will depend on the successful completion of relevant standardization activities, extensive interoperability testing, as well as the strengthening of appropriate business drivers.
Predicting Pilot Error in NextGen: Pilot Performance Modeling and Validation Efforts
NASA Technical Reports Server (NTRS)
Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky
2012-01-01
We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to NextGen technology and procedures.
A neuronal model of a global workspace in effortful cognitive tasks.
Dehaene, S; Kerszberg, M; Changeux, J P
1998-11-24
A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.
A specific role for serotonin in overcoming effort cost.
Meyniel, Florent; Goodwin, Guy M; Deakin, Jf William; Klinge, Corinna; MacFadyen, Christine; Milligan, Holly; Mullings, Emma; Pessiglione, Mathias; Gaillard, Raphaël
2016-11-08
Serotonin is implicated in many aspects of behavioral regulation. Theoretical attempts to unify the multiple roles assigned to serotonin proposed that it regulates the impact of costs, such as delay or punishment, on action selection. Here, we show that serotonin also regulates other types of action costs such as effort. We compared behavioral performance in 58 healthy humans treated during 8 weeks with either placebo or the selective serotonin reuptake inhibitor escitalopram. The task involved trading handgrip force production against monetary benefits. Participants in the escitalopram group produced more effort and thereby achieved a higher payoff. Crucially, our computational analysis showed that this effect was underpinned by a specific reduction of effort cost, and not by any change in the weight of monetary incentives. This specific computational effect sheds new light on the physiological role of serotonin in behavioral regulation and on the clinical effect of drugs for depression. ISRCTN75872983.
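The computational effect described, a reduced weight on effort cost rather than an increased weight on reward, can be sketched with a standard effort-discounting choice model: value = reward(force) - k * force^2, with a softmax over candidate force levels. The functional forms, parameters, and function name below are illustrative assumptions, not the model fitted in the study.

```python
import math

# Effort-discounting choice sketch: an agent trades grip force against
# payoff, V(f) = reward(f) - k * f**2, and picks a force level by softmax.
# Lowering the effort-cost weight k (the computational effect the study
# attributes to escitalopram) shifts choices toward higher forces even
# though the reward term is unchanged.
def softmax_expected_force(k, beta=5.0):
    forces = [i / 10 for i in range(11)]          # candidate forces 0..1
    values = [f - k * f ** 2 for f in forces]     # reward linear in force
    weights = [math.exp(beta * v) for v in values]
    z = sum(weights)
    return sum(f * w / z for f, w in zip(forces, weights))

high_cost = softmax_expected_force(k=1.5)   # placebo-like effort cost
low_cost  = softmax_expected_force(k=0.5)   # reduced effort cost
```

Here the reward weighting is identical in both calls, so any difference in produced force is attributable entirely to the effort-cost parameter, mirroring the dissociation the computational analysis reports.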
DARPA-funded efforts in the development of novel brain-computer interface technologies.
Miranda, Robbin A; Casebeer, William D; Hein, Amy M; Judy, Jack W; Krotkov, Eric P; Laabs, Tracy L; Manzo, Justin E; Pankratz, Kent G; Pratt, Gill A; Sanchez, Justin C; Weber, Douglas J; Wheeler, Tracey L; Ling, Geoffrey S F
2015-04-15
The Defense Advanced Research Projects Agency (DARPA) has funded innovative scientific research and technology developments in the field of brain-computer interfaces (BCI) since the 1970s. This review highlights some of DARPA's major advances in the field of BCI, particularly those made in recent years. Two broad categories of DARPA programs are presented with respect to the ultimate goals of supporting the nation's warfighters: (1) BCI efforts aimed at restoring neural and/or behavioral function, and (2) BCI efforts aimed at improving human training and performance. The programs discussed are synergistic and complementary to one another, and, moreover, promote interdisciplinary collaborations among researchers, engineers, and clinicians. Finally, this review includes a summary of some of the remaining challenges for the field of BCI, as well as the goals of new DARPA efforts in this domain. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Selig, Judith A.; And Others
This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programmed text on basic ophthalmology--onto a computer for subsequent information retrieval and computer-assisted…
ERIC Educational Resources Information Center
Gates, Ann Quiroz; Hug, Sarah; Thiry, Heather; Alo, Richard; Beheshti, Mohsen; Fernandez, John; Rodriguez, Nestor; Adjouadi, Malek
2011-01-01
Hispanics have the highest growth rates among all groups in the U.S., yet they remain considerably underrepresented in computing careers and in the numbers who obtain advanced degrees. Hispanics constituted about 7% of undergraduate computer science and computer engineering graduates and 1% of doctoral graduates in 2007-2008. The small number of…
Computer Technology and Social Issues.
ERIC Educational Resources Information Center
Garson, G. David
Computing involves social issues and political choices. Issues such as privacy, computer crime, gender inequity, disemployment, and electronic democracy versus "Big Brother" are addressed in the context of efforts to develop a national public policy for information technology. A broad range of research and case studies are examined in an…
Advances in computational design and analysis of airbreathing propulsion systems
NASA Technical Reports Server (NTRS)
Klineberg, John M.
1989-01-01
The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.
Switching from Computer to Microcomputer Architecture Education
ERIC Educational Resources Information Center
Bolanakis, Dimosthenis E.; Kotsis, Konstantinos T.; Laopoulos, Theodore
2010-01-01
In the last decades, the technological and scientific evolution of the computing discipline has been widely affecting research in software engineering education, which nowadays advocates more enlightened and liberal ideas. This article reviews cross-disciplinary research on a computer architecture class in consideration of its switching to…
Renovating, Building, Expanding...Trying to Catch Up.
ERIC Educational Resources Information Center
Educational Record, 1989
1989-01-01
A collection of captioned photographs illustrates the range of campus construction needs, projects, and considerations in current efforts to catch up with the results of deferred maintenance and improvement. (MSE)
Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen
2006-01-01
The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.
Promoting pro-environmental action in climate change deniers
NASA Astrophysics Data System (ADS)
Bain, Paul G.; Hornsey, Matthew J.; Bongiorno, Renata; Jeffries, Carla
2012-08-01
A sizeable (and growing) proportion of the public in Western democracies deny the existence of anthropogenic climate change. It is commonly assumed that convincing deniers that climate change is real is necessary for them to act pro-environmentally. However, the likelihood of `conversion' using scientific evidence is limited because these attitudes increasingly reflect ideological positions. An alternative approach is to identify outcomes of mitigation efforts that deniers find important. People have strong interests in the welfare of their society, so deniers may act in ways supporting mitigation efforts where they believe these efforts will have positive societal effects. In Study 1, climate change deniers (N=155) intended to act more pro-environmentally where they thought climate change action would create a society where people are more considerate and caring, and where there is greater economic/technological development. Study 2 (N=347) replicated this experimentally, showing that framing climate change action as increasing consideration for others, or improving economic/technological development, led to greater pro-environmental action intentions than a frame emphasizing avoiding the risks of climate change. To motivate deniers' pro-environmental actions, communication should focus on how mitigation efforts can promote a better society, rather than focusing on the reality of climate change and averting its risks.
2006-05-25
This places it somewhere between the Structuralist and Dependency schools. International Political Economy theory balances between the Marxist and...does not purport to provide that explanation, but rather seeks to provide considerations for study in developmental theory and for United States...of globalization. Historic and contemporary developmental theories are insufficient in that they fail to account for societal characteristics in
Synthetic Vision Systems - Operational Considerations Simulation Experiment
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.
2007-01-01
Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.
Synthetic vision systems: operational considerations simulation experiment
NASA Astrophysics Data System (ADS)
Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.
2007-04-01
Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.
Competition between protein folding and aggregation: A three-dimensional lattice-model simulation
NASA Astrophysics Data System (ADS)
Bratko, D.; Blanch, H. W.
2001-01-01
Aggregation of protein molecules resulting in the loss of biological activity and the formation of insoluble deposits represents a serious problem for the biotechnology and pharmaceutical industries and in medicine. Considerable experimental and theoretical efforts are being made in order to improve our understanding of, and ability to control, the process. In the present work, we describe a Monte Carlo study of a multichain system of coarse-grained model proteins akin to lattice models developed for simulations of protein folding. The model is designed to examine the competition between intramolecular interactions leading to the native protein structure, and intermolecular association, resulting in the formation of aggregates of misfolded chains. Interactions between the segments are described by a variation of the Go potential [N. Go and H. Abe, Biopolymers 20, 1013 (1981)] that extends the recognition between attracting types of segments to pairs on distinct chains. For the particular model we adopt, the global free energy minimum of a pair of protein molecules corresponds to a dimer of native proteins. When three or more molecules interact, clusters of misfolded chains can be more stable than aggregates of native folds. A considerable fraction of native structure, however, is preserved in these cases. Rates of conformational changes rapidly decrease with the size of the protein cluster. Within the timescale accessible to computer simulations, the folding-aggregation balance is strongly affected by kinetic considerations. Both the native form and aggregates can persist in metastable states, even if conditions such as temperature or concentration favor a transition to an alternative form. Refolding yield can be affected by the presence of an additional polymer species mimicking the function of a molecular chaperone.
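The Monte Carlo moves in such lattice simulations are typically accepted or rejected with the standard Metropolis criterion. As a minimal sketch of that kernel (the temperature units and move energetics below are illustrative, not the paper's actual Go-potential parameters):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random):
    """Metropolis acceptance rule for a trial lattice move: accept
    unconditionally if the move lowers the energy, otherwise accept
    with Boltzmann probability exp(-dE/T) (units with k_B = 1)."""
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

# Downhill moves are always accepted; uphill moves only sometimes,
# which is what lets chains escape misfolded or aggregated traps.
```

At low temperature the acceptance probability for uphill moves collapses, which is consistent with the abstract's observation that both native folds and aggregates can persist in metastable states on simulation timescales.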
Experimental Evaluation and Workload Characterization for High-Performance Computer Architectures
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.
1995-01-01
This research is conducted in the context of the Joint NSF/NASA Initiative on Evaluation (JNNIE). JNNIE is an inter-agency research program that goes beyond typical benchmarking to provide in-depth evaluations and understanding of the factors that limit the scalability of high-performance computing systems. Many NSF and NASA centers have participated in the effort. Our research effort was an integral part of implementing JNNIE in the NASA ESS grand challenge applications context. Our research work under this program was composed of three distinct but related activities: the evaluation of NASA ESS high-performance computing testbeds using the wavelet decomposition application; evaluation of NASA ESS testbeds using astrophysical simulation applications; and the development of an experimental model for workload characterization for understanding workload requirements. In this report, we provide a summary of findings covering all three parts, a list of the publications that resulted from this effort, and three appendices with the details of each study, drawn from a key publication developed under the respective work.
Quadratic Programming for Allocating Control Effort
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2005-01-01
A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
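The quadratic cost the abstract describes, combining control residuals and control effort, has a closed-form minimizer when no inequality constraints are active. A minimal sketch, with an invented effectiveness matrix, command vector, and effort weight (none taken from the NASA program, which additionally enforces actuator limits iteratively):

```python
import numpy as np

# Hypothetical 2-axis, 3-actuator allocation problem.
B = np.array([[1.0, 0.5, -0.2],
              [0.0, 1.0,  0.8]])   # control effectiveness: axes x actuators
d = np.array([1.0, -0.5])          # commanded control (e.g. torques)
lam = 0.1                          # weight trading effort against residual

# Cost J(u) = ||B u - d||^2 + lam ||u||^2 is quadratic in u, so the
# unconstrained minimizer solves the linear system (B^T B + lam I) u = B^T d.
H = B.T @ B + lam * np.eye(B.shape[1])
u = np.linalg.solve(H, B.T @ d)    # optimal effort allocation
residual = B @ u - d               # commanded-minus-achieved control
```

When inequality constraints (e.g. thruster saturation) are present, a quadratic-programming solver iterates over active constraint sets; with no constraints active, the iteration reduces to exactly this linear solve.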
Topical perspective on massive threading and parallelism.
Farber, Robert M
2011-09-01
Unquestionably, computer architectures have undergone a recent and noteworthy paradigm shift that now delivers multi- and many-core systems with tens to many thousands of concurrent hardware processing elements per workstation or supercomputer node. GPGPU (General Purpose Graphics Processing Unit) technology in particular has attracted significant attention as new software development capabilities, namely CUDA (Compute Unified Device Architecture) and OpenCL™, have made it possible for students as well as small and large research organizations to achieve excellent speedup for many applications over more conventional computing architectures. The current scientific literature reflects this shift with numerous examples of GPGPU applications that have achieved one, two, and in some special cases, three orders of magnitude increased computational performance through the use of massive threading to exploit parallelism. Multi-core architectures are also evolving quickly to exploit both massive threading and massive parallelism, as in the 1.3-million-thread Blue Waters supercomputer. The challenge confronting scientists in planning future experimental and theoretical research efforts--be they individual efforts with one computer or collaborative efforts proposing to use the largest supercomputers in the world--is how to capitalize on these new massively threaded computational architectures, especially as not all computational problems will scale to massive parallelism. In particular, the costs associated with restructuring software (and potentially redesigning algorithms) to exploit the parallelism of these multi- and many-threaded machines must be considered along with application scalability and lifespan. This perspective is an overview of the current state of threading and parallelism with some insight into the future. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johanna H Oxstrand; Katya L Le Blanc
The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a more unknown application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes.
Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures as well as what features should not be incorporated. The model also provides a means to identify what new features not present in paper-based procedures need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.
Near-Source Modeling Updates: Building Downwash & Near-Road
The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...
SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
SAMSAN was developed to aid the control system analyst by providing a self-consistent set of computer algorithms that support large order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls-related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large order systems, with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPACK for eigen-analysis and from LINPACK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) Reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) Solution of the generalized eigenvalue problem with balancing and grading, 3) Computation of all zeros of the determinant of a matrix of polynomials, 4) Matrix exponentiation and the evaluation of integrals involving the matrix exponential, with option to first block diagonalize, 5) Root locus and frequency response for single variable transfer functions in the S, Z, and W domains, 6) Several methods of computing zeros for linear systems, and 7) The ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double precision elements.
There is no fixed size limit on any matrix in any SAMSAN algorithm; however, it is generally agreed by experienced users, and in the numerical error analysis literature, that computation with non-symmetric matrices of order greater than about 200 should be avoided or treated with extreme care. SAMSAN attempts to support the needs of application oriented analysis by providing: 1) a methodology with unlimited growth potential, 2) a methodology to ensure that associated documentation is current and available "on demand", 3) a foundation of basic computational algorithms that most controls analysis procedures are based upon, 4) a set of check out and evaluation programs which demonstrate usage of the algorithms on a series of problems which are structured to expose the limits of each algorithm's applicability, and 5) capabilities which support both a priori and a posteriori error analysis for the computational algorithms provided. The SAMSAN algorithms are coded in FORTRAN 77 for batch or interactive execution and have been implemented on a DEC VAX computer under VMS 4.7. An effort was made to assure that the FORTRAN source code was portable, and thus SAMSAN may be adaptable to other machine environments. The documentation is included on the distribution tape or can be purchased separately at the price below. SAMSAN version 2.0 was developed in 1982 and updated to version 3.0 in 1988.
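Capability 4 above, matrix exponentiation, is central to sampled-system analysis because the state-transition matrix of a continuous system is a matrix exponential. A minimal illustrative sketch (SAMSAN itself is FORTRAN 77 and production codes use scaling-and-squaring with Padé approximants; the truncated Taylor series below is for exposition only):

```python
import numpy as np

def expm_taylor(A, terms=40):
    """Matrix exponential via truncated Taylor series:
    exp(A) = sum_k A^k / k!. Illustrative only; accurate in practice
    only for matrices of modest norm."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k        # builds A^k / k! incrementally
        result = result + term
    return result

# Nilpotent example: A @ A = 0, so exp(A) = I + A exactly.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
# expm_taylor(A) -> [[1., 1.], [0., 1.]]
```

For sampled systems, `expm_taylor(A * T)` with sample period `T` would give the discrete-time state-transition matrix of `xdot = A x`.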
76 FR 17424 - President's National Security Telecommunications Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-29
... discuss and vote on the Communications Resiliency Report and receive an update on the cloud computing... Communications Resiliency Report III. Update on the Cloud Computing Scoping Effort IV. Closing Remarks Dated...
Conjugate Gradient Algorithms For Manipulator Simulation
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1991-01-01
Report discusses applicability of conjugate-gradient algorithms to computation of forward dynamics of robotic manipulators. Rapid computation of forward dynamics essential to teleoperation and other advanced robotic applications. Part of continuing effort to find algorithms meeting requirements for increased computational efficiency and speed. Method used for iterative solution of systems of linear equations.
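The conjugate-gradient iteration the report examines solves a symmetric positive-definite linear system iteratively; in forward dynamics the system involves the manipulator mass matrix. A generic sketch of the kernel (the report's specific manipulator formulation is not reproduced here):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b iteratively for symmetric positive-definite A,
    minimizing the A-norm of the error along successive conjugate
    search directions."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x
```

Because each iteration needs only matrix-vector products, the method suits the real-time budgets of teleoperation when a good solution emerges in few iterations.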
Using Information Technology in Mathematics Education.
ERIC Educational Resources Information Center
Tooke, D. James, Ed.; Henderson, Norma, Ed.
This collection of essays examines the history and impact of computers in mathematics and mathematics education from the early computer-assisted instruction efforts through LOGO, the constructivist educational software for K-9 schools developed in the 1980s, to MAPLE, the computer algebra system for mathematical problem solving developed in the…
Cooperation Support in Computer-Aided Authoring and Learning.
ERIC Educational Resources Information Center
Muhlhauser, Max; Rudebusch, Tom
This paper discusses the use of Computer Supported Cooperative Work (CSCW) techniques for computer-aided learning (CAL); the work was started in the context of project Nestor, a joint effort of German universities about cooperative multimedia authoring/learning environments. There are four major categories of cooperation for CAL: author/author,…
2016 Annual Report - Argonne Leadership Computing Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Jim; Papka, Michael E.; Cerny, Beth A.
The Argonne Leadership Computing Facility (ALCF) helps researchers solve some of the world’s largest and most complex problems, while also advancing the nation’s efforts to develop future exascale computing systems. This report presents some of the ALCF’s notable achievements in key strategic areas over the past year.
Computer-Based Training: Capitalizing on Lessons Learned
ERIC Educational Resources Information Center
Bedwell, Wendy L.; Salas, Eduardo
2010-01-01
Computer-based training (CBT) is a methodology for providing systematic, structured learning; a useful tool when properly designed. CBT has seen a resurgence given the serious games movement, which is at the forefront of integrating primarily entertainment computer-based games into education and training. This effort represents a multidisciplinary…
"Computer Science Can Feed a Lot of Dreams"
ERIC Educational Resources Information Center
Educational Horizons, 2014
2014-01-01
Pat Yongpradit is the director of education at Code.org. He leads all education efforts, including professional development and curriculum creation, and he builds relationships with school districts. Pat joined "Educational Horizons" to talk about why it is important to teach computer science--even for non-computer science teachers. This…
ERIC Educational Resources Information Center
Oblinger, Diana
The Internet is an international network linking hundreds of smaller computer networks in North America, Europe, and Asia. Using the Internet, computer users can connect to a variety of computers with little effort or expense. The potential for use by college faculty is enormous. The largest problem faced by most users is understanding what such…
"Intelligent" Computer Assisted Instruction (CAI) Applications. Interim Report.
ERIC Educational Resources Information Center
Brown, John Seely; And Others
Interim work is documented describing efforts to modify computer techniques used to recognize and process English language requests to an instructional simulator. The conversion from a hand-coded to a table-driven technique is described in detail. Other modifications to a simulation-based computer assisted instruction program to allow a gaming…
NASA Astrophysics Data System (ADS)
Foo, Kam Keong
A two-dimensional dual-mode scramjet flowpath is developed and evaluated using the ANSYS Fluent density-based flow solver with various computational grids. Results are obtained for fuel-off, fuel-on non-reacting, and fuel-on reacting cases at different equivalence ratios. A one-step global chemical kinetics hydrogen-air model is used in conjunction with the eddy-dissipation model. Coarse, medium and fine computational grids are used to evaluate grid sensitivity and to investigate a lack of grid independence. Different grid adaptation strategies are performed on the coarse grid in an attempt to emulate the solutions obtained from the finer grids. The goal of this study is to investigate the feasibility of using various mesh adaptation criteria to significantly decrease computational efforts for high-speed reacting flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
Pas, Elise T; Bradshaw, Catherine P
2012-10-01
Although there is an established literature supporting the efficacy of a variety of prevention programs, there has been less empirical work on the translation of such research to everyday practice or when scaled up state-wide. There is a considerable need for more research on factors that enhance implementation of programs and optimize outcomes, particularly in school settings. The current paper examines how the implementation fidelity of an increasingly popular and widely disseminated prevention model called School-wide Positive Behavioral Interventions and Supports (SW-PBIS) relates to student outcomes within the context of a state-wide scale-up effort. Data come from a scale-up effort of SW-PBIS in Maryland; the sample included 421 elementary and middle schools trained in SW-PBIS. SW-PBIS fidelity, as measured by one of three fidelity measures, was found to be associated with higher math achievement, higher reading achievement, and lower truancy. School contextual factors were related to implementation levels and outcomes. Implications for scale-up efforts of behavioral and mental health interventions and measurement considerations are discussed.
Examining the Association Between Implementation and Outcomes
Pas, Elise T.; Bradshaw, Catherine P.
2012-01-01
Although there is an established literature supporting the efficacy of a variety of prevention programs, there has been less empirical work on the translation of such research to everyday practice or when scaled up state-wide. There is a considerable need for more research on factors that enhance implementation of programs and optimize outcomes, particularly in school settings. The current paper examines how the implementation fidelity of an increasingly popular and widely disseminated prevention model called School-wide Positive Behavioral Interventions and Supports (SW-PBIS) relates to student outcomes within the context of a state-wide scale-up effort. Data come from a scale-up effort of SW-PBIS in Maryland; the sample included 421 elementary and middle schools trained in SW-PBIS. SW-PBIS fidelity, as measured by one of three fidelity measures, was found to be associated with higher math achievement, higher reading achievement, and lower truancy. School contextual factors were related to implementation levels and outcomes. Implications for scale-up efforts of behavioral and mental health interventions and measurement considerations are discussed. PMID:22836758
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Leary, Patrick
The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.
Understanding and preventing military suicide.
Bryan, Craig J; Jennings, Keith W; Jobes, David A; Bradley, John C
2012-01-01
The continual rise in the U.S. military's suicide rate since 2004 is one of the most vexing issues currently facing military leaders, mental health professionals, and suicide experts. Despite considerable efforts to address this problem, however, suicide rates have not decreased. The authors consider possible reasons for this frustrating reality, and question common assumptions and approaches to military suicide prevention. They further argue that suicide prevention efforts that more explicitly embrace the military culture and implement evidence-based strategies across the full spectrum of prevention and treatment could improve success. Several recommendations for augmenting current efforts to prevent military suicide are proposed.
The Sounding of the Sirens: Computer Contexts for Writing at the Two-Year College.
ERIC Educational Resources Information Center
Resnick, Paul; Strasma, Kip
1996-01-01
Argues for the importance of a critical view of computers in which educators give careful consideration to the computer's good or bad effects on research and teaching. Asks how the educators can best integrate hardware, software, and the Internet into their professional lives. (TB)
Some Measurement and Instruction Related Considerations Regarding Computer Assisted Testing.
ERIC Educational Resources Information Center
Oosterhof, Albert C.; Salisbury, David F.
The Assessment Resource Center (ARC) at Florida State University provides computer assisted testing (CAT) for approximately 4,000 students each term. Computer capabilities permit a small proctoring staff to administer tests simultaneously to large numbers of students. Programs provide immediate feedback for students and generate a variety of…
ERIC Educational Resources Information Center
Goldsborough, Reid
2009-01-01
It has been said that a computer lets a person make more mistakes faster than any other invention in human history, with the possible exceptions of handguns and tequila. Computers also make mistakes on their own, whether they're glitches, conflicts, bugs, crashes, or failures. Avoiding glitches is considerably less frustrating than trying to fix…
Learner Assessment Methods Using a Computer Based Interactive Videodisc System.
ERIC Educational Resources Information Center
Ehrlich, Lisa R.
This paper focuses on item design considerations faced by instructional designers and evaluators when using computer videodisc delivery systems as a means of assessing learner comprehension and competencies. Media characteristics of various interactive computer/videodisc training systems are briefly discussed as well as reasons for using such…
A Large Scale Computer Terminal Output Controller.
ERIC Educational Resources Information Center
Tucker, Paul Thomas
This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…
Teaching of Real Numbers by Using the Archimedes-Cantor Approach and Computer Algebra Systems
ERIC Educational Resources Information Center
Vorob'ev, Evgenii M.
2015-01-01
Computer technologies and especially computer algebra systems (CAS) allow students to overcome some of the difficulties they encounter in the study of real numbers. The teaching of calculus can be considerably more effective with the use of CAS provided the didactics of the discipline makes it possible to reveal the full computational potential of…
On finite element methods for the Helmholtz equation
NASA Technical Reports Server (NTRS)
Aziz, A. K.; Werschulz, A. G.
1979-01-01
The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.
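For context, the problem class is the Helmholtz boundary-value problem, which in standard notation reads as follows (a sketch only; the paper's specific domain, data, and boundary conditions are not reproduced here, and the impedance-type boundary condition shown is simply a common choice):

```latex
\nabla^2 u + k^2 u = f \quad \text{in } \Omega, \qquad
\frac{\partial u}{\partial n} - i k u = g \quad \text{on } \partial\Omega
```

Here $k$ is the wavenumber; the two-stage method referenced above aims to recover $\nabla u$ with the same order of accuracy as $u$ itself.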
Computational mechanics needs study
NASA Technical Reports Server (NTRS)
Griffin, O. Hayden, Jr.
1993-01-01
In order to assess the needs in computational mechanics over the next decade, we formulated a questionnaire and contacted computational mechanics researchers and users in industry, government, and academia. As expected, we found a wide variety of computational mechanics usage and research. This report outlines the activity discussed with those contacts, as well as that in our own organizations. It should be noted that most of the contacts were made before the recent decline of the defense industry; therefore, areas which are strongly defense-oriented may decrease in relative importance. To facilitate updating of this study, names of a few key researchers in each area are included as starting points for future literature surveys. These lists are not intended to represent the persons doing the best research in each area, nor are they intended to be comprehensive; they are offered simply as starting points for future literature searches. Overall, there is currently broad activity in computational mechanics in this country, with the breadth and depth increasing as more sophisticated software and faster computers become more widely available. The needs and desires of workers in this field are as diverse as their backgrounds and organizational products. There is some degree of software development in nearly every organization that has a research component in its mission, although the level of activity varies widely from one organization to another. There is also considerable use of commercial software in almost all organizations. In most industrial research organizations, very little software development is contracted out; most is done in-house, using a mixture of funding sources. Government agencies vary widely in their ratio of in-house to contracted-out development. There is a considerable amount of experimental verification in most, but not all, organizations.
Generally, the amount of experimental verification is greater than we expected. Of all the survey contacts, only one or two believe that the resources allocated to them are sufficient; most do not, and some believe they have only half the resources they need. Some see their resource deficits as short-term, while others see them as a trend that will continue or perhaps worsen. The pessimism is strongest in the defense and aerospace industry. Considering only the nonlinear development efforts, there appears to be an even mix of geometric and material nonlinearity. There is little particular emphasis on linear analysis except for extending current analysis capabilities to larger problems. The primary exception is concern about modeling of composites, where proven methodologies have trailed element and computer hardware development. Most of the people we spoke to use finite element techniques, but some finite difference and boundary element work is ongoing. There is also some interest in multiple methods: coupling of finite elements and boundary elements appears to be of high interest, since the two analysis types are complementary.
Molgenis-impute: imputation pipeline in a box.
Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A
2015-08-19
Genotype imputation is an important procedure in current genomic analysis such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the setup and running of all the steps of the imputation process. These steps include genome build liftover, genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute on different locations and imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring, and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands.
It is built on the MOLGENIS-compute workflow framework to enable customization with additional computational steps, or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.
Current research efforts with Bacillus thuringiensis
Normand R. Dubois
1991-01-01
The bioassay of 260 strains of Bacillus thuringiensis (Bt) and 70 commercial preparations shows that regression coefficient estimates may be as critical as LC50 estimates when evaluating them for future consideration.
Utilization of communication technology by patients enrolled in substance abuse treatment
McClure, Erin A.; Acquavita, Shauna; Harding, Emily; Stitzer, Maxine
2012-01-01
Background Technology-based applications represent a promising method for providing efficacious, widely available interventions to substance abuse treatment patients. However, limited access to communication technology (i.e., mobile phones, computers, internet, and e-mail) could significantly impact the feasibility of these efforts, and little is known regarding technology utilization in substance abusing populations. Methods A survey was conducted to characterize utilization of communication technology in 266 urban, substance abuse treatment patients enrolled at eight drug-free, psychosocial or opioid-replacement therapy clinics. Results Survey participants averaged 41 years of age and 57% had a yearly household income of less than $15,000. The vast majority reported access to a mobile phone (91%), and to SMS text messaging (79%). Keeping a consistent mobile phone number and yearly mobile contract was higher for White participants, and also for those with higher education, and enrolled in drug-free, psychosocial treatment. Internet, e-mail, and computer use was much lower (39–45%), with younger age, higher education and income predicting greater use. No such differences existed for the use of mobile phones however. Conclusions Concern regarding the digital divide for marginalized populations appears to be disappearing with respect to mobile phones, but still exists for computer, internet, and e-mail access and use. Results suggest that mobile phone and texting applications may be feasibly applied for use in program-client interactions in substance abuse treatment. Careful consideration should be given to frequent phone number changes, access to technology, and motivation to engage with communication technology for treatment purposes. PMID:23107600
Utilization of communication technology by patients enrolled in substance abuse treatment.
McClure, Erin A; Acquavita, Shauna P; Harding, Emily; Stitzer, Maxine L
2013-04-01
Technology-based applications represent a promising method for providing efficacious, widely available interventions to substance abuse treatment patients. However, limited access to communication technology (i.e., mobile phones, computers, internet, and e-mail) could significantly impact the feasibility of these efforts, and little is known regarding technology utilization in substance abusing populations. A survey was conducted to characterize utilization of communication technology in 266 urban, substance abuse treatment patients enrolled at eight drug-free, psychosocial or opioid-replacement therapy clinics. Survey participants averaged 41 years of age and 57% had a yearly household income of less than $15,000. The vast majority reported access to a mobile phone (91%), and to SMS text messaging (79%). Keeping a consistent mobile phone number and yearly mobile contract was higher for White participants, and also for those with higher education, and enrolled in drug-free, psychosocial treatment. Internet, e-mail, and computer use was much lower (39-45%), with younger age, higher education and income predicting greater use. No such differences existed for the use of mobile phones however. Concern regarding the digital divide for marginalized populations appears to be disappearing with respect to mobile phones, but still exists for computer, internet, and e-mail access and use. Results suggest that mobile phone and texting applications may be feasibly applied for use in program-client interactions in substance abuse treatment. Careful consideration should be given to frequent phone number changes, access to technology, and motivation to engage with communication technology for treatment purposes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Distributed multitasking ITS with PVM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, W.C.; Halbleib, J.A. Sr.
1995-12-31
Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
Distributed multitasking ITS with PVM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, W.C.; Halbleib, J.A. Sr.
1995-02-01
Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources, and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of MCNP on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron/photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
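The claims above — independent histories, near-linear speedup, and statistical uncertainty falling as histories accumulate — can be illustrated with a minimal sketch. This is a toy tally, not the ITS or MCNP codes; each "batch" stands in for the work one processor would do independently:

```python
import random
import statistics

def run_batch(n_histories, seed):
    """Generate n_histories independent 'particle' scores (a stand-in
    for a real transport tally). Each batch has its own seed and no
    shared state, so batches can run on separate processors with no
    interprocessor communication."""
    rng = random.Random(seed)
    return [rng.random() ** 2 for _ in range(n_histories)]

def tally(batches):
    """Combine batch scores into a mean and its standard error.
    The standard error falls roughly as 1/sqrt(total histories)."""
    scores = [s for batch in batches for s in batch]
    n = len(scores)
    mean = statistics.fmean(scores)
    stderr = statistics.stdev(scores) / n ** 0.5
    return mean, stderr

# Doubling the number of batches (processors) roughly doubles the
# histories per unit wall-clock time, shrinking the uncertainty.
batches = [run_batch(10_000, seed) for seed in range(8)]
mean, stderr = tally(batches)
```

The load-balancing question the paper studies arises when the workstations are heterogeneous: equal-sized batches then finish at different times, so batch sizes (or a work queue) must be adapted to each machine's speed.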
Modeling the Dynamics of Disease States in Depression
Demic, Selver; Cheng, Sen
2014-01-01
Major depressive disorder (MDD) is a common and costly disorder associated with considerable morbidity, disability, and risk for suicide. The disorder is clinically and etiologically heterogeneous. Despite intense research efforts, the response rates of antidepressant treatments are relatively low and the etiology and progression of MDD remain poorly understood. Here we use computational modeling to advance our understanding of MDD. First, we propose a systematic and comprehensive definition of disease states, which is based on a type of mathematical model called a finite-state machine. Second, we propose a dynamical systems model for the progression, or dynamics, of MDD. The model is abstract and combines several major factors (mechanisms) that influence the dynamics of MDD. We study under what conditions the model can account for the occurrence and recurrence of depressive episodes and how we can model the effects of antidepressant treatments and cognitive behavioral therapy within the same dynamical systems model through changing a small subset of parameters. Our computational modeling suggests several predictions about MDD. Patients who suffer from depression can be divided into two sub-populations: a high-risk sub-population that has a high risk of developing chronic depression and a low-risk sub-population, in which patients develop depression stochastically with low probability. The success of antidepressant treatment is stochastic, leading to widely different times-to-remission in otherwise identical patients. While the specific details of our model might be subject to criticism and revision, our approach shows the potential power of computationally modeling depression and the need for different types of quantitative data for understanding depression. PMID:25330102
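The finite-state-machine view of disease states can be sketched abstractly. The state names, transition probabilities, and the two risk groups below are illustrative stand-ins for the kind of model the abstract describes, not the paper's fitted parameters:

```python
import random

# Illustrative disease states; the paper's actual state definition is richer.
STATES = ("well", "depressive_episode", "remission")

# Hypothetical per-time-step transition probabilities for the two
# sub-populations the modeling suggests (high-risk vs. low-risk).
TRANSITIONS = {
    "low_risk": {
        "well": {"well": 0.98, "depressive_episode": 0.02},
        "depressive_episode": {"depressive_episode": 0.70, "remission": 0.30},
        "remission": {"remission": 0.90, "well": 0.08, "depressive_episode": 0.02},
    },
    "high_risk": {
        "well": {"well": 0.85, "depressive_episode": 0.15},
        "depressive_episode": {"depressive_episode": 0.90, "remission": 0.10},
        "remission": {"remission": 0.80, "well": 0.05, "depressive_episode": 0.15},
    },
}

def simulate(risk_group, steps, seed=0):
    """Run the finite-state machine stochastically and return the
    visited state sequence, starting from 'well'."""
    rng = random.Random(seed)
    state = "well"
    history = [state]
    for _ in range(steps):
        r = rng.random()
        cum = 0.0
        for nxt, p in TRANSITIONS[risk_group][state].items():
            cum += p
            if r < cum:
                state = nxt
                break
        history.append(state)
    return history
```

In this framing, a treatment effect is modeled by changing a small subset of the transition probabilities (e.g. raising the episode-to-remission probability), which matches the abstract's description of modeling antidepressants and cognitive behavioral therapy via parameter changes.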
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model with the aim of reducing the computational burden, is key to such research. However, previous studies are generally based on a stand-alone surrogate model and rarely combine multiple methods to sufficiently improve the surrogate's approximation of the simulation model. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals between the outputs of the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
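The set-pair-analysis weighting itself is specific to the paper, but the general ensemble-surrogate pattern — weighting each surrogate by its measured performance on held-out data — can be sketched as follows. The inverse-error weighting and the toy surrogates are illustrative assumptions, not the paper's method:

```python
def ensemble_predict(surrogates, weights, x):
    """Weighted average of the individual surrogate predictions."""
    total = sum(weights)
    return sum(w * s(x) for s, w in zip(surrogates, weights)) / total

def inverse_error_weights(surrogates, test_xs, test_ys):
    """Assign each surrogate a weight inversely proportional to its
    mean absolute error on a held-out test set (a simple stand-in for
    the paper's set-pair-analysis weights)."""
    weights = []
    for s in surrogates:
        mae = sum(abs(s(x) - y) for x, y in zip(test_xs, test_ys)) / len(test_xs)
        weights.append(1.0 / (mae + 1e-12))
    return weights

# Toy example: three crude 'surrogates' approximating f(x) = x^2.
surrogates = [
    lambda x: x * x,         # exact
    lambda x: x * x + 0.1,   # biased
    lambda x: 0.9 * x * x,   # under-scaled
]
xs = [0.1 * i for i in range(11)]
ys = [x * x for x in xs]
w = inverse_error_weights(surrogates, xs, ys)
pred = ensemble_predict(surrogates, w, 0.5)
```

Because the weights are performance-based, the accurate surrogate dominates the ensemble, which is the same rationale the paper gives for preferring an ES model over any stand-alone surrogate.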
Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Axdahl, E. L.
2017-01-01
Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.
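The nonintrusive idea above — sampling uncertain inputs and running the flow solver as a black box — can be sketched minimally. The stand-in solver, input names, and distributions below are illustrative assumptions, not the paper's actual isolator model:

```python
import random
import statistics

def flow_solver(mach, wall_temp):
    """Hypothetical black-box solver stand-in returning a scalar
    quantity of interest. In a real workflow this would launch an
    external CFD code; the UQ driver never looks inside it."""
    return 2.0 * mach + 0.01 * wall_temp

def propagate_uncertainty(solver, n_samples=1000, seed=0):
    """Nonintrusive sampling: draw inputs from their assumed
    distributions, evaluate the black box, collect output statistics."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        mach = rng.gauss(2.5, 0.05)        # aleatoric: random variation
        wall_temp = rng.uniform(290.0, 310.0)  # epistemic: interval bound
        outputs.append(solver(mach, wall_temp))
    return statistics.fmean(outputs), statistics.stdev(outputs)

mean, spread = propagate_uncertainty(flow_solver)
```

Because each sample is an independent solver run, the loop is trivially automated and parallelized, which is what lets this kind of assessment slot into an engineering design workflow without modifying the solver.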
Conrad, Karen S; Jordan, Christopher D; Brown, Kenneth L; Brunold, Thomas C
2015-04-20
5'-deoxyadenosylcobalamin (coenzyme B12, AdoCbl) serves as the cofactor for several enzymes that play important roles in fermentation and catabolism. All of these enzymes initiate catalysis by promoting homolytic cleavage of the cofactor's Co-C bond in response to substrate binding to their active sites. Despite considerable research efforts, the role of the lower axial ligand in facilitating Co-C bond homolysis remains incompletely understood. In the present study, we characterized several derivatives of AdoCbl and its one-electron reduced form, Co(II)Cbl, by using electronic absorption and magnetic circular dichroism spectroscopies. To complement our experimental data, we performed computations on these species, as well as additional Co(II)Cbl analogues. The geometries of all species investigated were optimized using a quantum mechanics/molecular mechanics method, and the optimized geometries were used to compute absorption spectra with time-dependent density functional theory. Collectively, our results indicate that a reduction in the basicity of the lower axial ligand causes changes to the cofactor's electronic structure in the Co(II) state that replicate the effects seen upon binding of Co(II)Cbl to Class I isomerases, which replace the lower axial dimethylbenzimidazole ligand of AdoCbl with a protein-derived histidine (His) residue. Such a reduction of the basicity of the His ligand in the enzyme active site may be achieved through proton uptake by the catalytic triad of conserved residues, DXHXGXK, during Co-C bond homolysis.
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
The Application Portable Parallel Library (APPL) is a subroutine-based message-passing software library intended to provide a consistent interface to the variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application once and then easily moves it from the parallel computer on which it was created to another parallel computer ("parallel computer" here also includes a heterogeneous collection of networked computers). APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
Temporal Imagery. An Approach to Reasoning about Time for Planning and Problem Solving.
1985-10-01
about protections ... 97; 3.6 Hypothesis generation and abductive inference ... 98; 3.7 Facilities for automatic projection and ... events, and simultaneous actions. If you're not careful, you can waste a considerable amount of effort just determining whether or not two points are or ... the planner may construct "some plan," it may also ignore opportunities for merging tasks and consolidating effort. My main objection, however, is
Intraoperative Radiation Therapy: Characterization and Application
1989-03-01
difficult to obtain. Notably, carcinomas of the pancreas, stomach, colon, and rectum, and sarcomas of soft tissue are prime candidates for IORT (2:131...Their pioneering efforts served as the basis for all my work. Mr. John Brohas of the AFIT Model Fabrication Shop aided my efforts considerably by... fabricated to set the collimator jaws to the required 10 cm x 10 cm aperture. The necessary parts are available from Varian. This will help eliminate errors
Hierarchical Architectural Considerations in Econometric Modeling of Manufacturing Systems
1981-06-01
behavioral factors must also be considered. A proposed economic model, to be aligned with ICAM program intentions, should be generic and have the ... relevant to the effort and to identify contractors, if any, involved in economic model development. Due to the nature of involvement of other contractors with the ICAM program office, information which was thought relevant to the initiation of the current effort was in a lag-time and was
41 CFR 109-42.1102-51 - Suspect personal property.
Code of Federal Regulations, 2011 CFR
2011-01-01
... excess. (b) With due consideration to the economic factors involved, every effort shall be made to reduce the level of contamination of excess or surplus personal property to the lowest practicable level...
Because chemicals can adversely affect cognitive function in humans, considerable effort has been made to characterize their effects using animal models. Information from such models will be necessary to: evaluate whether chemicals identified as potentially neurotoxic by screenin...
Nuclear measurement of subgrade moisture.
DOT National Transportation Integrated Search
1973-01-01
The basic consideration in evaluating subgrade moisture conditions under pavements is the selection of a method of determining moisture contents that is sufficiently accurate and can be used with minimal effort, interference with traffic, and recalib...
Validation of Model Simulations of Anvil Cirrus Properties During TWP-ICE: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zipser, Edward J.
2013-05-20
This 3-year grant, with two extensions, resulted in a successful 5-year effort, led by Ph.D. student Adam Varble, to compare cloud resolving model (CRM) simulations with the excellent database obtained during the TWP-ICE field campaign. The objective, largely achieved, was to undertake these comparisons comprehensively and quantitatively, informing the community in ways that go beyond pointing out errors in the models by pointing out ways to improve both cloud dynamics and microphysics parameterizations in future modeling efforts. Under DOE support, Adam Varble, with considerable assistance from Dr. Ann Fridlind and others, entrained scientists who ran some 10 different CRMs and 4 different limited area models (LAMs) using a variety of microphysics parameterizations, to ensure that the conclusions of the study will have considerable generality.
Infrared Algorithm Development for Ocean Observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1997-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.
1988-01-01
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort is summarized. It includes both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
Assessing performance of an Electronic Health Record (EHR) using Cognitive Task Analysis.
Saitwal, Himali; Feng, Xuan; Walji, Muhammad; Patel, Vimla; Zhang, Jiajie
2010-07-01
Many Electronic Health Record (EHR) systems fail to provide user-friendly interfaces due to the lack of systematic consideration of human-centered computing issues. Such interfaces can be improved to provide easy to use, easy to learn, and error-resistant EHR systems to the users. To evaluate the usability of an EHR system and suggest areas of improvement in the user interface. The user interface of the AHLTA (Armed Forces Health Longitudinal Technology Application) was analyzed using the Cognitive Task Analysis (CTA) method called GOMS (Goals, Operators, Methods, and Selection rules) and an associated technique called KLM (Keystroke Level Model). The GOMS method was used to evaluate the AHLTA user interface by classifying each step of a given task into Mental (Internal) or Physical (External) operators. This analysis was performed by two analysts independently and the inter-rater reliability was computed to verify the reliability of the GOMS method. Further evaluation was performed using KLM to estimate the execution time required to perform the given task through application of its standard set of operators. The results are based on the analysis of 14 prototypical tasks performed by AHLTA users. The results show that on average a user needs to go through 106 steps to complete a task. To perform all 14 tasks, they would spend about 22 min (independent of system response time) for data entry, of which 11 min are spent on more effortful mental operators. The inter-rater reliability analysis performed for all 14 tasks was 0.8 (kappa), indicating good reliability of the method. This paper empirically reveals and identifies the following finding related to the performance of AHLTA: (1) large number of average total steps to complete common tasks, (2) high average execution time and (3) large percentage of mental operators. The user interface can be improved by reducing (a) the total number of steps and (b) the percentage of mental effort, required for the tasks. 
2010 Elsevier Ireland Ltd. All rights reserved.
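The KLM technique used above estimates task time by summing per-operator time constants. A minimal estimator is sketched below using the commonly cited operator times from Card, Moran, and Newell; the AHLTA study's exact operator set and timings may differ:

```python
# Standard KLM operator time estimates in seconds (Card, Moran & Newell).
# These published constants are a common default; a given study may
# calibrate its own values.
KLM_TIMES = {
    "K": 0.2,   # keystroke
    "P": 1.1,   # point at a target with the mouse
    "B": 0.1,   # mouse button press or release
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation (the 'effortful mental operator')
}

def klm_estimate(sequence):
    """Estimated execution time for a task encoded as a string of
    operators, e.g. 'MPBMKKKK' = think, point, click, think, type 4 keys."""
    return sum(KLM_TIMES[op] for op in sequence)

# 1.35 + 1.1 + 0.1 + 1.35 + 4 * 0.2 = 4.7 seconds
t = klm_estimate("MPBMKKKK")
```

Summing operator counts this way is also what makes the paper's two improvement levers explicit: fewer total steps shortens the sequence, and fewer M operators removes the most expensive entries from it.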
Moon, Michael C; Greenberg, Roy K; Morales, Jose P; Martin, Zenia; Lu, Qingsheng; Dowdall, Joseph F; Hernandez, Adrian V
2011-04-01
Proximal aortic dissections are life-threatening conditions that require immediate surgical intervention to avert an untreated mortality rate that approaches 50% at 48 hours. Advances in computed tomography (CT) imaging techniques have permitted the increased characterization of aortic dissection that is necessary to assess the design and applicability of new treatment paradigms. All patients presenting during a 2-year period with acute proximal aortic dissections who underwent CT scanning were reviewed in an effort to establish a detailed assessment of their aortic anatomy. Imaging studies were assessed in an effort to document the location of the primary proximal fenestration, the proximal and distal extent of the dissection, and numerous morphologic measurements pertaining to the aortic valve, root, and ascending aorta to determine the potential for an endovascular exclusion of the ascending aorta. During the study period, 162 patients presented with proximal aortic dissections. Digital high-resolution preoperative CT imaging was performed on 76 patients, and 59 scans (77%) were of adequate quality to allow assessment of anatomic suitability for treatment with an endograft. In all cases, the dissection plane was detectable, yet the primary intimal fenestration was identified in only 41% of the studies. Scans showed that 24 patients (32%) appeared to be anatomically amenable to such a repair (absence of valvular involvement, appropriate length and diameter of proximal sealing regions, lack of need to occlude coronary vasculature). Of the 42 scans that were determined not to be favorable for endovascular repair, the most common exclusion finding was the absence of a proximal landing zone (n = 15; 36%). Appropriately protocoled CT imaging provides detailed anatomic information about the aortic root and ascending aorta, allowing the assessment of which dissections have proximal fenestrations that may be amenable to an endovascular repair.
Copyright © 2011 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
Ground states of larger nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieper, S.C.; Wiringa, R.B.; Pandharipande, V.R.
1995-08-01
The methods used for the few-body nuclei require operations on the complete spin-isospin vector; the size of this vector makes such methods impractical for nuclei with A > 8. During the last few years we developed cluster expansion methods that do not require operations on the complete vector. We use the same Hamiltonians as for the few-body nuclei and variational wave functions of form similar to the few-body wave functions. The cluster expansions are made for the noncentral parts of the wave functions and for the operators whose expectation values are being evaluated. The central pair correlations in the wave functions are treated exactly and this requires the evaluation of 3A-dimensional integrals which are done with Monte Carlo techniques. Most of our effort was on {sup 16}O, other p-shell nuclei, and {sup 40}Ca. In 1993 the Mathematics and Computer Science Division acquired a 128-processor IBM SP which has a theoretical peak speed of 16 Gigaflops (GFLOPS). We converted our program to run on this machine. Because of the large memory on each node of the SP, it was easy to convert the program to parallel form with very low communication overhead. Considerably more effort was needed to restructure the program from one oriented towards long vectors for the Cray computers at NERSC to one that makes efficient use of the cache of the RS6000 architecture. The SP made possible complete five-body cluster calculations of {sup 16}O for the first time; previously we could only do four-body cluster calculations. These calculations show that the expectation value of the two-body potential is converging less rapidly than we had thought, while that of the three-body potential is more rapidly convergent; the net result is no significant change to our predicted binding energy for {sup 16}O using the new Argonne v{sub 18} potential and the Urbana IX three-nucleon potential. This result is in good agreement with experiment.
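The 3A-dimensional integrals above are evaluated by Monte Carlo sampling. A minimal sketch of the underlying estimator follows; this is illustrative only (the function names and the plain independent sampler are assumptions for the example; the actual variational calculations generate configurations with Metropolis walks over correlated wave functions):

```python
import math
import random

def monte_carlo_expectation(integrand, dim, n_samples, sampler):
    """Estimate <O> = (1/N) * sum_i O(R_i) over configurations R_i drawn
    from a normalized sampling distribution, with a statistical error
    estimate sigma / sqrt(N)."""
    total = 0.0
    total_sq = 0.0
    for _ in range(n_samples):
        config = sampler(dim)          # one random dim-dimensional point
        val = integrand(config)
        total += val
        total_sq += val * val
    mean = total / n_samples
    variance = total_sq / n_samples - mean * mean
    return mean, math.sqrt(max(variance, 0.0) / n_samples)
```

For example, sampling uniformly on the unit cube and averaging the coordinate sum recovers the exact expectation dim/2 to within the quoted error bar; the cost per sample is independent of the dimension's exponential volume, which is why Monte Carlo is the only practical route at 3A dimensions.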
Embedding impedance approximations in the analysis of SIS mixers
NASA Technical Reports Server (NTRS)
Kerr, A. R.; Pan, S.-K.; Withington, S.
1992-01-01
Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation which assumes a sinusoidal LO voltage at the junction, and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation in which five small-signal sidebands are allowed, but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of omega R(sub N)C for the SIS junctions used. For large omega R(sub N)C, all three approximations approach the eight-harmonic solution. For omega R(sub N)C values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.
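The harmonic-Newton solution referenced above solves the nonlinear circuit equations at the junction by Newton iteration over the harmonic voltage amplitudes. A toy DC analogue of that step is sketched below; the exponential I-V curve, parameter names, and values are stand-ins for illustration, not the SIS quasiparticle characteristic or Tucker's equations:

```python
import math

def newton_node_voltage(v_src, r, i_sat=1e-12, v_t=0.025, tol=1e-12):
    """Solve the single-node circuit equation
        f(V) = I_dev(V) + (V - v_src) / r = 0
    by Newton iteration. A harmonic-Newton mixer analysis applies the
    same Newton step to a vector of harmonic voltage amplitudes."""
    i_dev = lambda v: i_sat * (math.exp(v / v_t) - 1.0)
    di_dv = lambda v: (i_sat / v_t) * math.exp(v / v_t)
    v = 0.0
    for _ in range(100):
        f = i_dev(v) + (v - v_src) / r
        fp = di_dv(v) + 1.0 / r          # Jacobian (scalar here)
        step = f / fp
        v -= step
        if abs(step) < tol:
            break
    return v
```

The point of the approximations compared in the paper is precisely to avoid iterating this kind of solve over many harmonics and sidebands when a truncated frequency set gives adequate accuracy.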
Structural Inference in the Art of Violin Making.
NASA Astrophysics Data System (ADS)
Morse-Fortier, Leonard Joseph
The "secrets" of success of early Italian violins have long been sought. Among their many efforts to reproduce the results of Stradivari, Guarneri, and Amati, luthiers have attempted to order and match natural resonant frequencies in the free violin plates. This tap-tone plate tuning technique is simply an eigenvalue extraction scheme. In the final stages of carving, the violin maker complements considerable intuitive knowledge of violin plate structure and of modal attributes with tap-tone frequency estimates to better understand plate structure and to inform decisions about plate carving and completeness. Examining the modal attributes of violin plates, this work develops and incorporates an impulse-response scheme for modal inference, measures resonant frequencies and modeshapes for a pair of violin plates, and presents modeshapes through a unique computer visualization scheme developed specifically for this purpose. The work explores, through simple examples, questions of how plate modal attributes reflect underlying structure, and questions about the so-called evolution of modeshapes and frequencies through assembly of the violin. Separately, the work develops computer code for a carved, anisotropic plate/shell finite element. Solutions are found to the static displacement and free-vibration eigenvalue problems for an orthotropic plate and are used to verify element accuracy. A violin back plate is then modelled with full consideration of plate thickness and arching; model estimates for modal attributes compare very well with experimentally acquired values. Finally, the modal synthesis technique is applied to predicting the modal attributes of the violin top plate with ribs attached from those of the top plate alone, together with an estimate of rib mass and stiffness. This last analysis serves to verify the modal synthesis method and to quantify its limits of applicability in problems involving severe structural modification.
Conclusions emphasize the importance of a better understanding of the underlying structure, of its relationship to modal attributes, and of improved estimates of wood elasticity.
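The impulse-response scheme amounts to inferring resonant frequencies from the spectrum of a measured tap response. A minimal illustration with a synthetic damped sinusoid and a brute-force discrete Fourier transform (the function name and signal parameters are assumptions for the example; the actual measurement chain and modal fitting are far more involved):

```python
import cmath
import math

def peak_frequency(signal, sample_rate):
    """Estimate the dominant resonant frequency (Hz) of an impulse
    response by locating the magnitude peak of its DFT."""
    n = len(signal)
    best_bin, best_mag = 1, 0.0
    for k in range(1, n // 2):   # skip DC; positive frequencies only
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * sample_rate / n
```

Feeding it a lightly damped 180 Hz sinusoid sampled at 1 kHz returns a peak within one or two frequency bins of 180 Hz; resolving closely spaced plate modes in practice requires longer records and proper modal curve fitting.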
ERIC Educational Resources Information Center
Dasuki, Salihu Ibrahim; Ogedebe, Peter; Kanya, Rislana Abdulazeez; Ndume, Hauwa; Makinde, Julius
2015-01-01
Efforts are being made by universities in developing countries to ensure that their graduates are not left behind in the competitive global information society; thus, they have adopted international computing curricula for their computing degree programs. However, adopting these international curricula seems to be very challenging for developing countries…
Automated computer grading of hardwood lumber
P. Klinkhachorn; J.P. Franklin; Charles W. McMillin; R.W. Conners; H.A. Huber
1988-01-01
This paper describes an improved computer program to grade hardwood lumber. The program was created as part of a system to automate various aspects of the hardwood manufacturing industry. It enhances previous efforts by considering both faces of the board and provides easy application of species dependent rules. The program can be readily interfaced with a computer...
Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"
ERIC Educational Resources Information Center
Romiszowski, Alexander J.
2012-01-01
"Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…
The Use of Computers in the Math Classroom.
ERIC Educational Resources Information Center
Blass, Barbara; And Others
In an effort to increase faculty use and knowledge of computers, Oakland Community College (OCC), in Michigan, developed a Summer Technology Institute (STI), and a Computer Technology Grants (CTG) project beginning in 1989. The STI involved 3-day forums during summers 1989, 1990, and 1991 to expose faculty to hardware and software applications.…
Commentary: It Is Not Only about the Computers--An Argument for Broadening the Conversation
ERIC Educational Resources Information Center
DeWitt, Scott W.
2006-01-01
In 2002 the members of the National Technology Leadership Initiative (NTLI) framed seven conclusions relating to handheld computers and ubiquitous computing in schools. While several of the conclusions are laudable efforts to increase research and professional development, the factual and conceptual bases for this document are seriously flawed.…
The Relationship between Computational Fluency and Student Success in General Studies Mathematics
ERIC Educational Resources Information Center
Hegeman, Jennifer; Waters, Gavin
2012-01-01
Many developmental mathematics programs emphasize computational fluency with the assumption that this is a necessary contributor to student success in general studies mathematics. In an effort to determine which skills are most essential, scores on a computational fluency test were correlated with student success in general studies mathematics at…
NASA Technical Reports Server (NTRS)
Iida, H. T.
1966-01-01
A computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
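One standard device for large-recession ablation problems (not necessarily the procedure of this report) is to immobilize the receding surface with the coordinate change xi = x/s(t), so the shrinking slab always maps onto the same fixed mesh and no grid points need to be dropped. A rough explicit-difference sketch under that assumption, with hypothetical material numbers:

```python
def ablating_slab(alpha=1e-5, s0=0.01, recession_rate=0.01,
                  t_hot=1000.0, t_cold=300.0, nx=21, dt=1e-4, t_end=0.05):
    """March T(xi, t) on a fixed unit mesh, xi = x / s(t).
    Node 0 is the fixed back face (held at t_cold); node nx-1 is the
    receding hot surface (held at t_hot).  The transformed equation is
        dT/dt = (alpha / s^2) T_xixi + (xi * (ds/dt) / s) T_xi,
    with ds/dt = -recession_rate (slab thickness s shrinks)."""
    dxi = 1.0 / (nx - 1)
    temp = [t_cold] * nx
    temp[-1] = t_hot
    t = 0.0
    while t < t_end:
        s = s0 - recession_rate * t        # current slab thickness
        new = temp[:]
        for i in range(1, nx - 1):
            diff = alpha / s**2 * (temp[i+1] - 2.0*temp[i] + temp[i-1]) / dxi**2
            conv = (i * dxi) * (-recession_rate) / s \
                   * (temp[i+1] - temp[i-1]) / (2.0 * dxi)
            new[i] = temp[i] + dt * (diff + conv)
        temp = new
        t += dt
    return temp
```

Because the transformed domain is fixed, the operation count per step is constant regardless of how much material has been removed, which is the kind of saving the abstract describes.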
Computer-Based Simulations for Maintenance Training: Current ARI Research. Technical Report 544.
ERIC Educational Resources Information Center
Knerr, Bruce W.; And Others
Three research efforts that used computer-based simulations for maintenance training were in progress when this report was written: Game-Based Learning, which investigated the use of computer-based games to train electronics diagnostic skills; Human Performance in Fault Diagnosis Tasks, which evaluated the use of context-free tasks to train…
Computational protein design-the next generation tool to expand synthetic biology applications.
Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel
2018-05-02
One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.
Design and implementation of a Windows NT network to support CNC activities
NASA Technical Reports Server (NTRS)
Shearrow, C. A.
1996-01-01
The Manufacturing, Materials, & Processes Technology Division is undergoing dramatic changes to bring its manufacturing practices current with today's technological revolution. The Division is developing Computer Automated Design and Computer Automated Manufacturing (CAD/CAM) capabilities. Resource tracking is under development in the form of an accounting software package called Infisy. These two efforts will bring the division into the 1980's with respect to manufacturing processes. Computer Integrated Manufacturing (CIM) is the final phase of change to be implemented. This document is a qualitative study of a CIM application capable of completing the changes necessary to bring the manufacturing practices into the 1990's. The documentation provided in this qualitative research effort includes an assessment of the current status of manufacturing in the Manufacturing, Materials, & Processes Technology Division, including the software, hardware, network, and mode of operation. The proposed direction of research includes a network design, the computers to be used, the software to be used, machine-to-computer connections, an estimated timeline for implementation, and a cost estimate. Recommendations for the division's improvement include actions to be taken, software to utilize, and computer configurations.
Computational Aerodynamic Simulations of a Spacecraft Cabin Ventilation Fan Design
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2010-01-01
Quieter working environments for astronauts are needed if future long-duration space exploration missions are to be safe and productive. Ventilation and payload cooling fans are known to be dominant sources of noise, with the International Space Station being a good case in point. To address this issue cost effectively, early attention to fan design, selection, and installation has been recommended, leading to an effort by NASA to examine the potential for small-fan noise reduction by improving fan aerodynamic design. As a preliminary part of that effort, the aerodynamics of a cabin ventilation fan designed by Hamilton Sundstrand has been simulated using computational fluid dynamics codes, and the computed solutions analyzed to quantify various aspects of the fan aerodynamics and performance. Four simulations were performed at the design rotational speed: two at the design flow rate and two at off-design flow rates. Following a brief discussion of the computational codes, various aerodynamic- and performance-related quantities derived from the computed flow fields are presented along with relevant flow field details. The results show that the computed fan performance is in generally good agreement with stated design goals.
F‐GHG Emissions Reduction Efforts: FY2015 Supplier Profiles
The Supplier Profiles outlined in this document detail the efforts of large‐area flat panel suppliers to reduce their F‐GHG emissions in manufacturing facilities that make today’s large‐area panels used for products such as TVs and computer monitors.
F‐GHG Emissions Reduction Efforts: FY2016 Supplier Profiles
The Supplier Profiles outlined in this document detail the efforts of large‐area flat panel suppliers to reduce their F‐GHG emissions in manufacturing facilities that make today’s large‐area panels used for products such as TVs and computer monitors.
Applications of computational modeling in ballistics
NASA Technical Reports Server (NTRS)
Sturek, Walter B.
1987-01-01
The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas has traditionally been highly dependent on experimental testing. Considerable emphasis has been placed on the development of computational modeling to augment experimental testing in the development cycle; however, the impact of computational modeling to date has been modest. With the supercomputer resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved, some indication of the degree of success achieved, and the areas of greatest need.
Operating manual for coaxial injection combustion model. [for the space shuttle main engine
NASA Technical Reports Server (NTRS)
Sutton, R. D.; Schuman, M. D.; Chadwick, W. D.
1974-01-01
An operating manual for the coaxial injection combustion model (CICM) is presented as the final report for an eleven-month effort to improve, verify, and document the comprehensive computer program for analyzing the performance of thrust chamber operation with gas/liquid coaxial jet injection. The effort culminated in delivery of an operational FORTRAN IV computer program and associated documentation pertaining to the combustion conditions in the space shuttle main engine. The computer program is structured for compatibility with the standardized Joint Army-Navy-NASA-Air Force (JANNAF) performance evaluation procedure. Use of the CICM in conjunction with the JANNAF procedure allows the analysis of engine systems using coaxial gas/liquid injection.
Use of MCIDAS as an earth science information systems tool
NASA Technical Reports Server (NTRS)
Goodman, H. Michael; Karitani, Shogo; Parker, Karen G.; Stooksbury, Laura M.; Wilson, Gregory S.
1988-01-01
The application of the man computer interactive data access system (MCIDAS) to information processing is examined. The computer systems that interface with the MCIDAS are discussed. Consideration is given to the computer networking of MCIDAS, data base archival, and the collection and distribution of real-time special sensor microwave/imager data.
Learning Oceanography from a Computer Simulation Compared with Direct Experience at Sea
ERIC Educational Resources Information Center
Winn, William; Stahr, Frederick; Sarason, Christian; Fruland, Ruth; Oppenheimer, Peter; Lee, Yen-Ling
2006-01-01
Considerable research has compared how students learn science from computer simulations with how they learn from "traditional" classes. Little research has compared how students learn science from computer simulations with how they learn from direct experience in the real environment on which the simulations are based. This study compared two…
Factors Affecting Teachers' Adoption of Educational Computer Games: A Case Study
ERIC Educational Resources Information Center
Kebritchi, Mansureh
2010-01-01
Even though computer games hold considerable potential for engaging and facilitating learning among today's children, the adoption of modern educational computer games is still meeting significant resistance in K-12 education. The purpose of this paper is to inform educators and instructional designers on factors affecting teachers' adoption of…
Designing a Curriculum for Computer Students in the Community College.
ERIC Educational Resources Information Center
Kolatis, Maria
An overview is provided of the institutional and technological factors to be considered in designing or updating a computer science curriculum at the community college level. After underscoring the importance of the computer in today's society, the paper identifies and discusses the following considerations in curriculum design: (1) the mission of…
Computational techniques in tribology and material science at the atomic level
NASA Technical Reports Server (NTRS)
Ferrante, J.; Bozzolo, G. H.
1992-01-01
Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.
Forty years of collaborative computational crystallography.
Agirre, Jon; Dodson, Eleanor
2018-01-01
A brief overview is provided of the history of collaborative computational crystallography, with an emphasis on the Collaborative Computational Project No. 4. The key steps in its development are outlined, with consideration also given to the underlying reasons which contributed to, and ultimately led to, the unprecedented success of this venture. © 2017 The Protein Society.
Factors Influencing Skilled Use of the Computer Mouse by School-Aged Children
ERIC Educational Resources Information Center
Lane, Alison E.; Ziviani, Jenny M.
2010-01-01
Effective use of computers in education for children requires consideration of individual and developmental characteristics of users. There is limited empirical evidence, however, to guide educational programming when it comes to children and their acquisition of computing skills. This paper reports on the influence of previous experience and…
Computer Software: Copyright and Licensing Considerations for Schools and Libraries. ERIC Digest.
ERIC Educational Resources Information Center
Reed, Mary Hutchings
This digest notes that the terms and conditions of computer software package license agreements control the use of software in schools and libraries, and examines the implications of computer software license agreements for classroom use and for library lending policies. Guidelines are provided for interpreting the Copyright Act, and insuring the…
Handheld Computers: A Boon for Principals
ERIC Educational Resources Information Center
Brazell, Wayne
2005-01-01
As I reflect on my many years as an elementary school principal, I realize how much more effective I would have been if I had owned a wireless handheld computer. This relatively new technology can provide considerable assistance to today's principals and recent advancements have increased its functions and capacity. Handheld computers are…
Design and Curriculum Considerations for a Computer Graphics Program in the Arts.
ERIC Educational Resources Information Center
Leeman, Ruedy W.
This history and state-of-the-art review of computer graphics describes computer graphics programs and proposed programs at Sheridan College (Canada), the Rhode Island School of Design, the University of Oregon, Northern Illinois University, and Ohio State University. These programs are discussed in terms of their philosophy, curriculum, student…
METRO-APEX Volume 2.1: Computer Operator's Manual. Revised.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. COMEX Research Project.
The Computer Operator's Manual is one of a set of twenty-one manuals used in METRO-APEX 1974, a computerized college and professional level, computer-supported, role-play, simulation exercise of a community with "normal" problems. Stress is placed on environmental quality considerations. APEX 1974 is an expansion of APEX--Air Pollution…