Models of optical quantum computing
NASA Astrophysics Data System (ADS)
Krovi, Hari
2017-03-01
I review some work on models of quantum computing, optical implementations of these models, and their associated computational power. In particular, I discuss the circuit model and cluster-state implementations using quantum optics with various encodings, such as dual-rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent-state encoding. I then discuss intermediate models of optical computing such as boson sampling and its variants. Finally, I review some recent work on optical implementations of adiabatic quantum computing and analog optical computing. I also provide a brief description of the relevant aspects of complexity theory needed to understand the results surveyed.
ERIC Educational Resources Information Center
Gattiker, Urs E.; And Others
It is expected that by 1990 the majority of clerical and managerial workers in North America will use computers in their daily work. An integrative model was developed which views quality of work life as an ever changing dimension influenced by computerization and by perception of career success and non-work factors. To test this model, a study…
The Computational and Neural Basis of Cognitive Control: Charted Territory and New Frontiers
ERIC Educational Resources Information Center
Botvinick, Matthew M.; Cohen, Jonathan D.
2014-01-01
Cognitive control has long been one of the most active areas of computational modeling work in cognitive science. The focus on computational models as a medium for specifying and developing theory predates the PDP books, and cognitive control was not one of the areas on which they focused. However, the framework they provided has injected work on…
Enduring Influence of Stereotypical Computer Science Role Models on Women's Academic Aspirations
ERIC Educational Resources Information Center
Cheryan, Sapna; Drury, Benjamin J.; Vichayapai, Marissa
2013-01-01
The current work examines whether a brief exposure to a computer science role model who fits stereotypes of computer scientists has a lasting influence on women's interest in the field. One-hundred undergraduate women who were not computer science majors met a female or male peer role model who embodied computer science stereotypes in appearance…
Collective Computation of Neural Network
1990-03-15
Sciences, Beijing. ABSTRACT: Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer... scientists working in artificial intelligence engineering and neuroscience. The paper introduces the collective computational properties of model neural... vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience, Neural Network, Model
Epistemic Gameplay and Discovery in Computational Model-Based Inquiry Activities
ERIC Educational Resources Information Center
Wilkerson, Michelle Hoda; Shareff, Rebecca; Laina, Vasiliki; Gravel, Brian
2018-01-01
In computational modeling activities, learners are expected to discover the inner workings of scientific and mathematical systems: First elaborating their understandings of a given system through constructing a computer model, then "debugging" that knowledge by testing and refining the model. While such activities have been shown to…
Challenges facing developers of CAD/CAM models that seek to predict human working postures
NASA Astrophysics Data System (ADS)
Wiker, Steven F.
2005-11-01
This paper outlines the need for development of human posture prediction models for Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) design applications in product, facility and work design. Challenges facing developers of posture prediction algorithms are presented and discussed.
Approaches to Classroom-Based Computational Science.
ERIC Educational Resources Information Center
Guzdial, Mark
Computational science includes the use of computer-based modeling and simulation to define and test theories about scientific phenomena. The challenge for educators is to develop techniques for implementing computational science in the classroom. This paper reviews some previous work on the use of simulation alone (without modeling), modeling…
Computer-Based Molecular Modelling: Finnish School Teachers' Experiences and Views
ERIC Educational Resources Information Center
Aksela, Maija; Lundell, Jan
2008-01-01
Modern computer-based molecular modelling opens up new possibilities for chemistry teaching at different levels. This article presents a case study seeking insight into Finnish school teachers' use of computer-based molecular modelling in teaching chemistry, into the different working and teaching methods used, and their opinions about necessary…
ERIC Educational Resources Information Center
Darabi, Aubteen; Nelson, David W.; Meeker, Richard; Liang, Xinya; Boulware, Wilma
2010-01-01
In a diagnostic problem solving operation of a computer-simulated chemical plant, chemical engineering students were randomly assigned to two groups: one studying product-oriented worked examples, the other practicing conventional problem solving. Effects of these instructional strategies on the progression of learners' mental models were examined…
Modeling Teaching with a Computer-Based Concordancer in a TESL Preservice Teacher Education Program.
ERIC Educational Resources Information Center
Gan, Siowck-Lee; And Others
1996-01-01
This study modeled teaching with a computer-based concordancer in a Teaching English-as-a-Second-Language program. Preservice teachers were randomly assigned to work with computer concordancing software or vocabulary exercises to develop word attack skills. Pretesting and posttesting indicated that computer concordancing was more effective in…
Computational modelling of the impact of AIDS on business.
Matthews, Alan P
2007-07-01
An overview of computational modelling of the impact of AIDS on business in South Africa, with a detailed description of the AIDS Projection Model (APM) for companies, developed by the author, and suggestions for further work. Computational modelling of the impact of AIDS on business in South Africa requires modelling of the epidemic as a whole, and of its impact on a company. This paper gives an overview of epidemiological modelling, with an introduction to the Actuarial Society of South Africa (ASSA) model, the most widely used such model for South Africa. The APM produces projections of HIV prevalence, new infections, and AIDS mortality for a company, based on anonymous HIV testing of company employees and on projections from the ASSA model. A smoothed statistical model of the prevalence test data is computed, and the ASSA model projection for each category of employees is then adjusted so that it matches the measured prevalence in the year of testing. FURTHER WORK: Further techniques that could be developed include microsimulation (representing individuals in the computer), scenario planning for testing strategies, and models of the business environment, such as models of entire sectors and mapping of HIV prevalence in time and space based on workplace and community data.
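The calibration step described above can be sketched in a few lines. This is an illustrative toy with made-up numbers, not ASSA or APM output: an external projection of prevalence by year is rescaled so that it reproduces the prevalence measured in the company's year of testing.

```python
# Hypothetical sketch of calibrating an epidemic projection to a measured
# company prevalence; the function name and all numbers are invented.

def calibrate_projection(projection, measured_prevalence, test_year):
    """Scale a {year: prevalence} projection to match a measured value."""
    scale = measured_prevalence / projection[test_year]
    return {year: p * scale for year, p in projection.items()}

# Illustrative numbers only (not ASSA model output).
projection_like = {2005: 0.10, 2006: 0.12, 2007: 0.14}
calibrated = calibrate_projection(projection_like,
                                  measured_prevalence=0.18, test_year=2006)
print(calibrated[2006])  # approximately 0.18, the measured value
```

The real APM additionally smooths the test data statistically and calibrates each employee category separately.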
Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.
2016-01-01
A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.
Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen
2015-08-01
This work presents a surgical training system that incorporates cutting operation of soft tissue, simulated with a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A pre-computed linear elastic model for simulating soft tissue deformation involves computing the compliance matrix a priori from the topological information of the mesh. While this process may take from a few minutes to several hours, depending on the number of vertices in the mesh, it needs to be computed only once and then allows real-time computation of the subsequent soft tissue deformation. However, because the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix, which is computationally expensive.
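The pre-computation idea can be illustrated on a toy system. This is a minimal sketch with an invented two-degree-of-freedom stiffness matrix, not the SOFA implementation: for a linear elastic model, displacements follow from forces through a compliance matrix that is inverted once, offline.

```python
import numpy as np

# Toy illustration of a pre-computed compliance matrix: C = K^{-1} is
# computed once from the mesh stiffness K, after which each frame's
# deformation is a cheap matrix-vector product.

K = np.array([[4.0, -1.0],
              [-1.0, 3.0]])      # invented 2-DOF stiffness matrix
C = np.linalg.inv(K)             # expensive step, done once a priori

f = np.array([1.0, 0.0])         # force applied at run time
u = C @ f                        # real-time deformation, no linear solve
print(u)
```

Cutting the mesh changes K, which is why the paper's contribution is to patch the connectivity encoded in C rather than re-invert.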
A new security model for collaborative environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Deborah; Lorch, Markus; Thompson, Mary
Prevalent authentication and authorization models for distributed systems provide for the protection of computer systems and resources from unauthorized use. The rules and policies that drive the access decisions in such systems are typically configured up front and require trust establishment before the systems can be used. This approach does not work well for computer software that moderates human-to-human interaction. This work proposes a new model for trust establishment and management in computer systems supporting collaborative work. The model supports the dynamic addition of new users to a collaboration with very little initial trust placed in their identity, and supports the incremental building of trust relationships through endorsements from established collaborators. It also recognizes the strength of a user's authentication when making trust decisions. By mimicking the way humans build trust naturally, the model can support a wide variety of usage scenarios. Its particular strength lies in the support for ad-hoc and dynamic collaborations and the ubiquitous access to a Computer Supported Collaboration Workspace (CSCW) system from locations with varying levels of trust and security.
Computational Phenotyping in Psychiatry: A Worked Example.
Schwartenbeck, Philipp; Friston, Karl
2016-01-01
Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology: structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.
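The simulate-then-invert procedure described here can be sketched with a far simpler model than the authors' active-inference scheme. In this hedged toy, choices are generated from a softmax rule with a known inverse temperature, and that parameter is then recovered by maximum likelihood; all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model-inversion pipeline: simulate choice data with a known
# parameter, then estimate it from the simulated behavior.
rng = np.random.default_rng(0)

values = np.array([0.0, 1.0])      # subjective values of two options
beta_true = 2.0                    # inverse temperature used to simulate

def choice_probs(beta):
    z = np.exp(beta * values)
    return z / z.sum()

choices = rng.choice(2, size=500, p=choice_probs(beta_true))

def neg_log_lik(beta):
    p = choice_probs(beta)
    return -np.log(p[choices]).sum()

res = minimize_scalar(neg_log_lik, bounds=(0.01, 10.0), method="bounded")
beta_hat = res.x                   # should land near beta_true = 2.0
print(round(beta_hat, 2))
```

In the paper's full pipeline the same logic runs per subject, with group effects and cross-validation layered on top.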
Study of basic computer competence among public health nurses in Taiwan.
Yang, Kuei-Feng; Yu, Shu; Lin, Ming-Sheng; Hsu, Chia-Ling
2004-03-01
Rapid advances in information technology and media have made distance learning on the Internet possible. This new model of learning allows greater efficiency and flexibility in knowledge acquisition. Since basic computer competence is a prerequisite for this new learning model, this study was conducted to examine the basic computer competence of public health nurses in Taiwan and to explore factors influencing computer competence. A national cross-sectional randomized study was conducted with 329 public health nurses. A questionnaire was used to collect data and was delivered by mail. Results indicate that the basic computer competence of public health nurses in Taiwan still needs to be improved (mean = 57.57 ± 2.83; total score range 26-130). Among the five most frequently used software programs, nurses were most knowledgeable about Word and least knowledgeable about PowerPoint. Stepwise multiple regression analysis revealed eight variables (weekly number of hours spent online at home, weekly amount of time spent online at work, weekly frequency of computer use at work, previous computer training, computer and Internet access at the workplace, job position, education level, and age) that significantly influenced computer competence and together accounted for 39.0% of the variance. In conclusion, greater computer competence, broader educational programs regarding computer technology, and a greater emphasis on computers at work are necessary to increase the usefulness of distance learning via the Internet in Taiwan. Building a user-friendly environment is important in developing this new media model of learning for the future.
Program listing for the REEDM (Rocket Exhaust Effluent Diffusion Model) computer program
NASA Technical Reports Server (NTRS)
Bjorklund, J. R.; Dumbauld, R. K.; Cheney, C. S.; Geary, H. V.
1982-01-01
The program listing for the REEDM Computer Program is provided. A mathematical description of the atmospheric dispersion models, cloud-rise models, and other formulas used in the REEDM model; vehicle and source parameters, other pertinent physical properties of the rocket exhaust cloud and meteorological layering techniques; user's instructions for the REEDM computer program; and worked example problems are contained in NASA CR-3646.
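REEDM's layered cloud-rise and dispersion formulation is documented in NASA CR-3646; as a generic illustration of the kind of atmospheric dispersion formula such models build on, the sketch below evaluates the textbook Gaussian plume equation for ground-level centerline concentration. The function and its parameter values are illustrative assumptions, not REEDM code.

```python
import math

# Generic Gaussian plume sketch (not REEDM): ground-level centerline
# concentration chi downwind of a continuous point source of strength Q,
# wind speed u, dispersion coefficients sigma_y/sigma_z, release height H.

def plume_concentration(Q, u, sigma_y, sigma_z, H):
    """chi(y=0, z=0) = Q / (pi u sy sz) * exp(-H^2 / (2 sz^2))."""
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-H**2 / (2 * sigma_z**2)))

# Invented numbers: 100 g/s source, 5 m/s wind, sigmas in meters.
chi = plume_concentration(Q=100.0, u=5.0, sigma_y=50.0, sigma_z=25.0, H=0.0)
print(chi)
```

Raising the release height H lowers the ground-level concentration, as the exponential term shows.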
Huysmans, Maaike A; Eijckelhof, Belinda H W; Garza, Jennifer L Bruno; Coenen, Pieter; Blatter, Birgitte M; Johnson, Peter W; van Dieën, Jaap H; van der Beek, Allard J; Dennerlein, Jack T
2017-12-15
Alternative techniques to assess physical exposures, such as prediction models, could facilitate more efficient epidemiological assessments in future large cohort studies examining physical exposures in relation to work-related musculoskeletal symptoms. The aim of this study was to evaluate two types of models that predict arm-wrist-hand physical exposures (i.e. muscle activity, wrist postures and kinematics, and keyboard and mouse forces) during computer use, which differed only with respect to the candidate predicting variables: (i) a full set of predicting variables, including self-reported factors, software-recorded computer usage patterns, and worksite measurements of anthropometrics and workstation set-up (full models); and (ii) a practical set of predicting variables, including only the self-reported factors and software-recorded computer usage patterns, which are relatively easy to assess (practical models). Prediction models were built using data from a field study among 117 office workers who were symptom-free at the time of measurement. Arm-wrist-hand physical exposures were measured for approximately two hours while workers performed their own computer work. Each worker's anthropometry and workstation set-up were measured by an experimenter, computer usage patterns were recorded using software, and self-reported factors (including individual factors, job characteristics, computer work behaviours, psychosocial factors, workstation set-up characteristics, and leisure-time activities) were collected by an online questionnaire. We determined the predictive quality of the models in terms of R2 and root mean squared (RMS) values and exposure classification agreement to low-, medium-, and high-exposure categories (in the practical models only). The full models had R2 values that ranged from 0.16 to 0.80, whereas for the practical models values ranged from 0.05 to 0.43. 
Interquartile ranges were not that different for the two models, indicating that the full models performed better only for some physical exposures. Relative RMS errors ranged between 5% and 19% for the full models, and between 10% and 19% for the practical models. When the predicted physical exposures were classified into low, medium, and high, classification agreement ranged from 26% to 71%. The full prediction models, based on self-reported factors, software-recorded computer usage patterns, and additional measurements of anthropometrics and workstation set-up, show better predictive quality than the practical models based on self-reported factors and recorded computer usage patterns only. However, predictive quality varied largely across the different arm-wrist-hand exposure parameters. Future exploration of the relation between predicted physical exposure and symptoms is therefore recommended only for physical exposures that can be reasonably well predicted. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
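The two quality measures reported here, R2 and RMS error relative to the mean observed exposure, are standard and easy to reproduce. The sketch below uses invented observed/predicted values, not the study's data.

```python
import numpy as np

# Illustrative computation of R^2 and relative RMS error for a predicted
# exposure; the five data points are made up.

observed  = np.array([10.0, 12.0, 9.0, 14.0, 11.0])
predicted = np.array([10.5, 11.5, 9.5, 13.0, 11.5])

ss_res = np.sum((observed - predicted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                    # fraction of variance explained

rms = np.sqrt(np.mean((observed - predicted) ** 2))
relative_rms = rms / observed.mean()          # e.g. 0.10 means 10% of the mean

print(round(r2, 3), round(100 * relative_rms, 1))
```

A relative RMS between 5% and 19%, as in the paper, means the typical prediction error is that fraction of the average measured exposure.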
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, L.; Notkin, D.; Adams, L.
1990-03-31
This task relates to research on programming massively parallel computers. Previous work on the Ensemble concept of programming was extended, and an investigation into nonshared memory models of parallel computation was undertaken. Previous work on the Ensemble concept defined a set of programming abstractions and was used to organize the programming task into three distinct levels: composition of machine instructions, composition of processes, and composition of phases. It was applied to shared memory models of computation. During the present research period, these concepts were extended to nonshared memory models. Also during this period, one Ph.D. thesis was completed, and one book chapter and six conference proceedings papers were published.
NASA Technical Reports Server (NTRS)
Downward, James G.
1992-01-01
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
DOT National Transportation Integrated Search
2009-03-01
This project contains three major parts. In the first part a digital computer simulation model was developed with the aim to model the traffic through a freeway work zone situation. The model was based on the Arena simulation software and used cumula...
Computer model for economic study of unbleached kraft paperboard production
Peter J. Ince
1984-01-01
Unbleached kraft paperboard is produced from wood fiber in an industrial papermaking process. A highly specific and detailed model of the process is presented. The model is also presented as a working computer program. A user of the computer program will provide data on physical parameters of the process and on prices of material inputs and outputs. The program is then...
Computational Modeling of Sinkage of Objects into Porous Bed under Cyclic Loading
NASA Astrophysics Data System (ADS)
Sheikh, B.; Qiu, T.; Liu, X.
2017-12-01
This work is a companion to another abstract submitted to this session on the computational modeling for the prediction of underwater munitions. In the other abstract, the focus is the hydrodynamics and sediment transport. In this work, the focus is on the geotechnical aspect and granular material behavior when the munitions interact with the porous bed. The final goal of the project is to create and utilize a comprehensive modeling framework, which integrates the flow and granular material models, to simulate and investigate the motion of the munitions. In this work, we present the computational modeling of one important process: the sinkage of rigid-body objects into porous bed under cyclic loading. To model the large deformation of granular bed materials around sinking objects under cyclic loading, a rate-independent elasto-plastic constitutive model is implemented into a Smoothed Particle Hydrodynamics (SPH) model. The effect of loading conditions (e.g., amplitude and frequency of shaking), object properties (e.g., geometry and density), and granular bed material properties (e.g., density) on object sinkage is discussed.
NASA Astrophysics Data System (ADS)
Chu, A.
2016-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work implements three of the homogeneous ETAS models described in Ogata (1998). With a model's log-likelihood function, my software finds the Maximum-Likelihood Estimates (MLEs) of the model's parameters to estimate the homogeneous background rate and the temporal and spatial parameters that govern triggering effects. The EM-algorithm is employed for its advantages of stability and robustness (Veen and Schoenberg, 2008). My work also presents comparisons among the three models in robustness, convergence speed, and implementation from theory to computing practice. Up-to-date regional seismic data of seismically active areas such as Southern California and Japan are used to demonstrate the comparisons. Data analysis has been done using the computer languages Java and R. Java has the advantages of being strongly typed and of easy control over memory resources, while R has the advantage of numerous available functions in statistical computing. Comparisons are also made between the two programming languages in convergence and stability, computational speed, and ease of implementation. Issues that may affect convergence, such as spatial shapes, are discussed.
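The triggering structure at the heart of ETAS can be illustrated with a purely temporal toy. In this hedged sketch (invented parameter values, no spatial component, not the full Ogata 1998 model), the conditional intensity is a constant background rate plus a modified-Omori-law contribution from each past event.

```python
# Simplified temporal ETAS-style conditional intensity:
# lambda(t) = mu + sum_i K / (t - t_i + c)^p over past events t_i < t.
# All parameter values here are illustrative, not fitted.

def conditional_intensity(t, history, mu=0.5, K=0.8, c=0.01, p=1.2):
    rate = mu                         # homogeneous background rate
    for t_i in history:
        if t_i < t:                   # only past events trigger
            rate += K / (t - t_i + c) ** p
    return rate

events = [1.0, 2.5, 2.6]             # toy catalog (days)
print(round(conditional_intensity(3.0, events), 3))
```

Maximum-likelihood fitting, as in the abstract, would maximize the point-process log-likelihood of an observed catalog over (mu, K, c, p), for which the EM formulation of Veen and Schoenberg (2008) improves stability.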
Computationally modeling interpersonal trust.
Lee, Jin Joo; Knox, W Bradley; Wormwood, Jolie B; Breazeal, Cynthia; Desteno, David
2013-01-01
We present a computational model capable of predicting, above human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
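The HMM step can be illustrated with a toy scorer. This sketch uses invented transition and emission probabilities (two hidden states, two observable cue types) and the standard forward algorithm; it is not the study's trained model, but it shows how a cue sequence is scored under one trust-level model so that competing models can be compared.

```python
import numpy as np

# Toy 2-state HMM over a sequence of discrete nonverbal cues, scored with
# the forward algorithm; all probabilities are invented for illustration.

pi = np.array([0.6, 0.4])                  # initial state distribution
A  = np.array([[0.7, 0.3],                 # state transition matrix
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],                 # P(cue | state), 2 cue types
               [0.2, 0.8]])

def log_likelihood(obs):
    """Log P(obs) under the HMM, via the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

print(round(log_likelihood([0, 0, 1, 1]), 3))
```

With one such model fitted per trust level, a new interaction's cue sequence is assigned to the level whose model gives it the highest likelihood.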
The rise of machine consciousness: studying consciousness with computational models.
Reggia, James A
2013-08-01
Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. 
The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises, and making predictions concerning future developments. Copyright © 2013 Elsevier Ltd. All rights reserved.
Scaffolding Learning by Modelling: The Effects of Partially Worked-out Models
ERIC Educational Resources Information Center
Mulder, Yvonne G.; Bollen, Lars; de Jong, Ton; Lazonder, Ard W.
2016-01-01
Creating executable computer models is a potentially powerful approach to science learning. Learning by modelling is also challenging because students can easily get overwhelmed by the inherent complexities of the task. This study investigated whether offering partially worked-out models can facilitate students' modelling practices and promote…
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
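The eigenvalue route from a state space model to natural frequencies and damping ratios can be shown on a single-degree-of-freedom toy system. In this sketch (a made-up 1 Hz oscillator at 5% damping, not the paper's experimental data), the discrete-time state matrix is built by exact discretization, and the modal parameters are then recovered from its eigenvalues, one of the state space invariants the paper exploits.

```python
import numpy as np
from scipy.linalg import expm

# Toy SDOF system: natural frequency 1 Hz, damping ratio 5%.
wn, zeta, dt = 2 * np.pi * 1.0, 0.05, 0.01
Ac = np.array([[0.0, 1.0],
               [-wn**2, -2 * zeta * wn]])    # continuous-time state matrix
A = expm(Ac * dt)                            # discrete-time state matrix

# Recover modal parameters from the eigenvalues of A:
mu = np.linalg.eigvals(A)                    # discrete eigenvalues
lam = np.log(mu) / dt                        # continuous eigenvalues
wn_est = np.abs(lam[0])                      # natural frequency (rad/s)
zeta_est = -lam[0].real / wn_est             # damping ratio

print(round(wn_est / (2 * np.pi), 3), round(zeta_est, 3))
```

Modal vectors come from the corresponding eigenvectors mapped through the output matrix; modal masses need the input matrix as well, which is the part the paper derives.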
Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Moses, Gregory
2017-10-01
The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.
NASA Technical Reports Server (NTRS)
2000-01-01
Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.
ERIC Educational Resources Information Center
Carrejo, David; Robertson, William H.
2011-01-01
Computer-based mathematical modeling in physics is a process of constructing models of concepts and the relationships between them, a process characteristic of scientific work. In this manner, computer-based modeling integrates the interactions of natural phenomena through the use of models, which provide structure for theories and a base for…
Generative Computer-Assisted Instruction and Artificial Intelligence. Report No. 5.
ERIC Educational Resources Information Center
Sinnott, Loraine T.
This paper reviews the state-of-the-art in generative computer-assisted instruction and artificial intelligence. It divides relevant research into three areas of instructional modeling: models of the subject matter; models of the learner's state of knowledge; and models of teaching strategies. Within these areas, work sponsored by Advanced…
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
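A minimal sketch of the idea behind such an expansion, assuming a one-dimensional standard-normal input and a simple regression fit; the function names and the toy x² model are illustrative, not from the work itself:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(model, order=4, n_samp=200, seed=0):
    """Fit a 1-D polynomial chaos expansion: regress model outputs on
    probabilists' Hermite polynomials of a standard normal input."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samp)
    Phi = hermevander(xi, order)          # basis matrix, (n_samp, order+1)
    coef, *_ = np.linalg.lstsq(Phi, model(xi), rcond=None)
    return coef

# Toy "model": y = x^2 with x ~ N(0,1).  Since He2(x) = x^2 - 1,
# the exact expansion is y = 1*He0 + 0*He1 + 1*He2.
coef = fit_pce(lambda x: x ** 2)
mean = coef[0]                            # mean of y is the He0 coefficient
var = sum(math.factorial(k) * coef[k] ** 2 for k in range(1, len(coef)))
```

The payoff of the surface is visible in the last two lines: the output mean and variance drop out of the coefficients (using E[He_k²] = k!) without further sampling of the model.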
User's instructions for the cardiovascular Walters model
NASA Technical Reports Server (NTRS)
Croston, R. C.
1973-01-01
The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch-mode simulation for the Sigma 3 computer. The purpose of the model is to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.
Investigation of chemically-reacting supersonic internal flows
NASA Technical Reports Server (NTRS)
Chitsomboon, T.; Tiwari, S. N.
1985-01-01
This report covers work done on the research project Analysis and Computation of Internal Flow Field in a Scramjet Engine. The work is supported by the NASA Langley Research Center (Computational Methods Branch of the High-Speed Aerodynamics Division) through research grant NAG1-423. The governing equations of two-dimensional chemically-reacting flows are presented together with the global two-step chemistry model. The finite-difference algorithm used is illustrated and the method of circumventing the stiffness is discussed. The computer program developed is used to solve two model problems of a premixed chemically-reacting flow. The results obtained are physically reasonable.
Experimental and Computational Investigation of Triple-rotating Blades in a Mower Deck
NASA Astrophysics Data System (ADS)
Chon, Woochong; Amano, Ryoichi S.
Experimental and computational studies were performed on a 1.27 m wide three-spindle lawn mower deck with a side discharge arrangement. Laser Doppler Velocimetry was used to measure the air velocity at 12 different sections under the mower deck. High-speed video provided valuable visual evidence of airflow and grass discharge patterns. Strain gages were attached at several predetermined locations on the mower blades to measure strain. In the computational fluid dynamics work, computer-based analytical studies were performed in two different trials. First, two-dimensional blade shapes at several arbitrary radial sections were selected for flow computations around the blade model. Then a three-dimensional full deck model was developed and compared with the experimental results.
Motion Planning in a Society of Intelligent Mobile Agents
NASA Technical Reports Server (NTRS)
Esterline, Albert C.; Shafto, Michael (Technical Monitor)
2002-01-01
The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and the process-algebraic patterns of communication actions intended to establish that common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.
Thermodynamic model effects on the design and optimization of natural gas plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, S.; Zabaloy, M.; Brignole, E.A.
1999-07-01
The design and optimization of natural gas plants is carried out on the basis of process simulators. The physical property package is generally based on cubic equations of state. Phase equilibrium conditions, thermodynamic functions, equilibrium phase separations, work and heat are computed by rigorous thermodynamics. The aim of this work is to analyze the NGL turboexpansion process and identify the process computations that are most sensitive to the accuracy of model predictions. Three equations of state, PR, SRK and the Peneloux modification, are used to study the effect of property predictions on process calculations and plant optimization. It is shown that turboexpander plants have moderate sensitivity with respect to phase equilibrium computations, but higher accuracy is required for the prediction of enthalpy and turboexpansion work. The effect of modeling CO2 solubility is also critical in mixtures with high CO2 content in the feed.
Statistical models of lunar rocks and regolith
NASA Technical Reports Server (NTRS)
Marcus, A. H.
1973-01-01
The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.
ERIC Educational Resources Information Center
Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana
2014-01-01
The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…
Collaborative Working Architecture for IoT-Based Applications.
Mora, Higinio; Signes-Pont, María Teresa; Gil, David; Johnsson, Magnus
2018-05-23
The new sensing applications need enhanced computing capabilities to handle the requirements of complex and huge data processing. The Internet of Things (IoT) concept brings processing and communication features to devices. In addition, the Cloud Computing paradigm provides resources and infrastructures for performing the computations and outsourcing work from the IoT devices. This scenario opens new opportunities for designing advanced IoT-based applications; however, there is still much research to be done to properly gear all these systems to work together. This work proposes a collaborative model and an architecture to take advantage of the available computing resources. The resulting architecture involves a novel network design with different levels which combines sensing and processing capabilities based on the Mobile Cloud Computing (MCC) paradigm. An experiment is included to demonstrate that this approach can be used in diverse real applications. The results show the flexibility of the architecture to perform complex computational tasks of advanced applications.
Making classical ground-state spin computing fault-tolerant.
Crosson, I J; Bacon, D; Brown, K R
2010-09-01
We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.
Computer use, symptoms, and quality of life.
Hayes, John R; Sheedy, James E; Stelmack, Joan A; Heaney, Catherine A
2007-08-01
To model the effects of computer use on reported visual and physical symptoms and to measure the effects upon quality of life measures. A survey of 1000 university employees (70.5% adjusted response rate) assessed visual and physical symptoms, job, physical and mental demands, ability to control/influence work, amount of work at a computer, computer work environment, relations with others at work, life and job satisfaction, and quality of life. Data were analyzed to determine whether self-reported eye symptoms are associated with perceived quality of life. The study also explored the factors that are associated with eye symptoms. Structural equation modeling and multiple regression analyses were used to assess the hypotheses. Seventy percent of the employees used some form of vision correction during computer use, 2.9% used glasses specifically prescribed for computer use, and 8% had had refractive surgery. Employees spent an average of 6 h per day at the computer. In a multiple regression framework, the latent variable eye symptoms was significantly associated with a composite quality of life variable (p = 0.02) after adjusting for job quality, job satisfaction, supervisor relations, co-worker relations, mental and physical load of the job, and job demand. Age and gender were not significantly associated with symptoms. After adjusting for age, gender, ergonomics, hours at the computer, and exercise, eye symptoms were significantly associated with physical symptoms (p < 0.001) accounting for 48% of the variance. Environmental variability at work was associated with eye symptoms and eye symptoms demonstrated a significant impact on quality of life and physical symptoms.
2014-06-12
Ballistic-Failure Mechanisms in Gas Metal Arc Welds of Mil A46100 Armor-Grade Steel: A Computational Investigation
In our recent work, a multi-physics computational model for the… introduction of the sixth module in the present work in recognition of the fact that in thick steel GMAW weldments, the overall ballistic performance…
Computational Psychometrics for Modeling System Dynamics during Stressful Disasters.
Cipresso, Pietro; Bessi, Alessandro; Colombo, Desirée; Pedroli, Elisa; Riva, Giuseppe
2017-01-01
Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil, Benny Manuel; Ballance, Robert; Haskell, Karen
Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.
Multiagent Modeling and Simulation in Human-Robot Mission Operations Work System Design
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Sims, Michael H.; Shafto, Michael (Technical Monitor)
2001-01-01
This paper describes a collaborative multiagent modeling and simulation approach for designing work systems. The Brahms environment is used to model mission operations for a semi-autonomous robot mission to the Moon at the work practice level. It shows the impact of human-decision making on the activities and energy consumption of a robot. A collaborative work systems design methodology is described that allows informal models, created with users and stakeholders, to be used as input to the development of formal computational models.
The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems
2011-11-01
accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful… With a successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of… needed. The NRC report discusses the requirements for effective use of such computing power. One needs “models, algorithms, software, hardware…”
Visualization of anthropometric measures of workers in computer 3D modeling of work place.
Mijović, B; Ujević, D; Baksa, S
2001-12-01
In this work, a work place is visualized in 3D by means of a computer-made 3D machine model and computer animation of a worker. By placing 3D characters in inverse kinematic and dynamic relation with the operating part of a machine, the biomechanical characteristics of the worker's body have been determined. The dimensions of the machine were determined by inspection of technical documentation as well as by direct measurements and camera recordings of the machine. On the basis of the measured body height of workers, all relevant anthropometric measures were determined by a computer program developed by the authors. Knowing the anthropometric measures, the vision fields and the reach zones used in forming work places, the exact postures of workers performing technological procedures were determined. The minimal and maximal rotation angles and the translation of the upper and lower arm, which are the basis for the analysis of worker loading, were analyzed. The dimensions of the space occupied by a body are obtained by computer anthropometric analysis of movement, e.g. range of the arms and position of the legs, head and back. The influence of work place design on correct worker posture during work has been reconsidered, so that energy consumption and fatigue can be reduced to a minimum.
Weighted Watson-Crick automata
NASA Astrophysics Data System (ADS)
Tamrin, Mohd Izzuddin Mohd; Turaev, Sherzod; Sembok, Tengku Mohd Tengku
2014-07-01
There is a tremendous body of work in biotechnology, especially in the area of DNA molecules. The computing community is attempting to develop smaller computing devices through computational models that are based on operations performed on DNA molecules. A Watson-Crick automaton, a theoretical model for DNA-based computation, has two reading heads and works on double-stranded sequences whose strands are related by a complementarity relation similar to the Watson-Crick complementarity of DNA nucleotides. Over time, several variants of Watson-Crick automata have been introduced and investigated. However, they cannot be used as suitable DNA-based computational models for molecular stochastic processes and fuzzy processes that are related to important practical problems such as molecular parsing, gene disease detection, and food authentication. In this paper we define new variants of Watson-Crick automata, called weighted Watson-Crick automata, developing theoretical models for molecular stochastic and fuzzy processes. We define weighted Watson-Crick automata by adapting weight restriction mechanisms associated with formal grammars and automata. We also study the generative capacities of weighted Watson-Crick automata, including probabilistic and fuzzy variants. We show that weighted variants of Watson-Crick automata increase the generative power.
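As an illustrative sketch only (not the authors' formalism): the weighting idea can be conveyed with a drastically simplified automaton whose two heads advance in lockstep, multiplying transition weights along a run. Perfect Watson-Crick pairs keep full weight; mismatches are admitted at a fuzzy penalty. Real Watson-Crick automata allow the heads to move independently, which this sketch deliberately omits.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def acceptance_weight(upper, lower, trans, start, finals):
    """Total weight of all runs of a (simplified) weighted Watson-Crick
    automaton whose two heads advance in lockstep over the double strand.
    trans: {(state, upper_sym, lower_sym): [(next_state, weight), ...]}"""
    weights = {start: 1.0}                       # state -> accumulated weight
    for u, l in zip(upper, lower):
        nxt = {}
        for state, w in weights.items():
            for q, tw in trans.get((state, u, l), []):
                nxt[q] = nxt.get(q, 0.0) + w * tw
        weights = nxt
    return sum(w for q, w in weights.items() if q in finals)

# A one-state fuzzy matcher: complementary pairs keep weight 1.0,
# mismatched pairs are admitted at a penalty of 0.1.
trans = {}
for u in "ATCG":
    for l in "ATCG":
        trans[("q", u, l)] = [("q", 1.0 if COMPLEMENT[u] == l else 0.1)]

w_good = acceptance_weight("ATCG", "TAGC", trans, "q", {"q"})    # 1.0
w_one_mm = acceptance_weight("ATCG", "TAGG", trans, "q", {"q"})  # 0.1
```

Thresholding the returned weight then gives the fuzzy accept/reject behavior the abstract alludes to for applications such as food authentication.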
ERIC Educational Resources Information Center
Nikolaidou, Georgia N.
2012-01-01
This exploratory work describes and analyses the collaborative interactions that emerge during computer-based music composition in the primary school. The study draws on socio-cultural theories of learning, originated within Vygotsky's theoretical context, and proposes a new model, namely Computer-mediated Praxis and Logos under Synergy (ComPLuS).…
Exploring the Integration of Computational Modeling in the ASU Modeling Curriculum
NASA Astrophysics Data System (ADS)
Schatz, Michael; Aiken, John; Burk, John; Caballero, Marcos; Douglas, Scott; Thoms, Brian
2012-03-01
We describe the implementation of computational modeling in a ninth grade classroom in the context of the Arizona Modeling Instruction physics curriculum. Using a high-level programming environment (VPython), students develop computational models to predict the motion of objects in a variety of physical situations (e.g., constant net force), to simulate real-world phenomena (e.g., a car crash), and to visualize abstract quantities (e.g., acceleration). We discuss how VPython allows students to utilize all four structures that describe a model as given by the ASU Modeling Instruction curriculum. Implications for future work will also be discussed.
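A plain-Python analogue of the kind of constant-net-force model the students write (VPython adds 3D visualization on top of essentially this update loop; the numbers here are an invented example, not from the curriculum):

```python
# Euler-Cromer update for an object under a constant net force:
# update velocity from the force, then position from the new velocity.
def simulate(m, F, x0, v0, dt=0.01, t_end=1.0):
    x, v, t = x0, v0, 0.0
    while t < t_end:
        a = F / m            # constant net force -> constant acceleration
        v += a * dt          # velocity first (Euler-Cromer)
        x += v * dt          # then position
        t += dt
    return x, v

# 1 kg object, 2 N net force, starting from rest, for 1 s (a = 2 m/s^2).
x, v = simulate(m=1.0, F=2.0, x0=0.0, v0=0.0)
```

Swapping the constant-force line for, say, a drag law is what lets the same loop cover the curriculum's other physical situations.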
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
NASA Astrophysics Data System (ADS)
Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.
2007-09-01
This work aims to present a purely 3-D CFD model, accurate and efficient, for the simulation of pilot-scale CFB hydrodynamics. The accuracy of the model was investigated as a function of the numerical parameters, in order to derive an optimum model setup with respect to computational cost. The necessity of an in-depth examination of hydrodynamics emerges from the trend to scale up CFBCs. This scale-up brings forward numerous design problems and uncertainties, which can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows fast, while computational power is limited. However, the question of optimum efficiency has not been investigated thoroughly in the literature, as authors have been more concerned with the accuracy and validity of their models. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, find the optimum set of these parameters, and thus establish this technique as a competitive method for the simulation and design of industrial, large-scale beds, where the computational cost is otherwise prohibitive. In the tests performed in this work, the influence of the turbulence modeling approach, grid density in time and space, and discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate an adequate time period for the averaging of all instantaneous values. The compliance with the experimental measurements was very good. The basic differences between the predictions that arose from the various model setups were pointed out and analyzed.
The results showed that a model with high-order spatial discretization schemes, applied on a coarse grid with averaging of the instantaneous scalar values over a 20 s period, adequately described the transient hydrodynamic behaviour of a pilot CFB while keeping the computational cost low. Flow patterns inside the bed, such as the core-annulus flow and the transport of clusters, were at least qualitatively captured.
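The Fourier step mentioned above can be sketched as follows; this is a generic illustration with a synthetic signal, not the authors' rig data:

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Dominant frequency of a sampled signal via the FFT amplitude
    spectrum (mean removed so the DC bin cannot win)."""
    sig = np.asarray(signal) - np.mean(signal)
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    return freqs[np.argmax(spec)]

# Synthetic pressure-like signal: a 2 Hz oscillation plus noise,
# sampled at 100 Hz for 20 s.
rng = np.random.default_rng(1)
t = np.arange(0, 20, 0.01)
p = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)
f = dominant_frequency(p, 0.01)
```

Once the dominant frequency f is known, averaging over a whole number of 1/f periods (e.g. the 20 s window above) keeps the time-averaged fields free of phase bias.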
Simulations with Elaborated Worked Example Modeling: Beneficial Effects on Schema Acquisition
ERIC Educational Resources Information Center
Meier, Debra K.; Reinhard, Karl J.; Carter, David O.; Brooks, David W.
2008-01-01
Worked examples have been effective in enhancing learning outcomes, especially with novice learners. Most of this research has been conducted in laboratory settings. This study examined the impact of embedding elaborated worked example modeling in a computer simulation practice activity on learning achievement among 39 undergraduate students…
The modeling and simulation of visuospatial working memory
Liang, Lina; Zhang, Zhikang
2010-01-01
Camperi and Wang (J Comput Neurosci 5:383–405, 1998) presented a network model for working memory that combines intrinsic cellular bistability with the recurrent network architecture of the neocortex, while Fall and Rinzel (J Comput Neurosci 20:97–107, 2006) replaced this intrinsic bistability with a biological mechanism, the Ca2+ release subsystem. In this study, we aim to further expand the above work. We integrate the traditional firing-rate network with Ca2+ subsystem-induced bistability, amend the synaptic weights, and suggest that Ca2+ concentration only increases the efficacy of synaptic input but has nothing to do with the external input for the transient cue. We found that our network model maintained persistent activity in response to a brief transient stimulus, like the previous two models, and that working memory performance was resistant to noise and distraction stimuli when the Ca2+ subsystem was tuned to be bistable. PMID:22132045
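A heavily reduced sketch of the common core of such models (one self-exciting rate unit instead of a network, a generic sigmoid in place of the Ca2+ machinery, and invented parameter values): a transient cue switches the unit into a high-rate state that recurrent feedback then sustains, which is the persistent activity the abstract refers to.

```python
import numpy as np

def f(x, theta=0.5, sigma=0.05):
    # steep sigmoid gain: two stable rates (near 0 and near 1) when w = 1
    return 1.0 / (1.0 + np.exp(-(x - theta) / sigma))

def run(cue_on, cue_off, t_end=3.0, dt=0.001, tau=0.02, w=1.0, I_cue=1.0):
    """Firing rate of one self-exciting unit, forward-Euler integrated:
    tau dr/dt = -r + f(w r + I), with a transient cue current I."""
    r, trace = 0.0, []
    for t in np.arange(0.0, t_end, dt):
        I = I_cue if cue_on <= t < cue_off else 0.0
        r += dt / tau * (-r + f(w * r + I))
        trace.append(r)
    return np.array(trace)

trace = run(cue_on=0.5, cue_off=0.7)
# rate is near zero before the cue and stays high long after the cue ends
```

In the papers cited above, the bistability that this sigmoid provides is supplied instead by intrinsic cellular dynamics (Camperi and Wang) or by the Ca2+ release subsystem (Fall and Rinzel).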
Fast associative memory + slow neural circuitry = the computational model of the brain.
NASA Astrophysics Data System (ADS)
Berkovich, Simon; Berkovich, Efraim; Lapir, Gennady
1997-08-01
We propose a computational model of the brain based on a fast associative memory and relatively slow neural processors. In this model, processing time is expensive but memory access is not, and therefore most algorithmic tasks would be accomplished by using large look-up tables as opposed to calculating. The essential feature of an associative memory in this context (characteristic of a holographic type of memory) is that it works without an explicit mechanism for resolution of multiple responses. As a result, the slow neuronal processing elements, overwhelmed by the flow of information, operate as a set of templates for ranking the retrieved information. This structure addresses the primary controversy in brain architecture: distributed organization of memory vs. localization of processing centers. This computational model offers an intriguing explanation of many of the paradoxical features of the brain architecture, such as integration of sensors (through a DMA mechanism), subliminal perception, universality of software, interrupts, fault tolerance, certain bizarre possibilities for rapid arithmetic, etc. In conventional computer science this type of computational model has not attracted attention, as it goes against the technological grain by using a working memory faster than the processing elements.
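A toy sketch of the proposed division of labor (entirely illustrative; the paper describes no code): a content-addressable store returns every match to a partial cue without resolving them, and a separate slow "template" stage ranks whatever came back.

```python
# Fast stage: associative recall of all items consistent with a partial cue,
# with no mechanism for resolving multiple responses.
def store(memory, key, value):
    memory.setdefault(key, []).append(value)

def recall(memory, partial):
    """All values whose key matches the cue at every specified position
    ('?' marks an unspecified position in the cue)."""
    hits = []
    for key, values in memory.items():
        if len(key) == len(partial) and all(
            p in ("?", k) for p, k in zip(partial, key)
        ):
            hits.extend(values)
    return hits

# Slow stage: rank the multiple responses by similarity to a template.
def rank(hits, template):
    return sorted(hits, key=lambda h: -sum(a == b for a, b in zip(h, template)))

mem = {}
store(mem, "cat", "feline")
store(mem, "car", "vehicle")
store(mem, "cot", "bed")
hits = recall(mem, "c?t")        # both "cat" and "cot" match the cue
```

The point of the sketch is the asymmetry: `recall` is cheap and indiscriminate, while the expensive discrimination is deferred to `rank`, mirroring the fast-memory/slow-processor split the model proposes.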
Nicolakakis, Nektaria; Stock, Susan R; Abrahamowicz, Michal; Kline, Rex; Messing, Karen
2017-11-01
Computer work has been identified as a risk factor for upper extremity musculoskeletal problems (UEMSP). But few studies have investigated how psychosocial and organizational work factors affect this relation. Nor have gender differences in the relation between UEMSP and these work factors been studied. We sought to estimate: (1) the association between UEMSP and a range of physical, psychosocial and organizational work exposures, including the duration of computer work, and (2) the moderating effect of psychosocial work exposures on the relation between computer work and UEMSP. Using 2007-2008 Québec survey data on 2478 workers, we carried out gender-stratified multivariable logistic regression modeling and two-way interaction analyses. In both genders, odds of UEMSP were higher with exposure to high physical work demands and emotionally demanding work. Additionally among women, UEMSP were associated with duration of occupational computer exposure, sexual harassment, tense situations when dealing with clients, high quantitative demands and lack of prospects for promotion, and among men, with low coworker support, episodes of unemployment, low job security and contradictory work demands. Among women, the effect of computer work on UEMSP was considerably increased in the presence of emotionally demanding work, and may also be moderated by low recognition at work, contradictory work demands, and low supervisor support. These results suggest that the relations between UEMSP and computer work are moderated by psychosocial work exposures and that the relations between working conditions and UEMSP are somewhat different for each gender, highlighting the complexity of these relations and the importance of considering gender.
Computational Models of Relational Processes in Cognitive Development
ERIC Educational Resources Information Center
Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven
2012-01-01
Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…
NASA Astrophysics Data System (ADS)
Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration
2014-06-01
The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate various cloud resources transparently into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution explains the cloud integration models being evaluated and discusses what ATLAS has learned through collaboration with leading commercial and academic cloud providers.
Role of Computer Assisted Instruction (CAI) in an Introductory Computer Concepts Course.
ERIC Educational Resources Information Center
Skudrna, Vincent J.
1997-01-01
Discusses the role of computer assisted instruction (CAI) in undergraduate education via a survey of related literature and specific applications. Describes an undergraduate computer concepts course and includes appendices of instructions, flowcharts, programs, sample student work in accounting, COBOL instructional model, decision logic in a…
An Educational Approach to Computationally Modeling Dynamical Systems
ERIC Educational Resources Information Center
Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl
2009-01-01
Chemists have used computational science methodologies for a number of decades, and their utility continues unabated. For this reason we developed an advanced lab in computational chemistry in which students gain an understanding of the general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…
Impact of Collaborative Work on Technology Acceptance: A Case Study from Virtual Computing
ERIC Educational Resources Information Center
Konak, Abdullah; Kulturel-Konak, Sadan; Nasereddin, Mahdi; Bartolacci, Michael R.
2017-01-01
Aim/Purpose: This paper utilizes the Technology Acceptance Model (TAM) to examine the extent to which acceptance of Remote Virtual Computer Laboratories (RVCLs) is affected by students' technological backgrounds and the role of collaborative work. Background: RVCLs are widely used in information technology and cyber security education to provide…
Ku-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Magnusson, H. G.; Goff, M. F.
1984-01-01
All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.
The Performance Improvement of the Lagrangian Particle Dispersion Model (LPDM) Using Graphics Processing Unit (GPU) Computing
Dawson, Leelinda P
2017-08-01
CUDA provides access to the GPU for general-purpose processing and is designed to work easily with multiple programming languages, including Fortran.
ERIC Educational Resources Information Center
Andersen, Erling B.
A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…
Linking working memory and long-term memory: a computational model of the learning of new words.
Jones, Gary; Gobet, Fernand; Pine, Julian M
2007-11-01
The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory (LTM) has been proposed. In this paper, we present a computational model of children's vocabulary acquisition (EPAM-VOC) that specifies how phonological working memory and LTM interact. The model learns phoneme sequences, which are stored in LTM and mediate how much information can be held in working memory. The model's behaviour is compared with that of children in a new study of NWR, conducted in order to ensure the same nonword stimuli and methodology across ages. EPAM-VOC shows a pattern of results similar to that of children: performance is better for shorter nonwords and for wordlike nonwords, and performance improves with age. EPAM-VOC also simulates the superior performance for single consonant nonwords over clustered consonant nonwords found in previous NWR studies. EPAM-VOC provides a simple and elegant computational account of some of the key processes involved in the learning of new words: it specifies how phonological working memory and LTM interact; makes testable predictions; and suggests that developmental changes in NWR performance may reflect differences in the amount of information that has been encoded in LTM rather than developmental changes in working memory capacity.
Computer model of cardiovascular control system responses to exercise
NASA Technical Reports Server (NTRS)
Croston, R. C.; Rummel, J. A.; Kay, F. J.
1973-01-01
Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.
Computational Psychometrics for Modeling System Dynamics during Stressful Disasters
Cipresso, Pietro; Bessi, Alessandro; Colombo, Desirée; Pedroli, Elisa; Riva, Giuseppe
2017-01-01
Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future. PMID:28861026
Novel opportunities for computational biology and sociology in drug discovery
Yao, Lixia; Evans, James A.; Rzhetsky, Andrey
2013-01-01
Current drug discovery is impossible without sophisticated modeling and computation. In this review we outline previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy–industry links for scientific and human benefit. Attention to these opportunities could promise punctuated advance and will complement the well-established computational work on which drug discovery currently relies. PMID:20349528
NASA Astrophysics Data System (ADS)
Tucker, Laura Jane
Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis in agar plates forms two classes of patterns (morphotypes). The first class, called the dendritic morphotype, has radially directed branches. The second class, called the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, a suitable computational approach was not known for solving a continuum chiral model. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.
ERIC Educational Resources Information Center
Sins, Patrick H. M.; van Joolingen, Wouter R.; Savelsbergh, Elwin R.; van Hout-Wolters, Bernadette
2008-01-01
Purpose of the present study was to test a conceptual model of relations among achievement goal orientation, self-efficacy, cognitive processing, and achievement of students working within a particular collaborative task context. The task involved a collaborative computer-based modeling task. In order to test the model, group measures of…
Acoustic backscatter models of fish: Gradual or punctuated evolution
NASA Astrophysics Data System (ADS)
Horne, John K.
2004-05-01
Sound-scattering characteristics of aquatic organisms are routinely investigated using theoretical and numerical models. Development of the inverse approach by van Holliday and colleagues in the 1970s catalyzed the development and validation of backscatter models for fish and zooplankton. As the understanding of biological scattering properties increased, so did the number and computational sophistication of backscatter models. The complexity of data used to represent modeled organisms has also evolved in parallel to model development. Simple geometric shapes representing body components or the whole organism have been replaced by anatomically accurate representations derived from imaging sensors such as computer-aided tomography (CAT) scans. In contrast, Medwin and Clay (1998) recommend that fish and zooplankton should be described by simple theories and models, without acoustically superfluous extensions. Since van Holliday's early work, how have data and computational complexity influenced the accuracy and precision of model predictions? How has the understanding of aquatic organism scattering properties increased? Significant steps in the history of model development will be identified, and changes in model results will be characterized and compared. [Work supported by ONR and the Alaska Fisheries Science Center.]
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
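The ordinary-least-squares baseline the abstract contrasts with L2 calibration can be illustrated minimally: fit a tuning parameter of an imperfect computer model by minimizing the squared discrepancy with physical observations. Everything below (the model form, the "true" process, the data) is an invented toy, not the paper's method:

```python
import numpy as np

def computer_model(x, theta):
    # hypothetical imperfect computer model: linear in x, missing curvature
    return theta * x

# "Physical" observations from a slightly different true process, plus noise.
rng = np.random.default_rng(1)
x_obs = np.linspace(0, 1, 50)
y_obs = 1.5 * x_obs + 0.2 * x_obs**2 + rng.normal(0, 0.05, 50)

# OLS calibration: pick theta minimizing the sum of squared discrepancies.
thetas = np.linspace(0, 3, 3001)
sse = [np.sum((y_obs - computer_model(x_obs, t))**2) for t in thetas]
theta_ols = thetas[np.argmin(sse)]
print(round(theta_ols, 2))  # near 1.65, the least-squares projection of the truth
```

Because the model is imperfect (it omits the quadratic term), the calibrated theta compensates for the missing curvature rather than recovering a "true" physical value, which is exactly the situation motivating more careful calibration methods.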
Modeling biological problems in computer science: a case study in genome assembly.
Medvedev, Paul
2018-01-30
As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded.
Occupational stress in human computer interaction.
Smith, M J; Conway, F T; Karsh, B T
1999-04-01
There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
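The series-parallel-reducible structure mentioned above can be illustrated with a deterministic toy: serial composition sums completion times, parallel composition (given enough processors) takes the maximum over its branches. This sketch shows only the reduction idea, not Mak's queueing-network procedure, which additionally accounts for contention and queueing delays:

```python
# Evaluate a series-parallel task system recursively: a leaf carries its
# service time; "series" sums child times; "parallel" takes their maximum.
def completion_time(node):
    kind, children = node
    if kind == "task":
        return children            # leaf: its (deterministic) service time
    times = [completion_time(c) for c in children]
    return sum(times) if kind == "series" else max(times)

# Hypothetical task system: t1 ; (t2 || (t3 ; t4)) ; t5
system = ("series", [
    ("task", 2.0),
    ("parallel", [("task", 3.0),
                  ("series", [("task", 1.0), ("task", 1.5)])]),
    ("task", 0.5),
])
print(completion_time(system))  # 2.0 + max(3.0, 1.0 + 1.5) + 0.5 = 5.5
```

Hierarchical decomposition works because every series-parallel graph reduces bottom-up by exactly these two rules, which is what keeps the procedure polynomial rather than exponential in the task count.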
Computational Aeroelastic Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Sanetrik, Mark D.; Chwalowski, Pawel; Connolly, Joseph
2015-01-01
An overview of NASA's Commercial Supersonic Technology (CST) Aeroservoelasticity (ASE) element is provided with a focus on recent computational aeroelastic analyses of a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The overview includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, unstructured CFD grids, and CFD-based aeroelastic analyses. In addition, a summary of the work involving the development of aeroelastic reduced-order models (ROMs) and the development of an aero-propulso-servo-elastic (APSE) model is provided.
NASA Astrophysics Data System (ADS)
Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam
2016-12-01
Modelling of multi-million-atom semiconductor structures is important, as it not only predicts properties of physically realizable novel materials but can also accelerate advanced device design. This work elaborates a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s∗ tight-binding approach to describe multi-million-atom structures and simulates electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool scales well on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with remarkable performance enhancement on recent clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Developed through an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling on advanced many-core HPC clusters. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development to researchers in the field of computational nanoelectronics.
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power, enabling more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
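A common concrete instance of reduced order modeling is snapshot-based projection (POD): run the full model a few times, extract a low-rank basis from the snapshots by SVD, and evaluate new cases in the reduced space. The sketch below is a generic single-physics toy with an invented parametrized "model", not the SCALE coupled-code system:

```python
import numpy as np

# Collect snapshots: columns are full-order solutions at sampled parameters.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
params = rng.uniform(1, 3, 30)
snapshots = np.column_stack([p * np.sin(p * np.pi * x) for p in params])

# Keep the dominant left singular vectors as a reduced basis capturing
# 99.99% of the snapshot "energy" (squared singular values).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]

# A new full-order solution is approximated cheaply by projection.
new = 2.2 * np.sin(2.2 * np.pi * x)
approx = basis @ (basis.T @ new)
err = np.linalg.norm(new - approx) / np.linalg.norm(new)
print(r, err)  # reduced rank well below 30, small relative error
```

The dimensionality reduction is worthwhile precisely because, as the abstract notes, the dominant input-output relationships live in a much smaller subspace than the full phase-space discretization.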
Mathematical and computational model for the analysis of micro hybrid rocket motor
NASA Astrophysics Data System (ADS)
Stoia-Djeska, Marius; Mingireanu, Florin
2012-11-01
The hybrid rockets use a two-phase propellant system. In the present work we first develop a simplified model of the coupling of the hybrid combustion process with the complete unsteady flow, starting from the combustion port and ending with the nozzle. The physical and mathematical models are adapted to the simulation of micro hybrid rocket motors. The flow model is based on the one-dimensional Euler equations with source terms. The flow equations and the fuel regression rate law are solved in a coupled manner. The numerical simulations are based on a second-order cell-centred finite volume method with implicit fourth-order Runge-Kutta time integration. The numerical results obtained with this model show a good agreement with published experimental and numerical results. The computational model developed in this work is simple, computationally efficient and offers the advantage of taking into account a large number of functional and constructive parameters that are used by the engineers.
Round Girls in Square Computers: Feminist Perspectives on the Aesthetics of Computer Hardware.
ERIC Educational Resources Information Center
Carr-Chellman, Alison A.; Marra, Rose M.; Roberts, Shari L.
2002-01-01
Considers issues related to computer hardware, aesthetics, and gender. Explores how gender has influenced the design of computer hardware and how these gender-driven aesthetics may have worked to maintain, extend, or alter gender distinctions, roles, and stereotypes; discusses masculine media representations; and presents an alternative model.…
Modeling the Learner in Computer-Assisted Instruction
ERIC Educational Resources Information Center
Fletcher, J. D.
1975-01-01
This paper briefly reviews relevant work in four areas: 1) quantitative models of memory; 2) regression models of performance; 3) automation models of performance; and 4) artificial intelligence. (Author/HB)
NASA Technical Reports Server (NTRS)
Connelly, L. C.
1977-01-01
The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. The approach to be used in developing a working model of the mission planning processor is documented. The approach includes top-down design, structured programming techniques, and application of NASA approved software development standards. This development approach: (1) promotes cost effective software development, (2) enhances the quality and reliability of the working model, (3) encourages the sharing of the working model through a standard approach, and (4) promotes portability of the working model to other computer systems.
ERIC Educational Resources Information Center
Jones, Gary; Gobet, Fernand; Pine, Julian M.
2008-01-01
Increasing working memory (WM) capacity is often cited as a major influence on children's development and yet WM capacity is difficult to examine independently of long-term knowledge. A computational model of children's nonword repetition (NWR) performance is presented that independently manipulates long-term knowledge and WM capacity to determine…
Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.
2017-06-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.
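A minimal one-dimensional Kriging surrogate of the kind described, interpolating a few samples of an expensive function with an RBF covariance and then predicting cheaply, can be sketched as follows. The target function and kernel length scale are invented stand-ins, not the iterative equation-of-state solver from the abstract:

```python
import numpy as np

# Squared-exponential (RBF) covariance between two sets of 1-D points.
def rbf(a, b, length=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for the costly solver
x_train = np.linspace(0, 2, 8)                 # a few expensive evaluations
y_train = f(x_train)

# Solve for the interpolation weights; the jitter keeps K invertible.
K = rbf(x_train, x_train) + 1e-10 * np.eye(8)
alpha = np.linalg.solve(K, y_train)

# Posterior-mean prediction at many new points, at negligible cost.
x_new = np.linspace(0, 2, 101)
y_pred = rbf(x_new, x_train) @ alpha

print(np.max(np.abs(y_pred - f(x_new))))       # small interpolation error
```

The trade-off the abstract analyzes appears directly here: adding sampling points shrinks the interpolation error but grows the linear solve, so the choice of training points balances accuracy against speed and memory.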
A ferrofluid based energy harvester: Computational modeling, analysis, and experimental validation
NASA Astrophysics Data System (ADS)
Liu, Qi; Alazemi, Saad F.; Daqaq, Mohammed F.; Li, Gang
2018-03-01
A computational model is described and implemented in this work to analyze the performance of a ferrofluid based electromagnetic energy harvester. The energy harvester converts ambient vibratory energy into an electromotive force through a sloshing motion of a ferrofluid. The computational model solves the coupled Maxwell's equations and Navier-Stokes equations for the dynamic behavior of the magnetic field and fluid motion. The model is validated against experimental results for eight different configurations of the system. The validated model is then employed to study the underlying mechanisms that determine the electromotive force of the energy harvester. Furthermore, computational analysis is performed to test the effect of several modeling aspects, such as three-dimensional effect, surface tension, and type of the ferrofluid-magnetic field coupling on the accuracy of the model prediction.
Parallel Computing for Brain Simulation.
Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A
2017-01-01
The human brain is the most complex system in the known universe; it is therefore one of the greatest mysteries. It provides human beings with extraordinary abilities. However, until now it has not been understood yet how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data in a more efficient way than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have allowed creating the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review about the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It is focused on various works that look for advanced progress in Neuroscience and still others which seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
Computer Synthesis Approaches of Hyperboloid Gear Drives with Linear Contact
NASA Astrophysics Data System (ADS)
Abadjiev, Valentin; Kawasaki, Haruhisa
2014-09-01
Computer-aided design has advanced, producing different types of software for scientific research in the field of gearing theory as well as providing adequate scientific support for gear drive manufacture. Computer programs based on mathematical models resulting from this research are presented here. Modern gear transmissions require the construction of new mathematical approaches to their geometric, technological and strength analysis. The process of optimization, synthesis and design is based on adequate iteration procedures to find an optimal solution by varying definite parameters. The study is dedicated to the accepted methodology in the creation of software for the synthesis of a class of high-reduction hyperboloid gears: Spiroid and Helicon gears (Spiroid and Helicon are trademarks registered by the Illinois Tool Works, Chicago, Ill.). The basic computer products developed here are software based on original mathematical models. They rest on two mathematical models for the synthesis: "upon a pitch contact point" and "upon a mesh region". Computer programs are worked out on the basis of the described mathematical models, and the relations between them are shown. The application of these approaches to the synthesis of the discussed gear drives is illustrated.
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behaviour is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other remedies, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of `delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
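Speed-up estimates of the kind described above follow directly from Amdahl's law; a minimal sketch (the fraction and worker counts below are illustrative, not values from the paper):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Amdahl's law: speed-up of a program whose `parallel_fraction`
    of the runtime can be spread over `n_workers` processors."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# If 90% of a multiscale run is the fine-scale part and it parallelizes
# perfectly over 8 workers, the whole run speeds up by:
print(round(amdahl_speedup(0.9, 8), 2))  # 4.71
```

Note the characteristic saturation: with a 10% serial part, no number of workers can push the speed-up past 10x.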
Finite element simulation of the mechanical impact of computer work on the carpal tunnel syndrome.
Mouzakis, Dionysios E; Rachiotis, George; Zaoutsos, Stefanos; Eleftheriou, Andreas; Malizos, Konstantinos N
2014-09-22
Carpal tunnel syndrome (CTS) is a clinical disorder resulting from compression of the median nerve. The available evidence regarding the association between computer use and CTS is controversial. There is some evidence that computer mouse or keyboard work, or both, is associated with the development of CTS. Despite the availability of pressure measurements in the carpal tunnel during computer work (exposure to keyboard or mouse), there are no available data to support a direct effect of increased intracarpal canal pressure on the median nerve. This study presents an attempt to simulate the direct effects of computer work on the whole carpal area section using finite element analysis. A finite element mesh was produced from computerized tomography scans of the carpal area, involving all tissues present in the carpal tunnel. Two loading scenarios were applied to these models based on biomechanical data measured during computer work. It was found that mouse work can produce large deformation fields in the median nerve region. Also, the high stressing effect of the carpal ligament was verified. Keyboard work produced considerable and heterogeneous elongations along the longitudinal axis of the median nerve. Our study provides evidence that increased intracarpal canal pressures, caused by awkward wrist postures imposed during computer work, were directly associated with deformation of the median nerve. Despite the limitations of the present study, the findings could be considered a contribution to the understanding of the development of CTS due to exposure to computer work. Copyright © 2014 Elsevier Ltd. All rights reserved.
An agent-based computational model for tuberculosis spreading on age-structured populations
NASA Astrophysics Data System (ADS)
Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.
2015-06-01
In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease in age-structured populations. The model proposed is a merger of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. Combining TB with population aging reproduces the coexistence of health states seen in real populations. In addition, the universal exponential behavior of mortality curves is preserved. Finally, the population distribution as a function of age shows that, for high-efficacy treatments, TB is prevalent mostly among the elderly.
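The bit-string aging component referenced above is commonly realized as a Penna-style model, in which each agent's genome is a bit-string and the bits at positions below the current age count as expressed deleterious mutations. A minimal sketch under assumed parameters (the threshold, mutation rate, maturity age and carrying capacity are illustrative, not the paper's values):

```python
import random

random.seed(1)

GENOME_BITS = 32    # maximum attainable age
THRESHOLD = 3       # lethal number of expressed deleterious mutations (assumed)
MUTATIONS = 2       # new deleterious mutations per offspring (assumed)
MATURITY = 8        # reproduction starts at this age (assumed)
CAPACITY = 300      # crude carrying capacity instead of a Verhulst factor

def is_alive(genome: int, age: int) -> bool:
    # Bits at positions < age are "expressed"; too many of them kill the agent.
    expressed = bin(genome & ((1 << age) - 1)).count("1")
    return expressed < THRESHOLD

def offspring(genome: int) -> int:
    # Copy the parent genome and switch on MUTATIONS random bits.
    for _ in range(MUTATIONS):
        genome |= 1 << random.randrange(GENOME_BITS)
    return genome

def step(population):
    survivors = [(g, a + 1) for g, a in population
                 if a + 1 <= GENOME_BITS and is_alive(g, a + 1)]
    births = [(offspring(g), 0) for g, a in survivors if a >= MATURITY]
    new_pop = survivors + births
    if len(new_pop) > CAPACITY:          # random cull at the capacity limit
        new_pop = random.sample(new_pop, CAPACITY)
    return new_pop

pop = [(0, 0) for _ in range(100)]       # mutation-free newborns
for _ in range(20):
    pop = step(pop)
print(0 < len(pop) <= CAPACITY)          # True: population persists under the cap
```

Coupling such an aging substrate to an infection model is what lets mortality curves and disease prevalence interact by age, as the abstract describes.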
NASA Technical Reports Server (NTRS)
Mahalingam, Sudhakar; Menart, James A.
2005-01-01
Computational modeling of the plasma located in the discharge chamber of an ion engine is an important activity so that the development and design of the next generation of ion engines may be enhanced. In this work a computational tool called XOOPIC is used to model the primary electrons, secondary electrons, and ions inside the discharge chamber. The details of this computational tool are discussed in this paper. Preliminary results from XOOPIC are presented. The results presented include particle number density distributions for the primary electrons, the secondary electrons, and the ions. In addition, the total number of a particular particle species in the discharge chamber as a function of time, electric potential maps, and magnetic field maps are presented. A primary electron number density plot from PRIMA is given in this paper so that the results of XOOPIC can be compared to it. PRIMA is a computer code that the present investigators have used in much of their previous work; it provides results that compare well with experimental results. PRIMA only models the primary electrons in the discharge chamber. Modeling ions and secondary electrons, as well as the primary electrons, will greatly increase our ability to predict different characteristics of the plasma discharge used in an ion engine.
Novel opportunities for computational biology and sociology in drug discovery
Yao, Lixia
2009-01-01
Drug discovery today is impossible without sophisticated modeling and computation. In this review we touch on previous advances in computational biology and by tracing the steps involved in pharmaceutical development, we explore a range of novel, high value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy-industry ties for scientific and human benefit. Attention to these opportunities could promise punctuated advance, and will complement the well-established computational work on which drug discovery currently relies. PMID:19674801
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadayappan, Ponnuswamy
Exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today's machines. Systems software for exascale machines must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. We propose a new approach to the data and work distribution model provided by system software based on the unifying formalism of an abstract file system. The proposed hierarchical data model provides simple, familiar visibility and access to data structures through the file system hierarchy, while providing fault tolerance through selective redundancy. The hierarchical task model features work queues whose form and organization are represented as file system objects. Data and work are both first-class entities. By exposing the relationships between data and work to the runtime system, information is available to optimize execution time and provide fault tolerance. The data distribution scheme provides replication (where desirable and possible) for fault tolerance and efficiency, and it is hierarchical to make it possible to take advantage of locality. The user, tools, and applications, including legacy applications, can interface with the data, work queues, and one another through the abstract file model. This runtime environment will provide multiple interfaces to support traditional Message Passing Interface applications, languages developed under DARPA's High Productivity Computing Systems program, as well as other, experimental programming models. We will validate our runtime system with pilot codes on existing platforms and will use simulation to validate for exascale-class platforms.
In this final report, we summarize research results from the work done at the Ohio State University towards the larger goals of the project listed above.
Toward a multiscale modeling framework for understanding serotonergic function
Wong-Lin, KongFatt; Wang, Da-Hui; Moustafa, Ahmed A; Cohen, Jeremiah Y; Nakamura, Kae
2017-01-01
Despite its importance in regulating emotion and mental wellbeing, the complex structure and function of the serotonergic system present formidable challenges toward understanding its mechanisms. In this paper, we review studies investigating the interactions between serotonergic and related brain systems and their behavior at multiple scales, with a focus on biologically-based computational modeling. We first discuss serotonergic intracellular signaling and neuronal excitability, followed by neuronal circuit and systems levels. At each level of organization, we will discuss the experimental work accompanied by related computational modeling work. We then suggest that a multiscale modeling approach that integrates the various levels of neurobiological organization could potentially transform the way we understand the complex functions associated with serotonin. PMID:28417684
Integrating Computer Content into Social Work Curricula: A Model for Planning
ERIC Educational Resources Information Center
Beaulaurier, Richard L.
2005-01-01
While recent CSWE standards focus on the need for including more relevant technological content in social work curricula, they do not offer guidance regarding how it is to be assessed and selected. Social work educators are in need of an analytic model of computerization to help them understand which technologies are most appropriate and relevant…
2016-04-01
…cheap, disposable swarms of robots that can accomplish these tasks quickly and without much human supervision. While there has been a lot of work… have shown that swarms of robots so dumb that they have no computational power – they can't even add or subtract, and have no memory – can still collec… behaviors can be achieved using swarms of computation-free robots. Our work starts with the simple robot model proposed in [6] and adds a form of…
An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.
Optimization of corrosion control for lead in drinking water using computational modeling techniques
Computational modeling techniques have been used to very good effect in the UK in the optimization of corrosion control for lead in drinking water. A “proof-of-concept” project with three US/CA case studies sought to demonstrate that such techniques could work equally well in the...
Learning General Phonological Rules from Distributional Information: A Computational Model
ERIC Educational Resources Information Center
Calamaro, Shira; Jarosz, Gaja
2015-01-01
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony…
Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds
ERIC Educational Resources Information Center
Sun, Shaohui
2013-01-01
Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though there is a huge volume of work that has been done, many problems still remain…
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper-triangular transformed matrix of the system state matrix defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function that rewards small response deviation between the full- and reduced-order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.
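A GA fitness of the kind described, rewarding small step-response deviation between the full and reduced models, can be sketched for a toy SISO discrete system (the system matrices and candidate reduced models below are illustrative, not from the paper):

```python
# Full model: 2-state SISO discrete system x1' = 0.5 x1 + 0.1 x2 + u,
# x2' = 0.2 x2 + u, output y = x1.  Reduced candidate: x' = a x + b u, y = x.
def simulate_full(n):
    x1 = x2 = 0.0
    ys = []
    for _ in range(n):        # unit-step input u = 1
        ys.append(x1)
        x1, x2 = 0.5 * x1 + 0.1 * x2 + 1.0, 0.2 * x2 + 1.0
    return ys

def simulate_reduced(a, b, n):
    x = 0.0
    ys = []
    for _ in range(n):
        ys.append(x)
        x = a * x + b
    return ys

def fitness(a, b, n=30):
    # GA-style fitness: large when the step-response deviation is small.
    dev = sum((f - r) ** 2
              for f, r in zip(simulate_full(n), simulate_reduced(a, b, n)))
    return 1.0 / (1.0 + dev)

# A reduced model keeping the dominant pole 0.5 (DC gain matched, b = 1.125)
# scores higher than a DC-matched model with the wrong pole 0.9:
print(fitness(0.5, 1.125) > fitness(0.9, 0.225))  # True
```

A GA would then evolve the (a, b) parameters (or the transformed matrix entries, in the paper's formulation) to maximize this fitness.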
NASA Technical Reports Server (NTRS)
Young, Gerald W.; Clemons, Curtis B.
2004-01-01
The focus of this Cooperative Agreement between the Computational Materials Laboratory (CML) of the Processing Science and Technology Branch of the NASA Glenn Research Center (GRC) and the Department of Theoretical and Applied Mathematics at The University of Akron was in the areas of system development of the CML workstation environment, modeling of microgravity and earth-based material processing systems, and joint activities in laboratory projects. These efforts complement each other, as the majority of the modeling work involves numerical computations to support laboratory investigations. Coordination and interaction between the modelers, system analysts, and laboratory personnel are essential to providing the most effective simulations and communication of the simulation results. To these ends, The University of Akron personnel involved in the agreement worked at the Applied Mathematics Research Laboratory (AMRL) in the Department of Theoretical and Applied Mathematics while maintaining a close relationship with the personnel of the Computational Materials Laboratory at GRC. Network communication between both sites has been established. A summary of the projects we undertook during the time period 9/1/03 - 6/30/04 is included.
ERIC Educational Resources Information Center
Teo, T.; Lee, C. B.; Chai, C. S.
2008-01-01
Computers are increasingly widespread, influencing many aspects of our social and work lives. As we move into a technology-based society, it is important that classroom experiences with computers are made available for all students. The purpose of this study is to examine pre-service teachers' attitudes towards computers. This study extends the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Oishik, E-mail: oishik-sen@uiowa.edu; Gaul, Nicholas J., E-mail: nicholas-gaul@ramdosolutions.com; Choi, K.K., E-mail: kyung-choi@uiowa.edu
Macro-scale computations of shocked particulate flows require closure laws that model the exchange of momentum/energy between the fluid and particle phases. Closure laws are constructed in this work in the form of surrogate models derived from highly resolved mesoscale computations of shock-particle interactions. The mesoscale computations are performed to calculate the drag force on a cluster of particles for different values of Mach number and particle volume fraction. Two Kriging-based methods, viz. the Dynamic Kriging Method (DKG) and the Modified Bayesian Kriging Method (MBKG), are evaluated for their ability to construct surrogate models with sparse data, i.e. using the least number of mesoscale simulations. It is shown that if the input data is noise-free, the DKG method converges monotonically; convergence is less robust in the presence of noise. The MBKG method converges monotonically even with noisy input data and is therefore more suitable for surrogate model construction from numerical experiments. This work is the first step towards a full multiscale modeling of the interaction of shocked particle-laden flows.
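Kriging surrogates of the kind discussed above interpolate the mesoscale results exactly at the sampled points. A minimal zero-mean (simple) Kriging sketch in one dimension, with a Gaussian covariance and hypothetical drag data (the numbers are illustrative, not the paper's):

```python
from math import exp

def cov(r, length=1.0):
    # Gaussian (squared-exponential) covariance of two points a distance r apart.
    return exp(-(r / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting; adequate for small systems.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, xq):
    # Simple (zero-mean) Kriging: the weights w solve K w = k(xq);
    # the prediction is the weighted sum of the observed responses.
    K = [[cov(abs(a - b)) for b in xs] for a in xs]
    k = [cov(abs(a - xq)) for a in xs]
    w = solve(K, k)
    return sum(wi * yi for wi, yi in zip(w, ys))

# Hypothetical drag values sampled at three particle volume fractions:
xs = [0.1, 0.3, 0.5]
ys = [1.2, 2.0, 3.5]
print(round(kriging_predict(xs, ys, 0.3), 6))  # 2.0 — exact at a sampled point
```

The DKG and MBKG methods of the paper add adaptive sampling and noise handling on top of this basic interpolation machinery.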
NASA Astrophysics Data System (ADS)
Sanchez, Beatriz; Santiago, Jose Luis; Martilli, Alberto; Martin, Fernando; Borge, Rafael; Quaassdorff, Christina; de la Paz, David
2017-08-01
Air quality management requires detailed studies of air pollution at urban and local scales over long periods of time. This work focuses on obtaining the spatial distribution of NOx concentration averaged over several days in a heavily trafficked urban area of Madrid (Spain) using a computational fluid dynamics (CFD) model. A methodology based on a weighted average of CFD simulations is applied, computing the time evolution of NOx dispersion as a sequence of steady-state scenarios that take into account the actual atmospheric conditions. The emission inputs are estimated from a traffic emission model, and the meteorological information used is derived from a mesoscale model. Finally, the computed concentration map correlates well with 72 passive samplers deployed in the research area. This work reveals the potential of combining urban mesoscale simulations with detailed traffic emissions to provide accurate microscale maps of pollutant concentration using CFD simulations.
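The weighted-average step can be sketched as follows: each steady-state CFD concentration map is weighted by how often its meteorological scenario occurs over the averaging period (the maps and frequencies below are illustrative, not the study's data):

```python
# Each steady-state CFD run yields a concentration map for one wind scenario;
# the long-term map is the frequency-weighted average of those maps.
def weighted_average_map(maps, weights):
    total = sum(weights)
    ny, nx = len(maps[0]), len(maps[0][0])
    return [[sum(w * m[j][i] for m, w in zip(maps, weights)) / total
             for i in range(nx)] for j in range(ny)]

# Two illustrative 2x2 NOx maps for two wind directions,
# observed 30% and 70% of the hours respectively:
map_n = [[4.0, 2.0], [1.0, 0.5]]
map_s = [[0.5, 1.0], [2.0, 4.0]]
avg = weighted_average_map([map_n, map_s], [0.3, 0.7])
print(round(avg[0][0], 2))  # 1.55 = 0.3*4.0 + 0.7*0.5
```

In the actual methodology the weights come from the mesoscale model's frequency of meteorological conditions, and each map is further scaled by the time-varying traffic emissions.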
Integrating Cloud-Computing-Specific Model into Aircraft Design
NASA Astrophysics Data System (ADS)
Zhimin, Tian; Qi, Lin; Guangwen, Yang
Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics) tools, UG, CATIA, and so on.
Reverse logistics system planning for recycling computers hardware: A case study
NASA Astrophysics Data System (ADS)
Januri, Siti Sarah; Zulkipli, Faridah; Zahari, Siti Meriam; Shamsuri, Siti Hajar
2014-09-01
This paper describes the modeling and simulation of reverse logistics networks for the collection of used computers at a company in Selangor. The study focuses on the design of a reverse logistics network for a used-computer recycling operation. The simulation modeling presented in this work allows the user to analyze the future performance of the network and to understand the complex relationships between the parties involved. The findings from the simulation suggest that the model calculates processing time and resource utilization in a predictable manner. In this study, the simulation model was developed using the Arena simulation package.
A 4-cylinder Stirling engine computer program with dynamic energy equations
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Lorenzo, C. F.
1983-01-01
A computer program for simulating the steady state and transient performance of a four cylinder Stirling engine is presented. The thermodynamic model includes both continuity and energy equations and linear momentum terms (flow resistance). Each working space between the pistons is broken into seven control volumes. Drive dynamics and vehicle load effects are included. The model contains 70 state variables. Also included in the model are piston rod seal leakage effects. The computer program includes a model of a hydrogen supply system, from which hydrogen may be added to the system to accelerate the engine. Flow charts are provided.
Damsel: A Data Model Storage Library for Exascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koziol, Quincey
The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.
Sharing Research Models: Using Software Engineering Practices for Facilitation
Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.
2011-01-01
Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems' behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations (such as nonintuitive user interface features and data input specifications) may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices (the iterative software development process, object-oriented methodology, and Unified Modeling Language) and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780
The control of a manipulator by a computer model of the cerebellum.
NASA Technical Reports Server (NTRS)
Albus, J. S.
1973-01-01
Extension of previous work by Albus (1971, 1972) on the theory of cerebellar function to an application of a computer model of the cerebellum to manipulator control. Following a discussion of the cerebellar function and of a perceptron analogy of the cerebellum, particularly in regard to learning, an electromechanical model of the cerebellum is considered in the form of an IBM 1800 computer connected to a Rancho Los Amigos arm with seven degrees of freedom. It is shown that the computer memory makes it possible to train the arm on some representative sample of the universe of possible states and to achieve satisfactory performance.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use agent-based models (ABM) to optimize the large-scale network handling capabilities of large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab to simulate agent-based models in a principal programming language with mathematical theory; these run on clusters, which serve as a high-performance computing platform to execute the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Molecular Sticker Model Stimulation on Silicon for a Maximum Clique Problem
Ning, Jianguo; Li, Yanmei; Yu, Wen
2015-01-01
Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method on the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium—transistor chips rather than bio-molecules—the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing sources on SOPC architecture, the DEM could solve moderate-size problems in polynomial time. PMID:26075867
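The bit-vertical, exhaustive style of the stickers model can be mimicked on a conventional machine with adjacency bitmasks; a brute-force sketch for an eight-vertex instance (the edge list is illustrative, not the paper's graph):

```python
from itertools import combinations

# 8-vertex graph stored as adjacency bitmasks (edges chosen for illustration).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
         (4, 5), (5, 6), (6, 7), (4, 6)]
n = 8
adj = [0] * n
for u, v in edges:
    adj[u] |= 1 << v
    adj[v] |= 1 << u

def is_clique(vertices):
    # Every pair in the subset must be joined by an edge.
    return all(adj[u] >> v & 1 for u, v in combinations(vertices, 2))

# Exhaustively score every vertex subset, mirroring the stickers model's
# massive parallelism over all 2^n candidate solutions (done serially here).
best = max((vs for r in range(n + 1) for vs in combinations(range(n), r)
            if is_clique(vs)), key=len)
print(sorted(best))  # [0, 1, 2] — a maximum clique of size 3
```

The exponential 2^n candidate space is exactly why the stickers model (and the DEM hardware described above) leans on parallel evaluation; this serial sketch is only practical for small n.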
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high-performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.
Beyond Theory: Improving Public Relations Writing through Computer Technology.
ERIC Educational Resources Information Center
Neff, Bonita Dostal
Computer technology (primarily word processing) enables the student of public relations writing to improve the writing process through increased flexibility in writing, enhanced creativity, and increased support of management skills and teamwork. A new instructional model for computer use in public relations courses at Purdue University Calumet…
Brain-Based Devices for Neuromorphic Computer Systems
2013-07-01
…and Deco, G. (2012). Effective Visual Working Memory Capacity: An Emergent Effect from the Neural Dynamics in an Attractor Network. PLoS ONE 7, e42719. …models, apply them to a recognition task, and to demonstrate a working memory. In the course of this work a new analytical method for spiking data was… 3.4 Spiking Neural Model Simulation of Working Memory; 3.5 A Novel Method for Analysis…
Work and power analysis of the golf swing.
Nesbit, Steven M; Serrano, Monika
2005-12-01
A work and power (energy) analysis of the golf swing is presented as a method for evaluating the mechanics of the golf swing. Two computer models were used to estimate the energy production, transfers, and conversions within the body and the golf club by employing standard methods of mechanics to calculate work of forces and torques, kinetic energies, strain energies, and power during the golf swing. A detailed model of the golf club determined the energy transfers and conversions within the club during the downswing. A full-body computer model of the golfer determined the internal work produced at the body joints during the downswing. Four diverse amateur subjects were analyzed and compared using these two models. The energy approach yielded new information on swing mechanics, determined the force and torque components that accelerated the club, illustrated which segments of the body produced work, determined the timing of internal work generation, measured swing efficiencies, calculated shaft energy storage and release, and proved that forces and range of motion were equally important in developing club head velocity. A more comprehensive description of the downswing emerged from information derived from an energy-based analysis. Key points: full-body model of the golf swing; energy analysis of the golf swing; work of the body joints during the golf swing; comparisons of subject work and power characteristics.
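The quantities this analysis rests on, work of joint torques and kinetic energies, reduce to simple numerical integration; a minimal sketch with illustrative numbers (not the study's measured data):

```python
# Work done by a joint torque over the downswing: W = integral of P = tau*omega
# (trapezoid rule), and club-head kinetic energy E = 1/2 m v^2.
def torque_work(torques, omegas, dt):
    """Integrate joint power P = tau * omega over time to get mechanical work."""
    powers = [t * w for t, w in zip(torques, omegas)]
    return sum(0.5 * (powers[i] + powers[i + 1]) * dt
               for i in range(len(powers) - 1))

def kinetic_energy(mass, speed):
    return 0.5 * mass * speed ** 2

# A constant 40 N·m torque while angular velocity ramps 0 -> 20 rad/s over 0.2 s:
n = 21
dt = 0.2 / (n - 1)
torques = [40.0] * n
omegas = [20.0 * i / (n - 1) for i in range(n)]
print(round(torque_work(torques, omegas, dt), 2))  # 80.0 J (40 N·m x avg 10 rad/s x 0.2 s)
print(kinetic_energy(0.2, 40.0))                   # 160.0 J for a 0.2 kg club head at 40 m/s
```

Summing such joint-work terms over all joints, and comparing them with the club's kinetic and shaft strain energies, is the kind of energy bookkeeping the abstract describes.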
Harris, Courtenay; Straker, Leon; Pollock, Clare; Smith, Anne
2015-01-01
Children's computer use is rapidly growing, together with reports of related musculoskeletal outcomes. Models and theories developed for adults demonstrate multivariate risk factors associated with computer use, but children's use of computers differs from adults' computer use at work. This study developed and tested a child-specific model demonstrating multivariate relationships between musculoskeletal outcomes, computer exposure and child factors. Using pathway modelling, factors such as gender, age, television exposure, computer anxiety, sustained attention (flow), socio-economic status and somatic complaints (headache and stomach pain) were found to have effects on children's reports of musculoskeletal symptoms. The potential for children's computer exposure to follow a dose-response relationship was also evident. Developing a child-specific model can assist in understanding risk factors for children's computer use and support the development of recommendations that encourage children to use this valuable resource safely. Computer use is an important part of children's school and home life. Application of the developed model, which encapsulates the related risk factors, enables practitioners, researchers, teachers and parents to develop strategies that assist young people to use information technology for school, home and leisure in a safe and productive manner.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Rogers, David
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Rogers, David
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
Taming Many-Parameter BSM Models with Bayesian Neural Networks
NASA Astrophysics Data System (ADS)
Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.
2017-09-01
The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.
Selective updating of working memory content modulates meso-cortico-striatal activity.
Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S
2011-08-01
Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.
Modeling individual differences in working memory performance: a source activation account
Daily, Larry Z.; Lovett, Marsha C.; Reder, Lynne M.
2008-01-01
Working memory resources are needed for processing and maintenance of information during cognitive tasks. Many models have been developed to capture the effects of limited working memory resources on performance. However, most of these models do not account for the finding that different individuals show different sensitivities to working memory demands, and none of the models predicts individual subjects' patterns of performance. We propose a computational model that accounts for differences in working memory capacity in terms of a quantity called source activation, which is used to maintain goal-relevant information in an available state. We apply this model to capture the working memory effects of individual subjects at a fine level of detail across two experiments. This, we argue, strengthens the interpretation of source activation as working memory capacity. PMID:19079561
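The source-activation account can be caricatured in a few lines: a fixed supply of source activation W (the model's capacity parameter) is divided among the items relevant to the current goal, so each item's total activation, and hence its availability, falls as more items compete. The sketch below is a minimal reading of that idea with made-up numbers, not the authors' ACT-R implementation:

```python
def activation(base, W, n_goal_items):
    # Total source activation W is split evenly among the n items
    # currently relevant to the goal (a simplifying assumption of
    # this sketch; the published model is more detailed).
    return base + W / n_goal_items

# A higher-capacity individual (larger W) keeps items more active
# when the task demands maintaining several of them at once.
low_capacity = activation(base=0.5, W=0.7, n_goal_items=4)
high_capacity = activation(base=0.5, W=1.1, n_goal_items=4)
```

Individual differences are then modeled by fitting W per subject, with higher-W subjects degrading less as working memory demand grows.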
A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.
Nichols, David F
2015-01-01
Computational simulations allow for a low-cost, reliable means to demonstrate complex and often inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network were used in the undergraduate laboratory component of an introductory neuroscience course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically, from a majority of students being uncomfortable with, or neutral about, working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring the computational skills that are required to address many questions within neuroscience.
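Of the three labs named above, the integrate-and-fire model is the smallest; a few lines suffice for the whole exercise. The sketch below uses Python as a stand-in for the MATLAB used in the course, with illustrative parameter values:

```python
import numpy as np

def simulate_lif(I=1.5, tau=10.0, v_th=1.0, v_reset=0.0,
                 dt=0.1, t_max=100.0):
    """Euler integration of a leaky integrate-and-fire neuron,
    tau * dV/dt = -V + I.  Returns spike times in ms.  Parameter
    values are illustrative, not taken from the labs described above."""
    v, spikes = 0.0, []
    for step in range(int(t_max / dt)):
        v += dt / tau * (-v + I)        # leaky integration toward I
        if v >= v_th:                   # threshold crossing = spike
            spikes.append(step * dt)
            v = v_reset                 # reset after the spike
    return spikes

spikes = simulate_lif()                 # suprathreshold drive: regular firing
```

With I below threshold (e.g. I=0.5) the voltage settles under v_th and the model never fires, which is exactly the kind of qualitative behavior such labs ask students to explore.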
Bayesian Latent Class Analysis Tutorial.
Li, Yuelin; Lord-Bessen, Jennifer; Shiyko, Mariya; Loeb, Rebecca
2018-01-01
This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes Theorem and conditional probability and have experience in writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than what is typically available in elementary introductions to Bayesian modeling so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied in a large, real-world data set and explained line-by-line. We outline the general steps in how to extend these considerations to other methodological applications. We conclude with suggestions for further readings.
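The tutorial's strategy of breaking Bayesian computation into a series of simple conditional draws can be sketched for a two-class LCA with binary items. This is a hypothetical Python analogue of the kind of Gibbs sampler the article builds in R, with conjugate Beta/Dirichlet priors, synthetic data, and illustrative sizes, not the authors' program:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two latent classes, 4 binary items (illustrative sizes).
true_p = np.array([[0.9, 0.9, 0.1, 0.1],
                   [0.1, 0.1, 0.9, 0.9]])
z_true = rng.integers(0, 2, size=500)
y = (rng.random((500, 4)) < true_p[z_true]).astype(int)

def gibbs_lca(y, n_iter=500):
    """Simple Gibbs sampler for a 2-class LCA with Beta(1,1) and
    Dirichlet(1,1) priors -- a sketch of the sampler family the
    tutorial describes, not the authors' R code."""
    n, J = y.shape
    p = rng.random((2, J))            # item-response probabilities
    pi = np.array([0.5, 0.5])         # class weights
    for _ in range(n_iter):
        # 1. class memberships given parameters
        like = (p[:, None, :] ** y) * ((1 - p[:, None, :]) ** (1 - y))
        post = pi[:, None] * like.prod(axis=2)       # shape (2, n)
        post /= post.sum(axis=0)
        z = (rng.random(n) < post[1]).astype(int)
        # 2. class weights given memberships (Dirichlet posterior)
        n1 = z.sum()
        pi = rng.dirichlet([1 + n - n1, 1 + n1])
        # 3. item probabilities given memberships (Beta posteriors)
        for c in (0, 1):
            yc = y[z == c]
            p[c] = rng.beta(1 + yc.sum(axis=0), 1 + (1 - yc).sum(axis=0))
    return pi, p

pi_hat, p_hat = gibbs_lca(y)
```

Note that latent-class labels can switch between runs, so summaries of p_hat must allow for a permutation of the two rows, a point elementary tutorials on LCA usually have to address.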
Development of PIMAL: Mathematical Phantom with Moving Arms and Legs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akkurt, Hatice; Eckerman, Keith F.
2007-05-01
The computational model of the human anatomy (phantom) has gone through many revisions since its initial development in the 1970s. The computational phantom model currently used by the Nuclear Regulatory Commission (NRC) is based on a model published in 1974. Hence, the phantom model used by the NRC staff was missing some organs (e.g., neck, esophagus) and tissues. Further, locations of some organs were inappropriate (e.g., thyroid). Moreover, all the computational phantoms were assumed to be in the vertical-upright position. However, many occupational radiation exposures occur with the worker in other positions. In the first phase of this work, updates on the computational phantom models were reviewed and a revised phantom model, which includes the updates for the relevant organs and compositions, was identified. This revised model was adopted as the starting point for this development work, and hence a series of radiation transport computations, using the Monte Carlo code MCNP5, was performed. The computational results were compared against values reported by the International Commission on Radiological Protection (ICRP) in Publication 74. For some of the organs (e.g., thyroid), there were discrepancies between the computed values and the results reported in ICRP-74. The reasons behind these discrepancies have been investigated and are discussed in this report. Additionally, sensitivity computations were performed to determine the sensitivity of the organ doses to certain parameters, including the composition and cross sections used in the simulations. To assess the dose for more realistic exposure configurations, the phantom model was revised to enable flexible positioning of the arms and legs. Furthermore, to reduce the user time for analyses, a graphical user interface (GUI) was developed.
The GUI can be used to visualize the positioning of the arms and legs until the desired posture is achieved, to generate the input file, to invoke the computations, and to extract the organ dose values from the MCNP5 output file. In this report, the main features of the phantom model with moving arms and legs and of the user interface are described.
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
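For intuition, the quantity being computed here, the first-passage-time density, can also be estimated by brute-force simulation of the noisy leaky integrator; the integral-equation method of the paper replaces exactly this kind of expensive Monte Carlo with a direct numerical solution. A sketch with illustrative parameters:

```python
import numpy as np

def first_passage_times(I=0.8, sigma=0.5, tau=10.0, v_th=1.0,
                        dt=0.1, t_max=200.0, n_trials=2000, seed=1):
    """Monte Carlo estimate of the first-passage-time distribution of
    a noisy leaky integrator, tau dV = (-V + I) dt + sigma dW.  The
    paper computes this density with an integral equation; this sketch
    just simulates trajectories (all parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    v = np.zeros(n_trials)                  # one voltage per trial
    t_cross = np.full(n_trials, np.nan)     # first threshold crossing
    for step in range(n_steps):
        noise = sigma * np.sqrt(dt / tau) * rng.standard_normal(n_trials)
        v += dt / tau * (-v + I) + noise
        # record only the FIRST crossing for each trial
        newly = np.isnan(t_cross) & (v >= v_th)
        t_cross[newly] = (step + 1) * dt
    return t_cross[~np.isnan(t_cross)]

times = first_passage_times()   # histogram of `times` approximates the density
```

A histogram of the returned times approximates the first-passage density; the likelihood-based fitting in the paper needs this density evaluated accurately and differentiably, which is what motivates the integral-equation approach.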
This presentation will cover work at EPA under the CSS program for: (1) Virtual Tissue Models built from the known biology of an embryological system and structured to recapitulate key cell signals and responses; (2) running the models with real (in vitro) or synthetic (in silico...
Development of a Cross-Flow Fan Rotor for Vertical Take-Off and Landing Aircraft
2013-06-01
ANSYS CFX, along with the commercial computer-aided design software SolidWorks, was used to model and perform a parametric study on the number of rotor... the results found using ANSYS CFX. The experimental and analytical models were successfully compared at speeds ranging from 4,000 to 7,000 RPM... will make vertical take-off possible. The commercial computational fluid dynamics software ANSYS CFX, along with the commercial computer-aided design
Computational modeling of the cell-autonomous mammalian circadian oscillator.
Podkolodnaya, Olga A; Tverdokhleb, Natalya N; Podkolodnyy, Nikolay L
2017-02-24
This review summarizes various mathematical models of the cell-autonomous mammalian circadian clock. We present the basics necessary for understanding the cell-autonomous mammalian circadian oscillator, modern experimental data essential for its reconstruction, and some special problems related to the validation of mathematical circadian oscillator models. This work compares existing mathematical models of the circadian oscillator and the results of computational studies of the oscillating systems. Finally, we discuss applications of the mathematical models of the mammalian circadian oscillator for solving specific problems in circadian rhythm biology.
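The common ancestor of the model family reviewed here is the Goodwin oscillator, a three-variable negative-feedback loop: mRNA (X) drives a protein (Y), which drives a repressor (Z), which shuts off transcription of X. A minimal Euler-integrated sketch, with illustrative rate constants rather than values from any surveyed model:

```python
import numpy as np

def goodwin(t_max=240.0, dt=0.01, n=10.0):
    """Euler integration of a Goodwin-type negative-feedback loop, the
    minimal ancestor of the circadian models this review surveys.
    A high Hill coefficient n is needed for sustained oscillation in
    this bare three-variable form; rate constants are illustrative."""
    x, y, z = 0.1, 0.1, 0.1
    traj = []
    for _ in range(int(t_max / dt)):
        dx = 1.0 / (1.0 + z ** n) - 0.2 * x   # repressible transcription
        dy = x - 0.2 * y                       # translation
        dz = y - 0.2 * z                       # repressor production
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj.append(x)
    return np.array(traj)

x_t = goodwin()   # mRNA concentration over time
```

The biologically detailed models compared in the review add interlocked feedback loops, phosphorylation steps, and delays on top of this skeleton, which is partly what makes their validation, as discussed above, nontrivial.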
Dynamic modeling of parallel robots for computed-torque control implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Codourey, A.
1998-12-01
In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
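The computed-torque law the paper targets is easiest to see on a toy plant. The sketch below applies tau = M*(al_d + Kd*e' + Kp*e) + h(th, th') to a single-link arm rather than the DELTA robot; with an exact model the nonlinear terms cancel and the tracking error obeys linear, exponentially stable dynamics. All parameters are illustrative:

```python
import numpy as np

def simulate_computed_torque(t_max=5.0, dt=0.001):
    """Computed-torque tracking for a single-link arm,
    I*th'' + b*th' + m*g*l*sin(th) = tau -- a toy stand-in for the
    parallel robot in the paper, meant only to show the control law."""
    I, b, m, g, l = 0.5, 0.1, 1.0, 9.81, 0.3   # illustrative plant
    Kp, Kd = 100.0, 20.0                        # feedback gains
    th, om = 0.0, 0.0
    errs = []
    for step in range(int(t_max / dt)):
        t = step * dt
        th_d, om_d, al_d = np.sin(t), np.cos(t), -np.sin(t)  # desired motion
        e, edot = th_d - th, om_d - om
        h = b * om + m * g * l * np.sin(th)      # nonlinear/velocity terms
        tau = I * (al_d + Kd * edot + Kp * e) + h  # computed-torque law
        al = (tau - h) / I                       # plant: exact cancellation
        om += dt * al
        th += dt * om
        errs.append(abs(e))
    return np.array(errs)

errs = simulate_computed_torque()
```

The point of the paper is precisely that evaluating the M and h terms fast enough for this loop is hard for parallel kinematics, which is what the virtual-work formulation addresses.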
A Suggested Model for a Working Cyberschool.
ERIC Educational Resources Information Center
Javid, Mahnaz A.
2000-01-01
Suggests a model for a working cyberschool based on a case study of Kamiak Cyberschool (Washington), a technology-driven public high school. Topics include flexible hours; one-to-one interaction with teachers; a supportive school environment; use of computers, interactive media, and online resources; and self-paced, project-based learning.…
Lijun Liu; V. Missirian; Matthew S. Zinkgraf; Andrew Groover; V. Filkov
2014-01-01
Background: One of the great advantages of next generation sequencing is the ability to generate large genomic datasets for virtually all species, including non-model organisms. It should be possible, in turn, to apply advanced computational approaches to these datasets to develop models of biological processes. In a practical sense, working with non-model organisms...
NASA Technical Reports Server (NTRS)
Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.
2017-01-01
In this work, the fiber kinking phenomenon, which is known as the failure mechanism that takes place when a fiber reinforced polymer is loaded under longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.
2000-04-19
Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.
Longitudinal train dynamics: an overview
NASA Astrophysics Data System (ADS)
Wu, Qing; Spiryagin, Maksym; Cole, Colin
2016-12-01
This paper discusses the evolution of longitudinal train dynamics (LTD) simulations, which covers numerical solvers, vehicle connection systems, air brake systems, wagon dumper systems and locomotives, resistance forces and gravitational components, vehicle in-train instabilities, and computing schemes. A number of potential research topics are suggested, such as modelling of friction, polymer, and transition characteristics for vehicle connection simulations, studies of wagon dumping operations, proper modelling of vehicle in-train instabilities, and computing schemes for LTD simulations. Evidence shows that LTD simulations have evolved with computing capabilities. Currently, advanced component models that directly describe the working principles of the operation of air brake systems, vehicle connection systems, and traction systems are available. Parallel computing is a good solution to combine and simulate all these advanced models. Parallel computing can also be used to conduct three-dimensional long train dynamics simulations.
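A minimal LTD simulation of the kind this survey discusses needs only vehicles coupled by connection elements, a traction force, and resistance. The sketch below uses linear spring-damper couplers and a crude Davis-type resistance; real LTD codes replace these with the advanced nonlinear component models described above, and every number here is illustrative:

```python
import numpy as np

def simulate_train(n_veh=5, t_max=20.0, dt=0.001):
    """Toy longitudinal train dynamics: vehicles in a row coupled by
    linear spring-damper connections, the lead vehicle applying a
    constant traction force against a speed-dependent resistance."""
    m = 80e3                             # vehicle mass, kg (illustrative)
    k, c = 5e6, 1e5                      # coupler stiffness N/m, damping N.s/m
    F_trac = 100e3                       # locomotive traction, N
    L = 20.0                             # natural coupler length, m
    x = np.arange(n_veh, 0, -1) * L      # positions, lead vehicle in front
    v = np.zeros(n_veh)
    for _ in range(int(t_max / dt)):
        f = np.zeros(n_veh)
        f[0] += F_trac                   # traction on the lead vehicle
        f -= 1e3 + 50.0 * v              # crude Davis-type resistance
        for i in range(n_veh - 1):       # coupler between vehicle i and i+1
            stretch = (x[i] - x[i + 1]) - L
            fc = k * stretch + c * (v[i] - v[i + 1])
            f[i] -= fc                   # coupler pulls trailing vehicles
            f[i + 1] += fc
        v += dt * f / m
        x += dt * v
    return x, v

x, v = simulate_train()
```

The inner coupler loop is the part that parallelizes naturally, which is why the survey points to parallel computing for long trains and three-dimensional extensions.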
User's manual for the REEDM (Rocket Exhaust Effluent Diffusion Model) computer program
NASA Technical Reports Server (NTRS)
Bjorklund, J. R.; Dumbauld, R. K.; Cheney, C. S.; Geary, H. V.
1982-01-01
The REEDM computer program predicts concentrations, dosages, and depositions downwind from normal and abnormal launches of rocket vehicles at NASA's Kennedy Space Center. The atmospheric dispersion models, cloud-rise models, and other formulas used in the REEDM model are described mathematically. Vehicle and source parameters, other pertinent physical properties of the rocket exhaust cloud, and meteorological layering techniques are presented, as well as user's instructions for REEDM. Worked example problems are included.
Evaluation of a Computational Model of Situational Awareness
NASA Technical Reports Server (NTRS)
Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)
2000-01-01
Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.
Power combining in an array of microwave power rectifiers
NASA Technical Reports Server (NTRS)
Gutmann, R. J.; Borrego, J. M.
1979-01-01
This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.
Computer Simulation of an Electric Trolley Bus
DOT National Transportation Integrated Search
1979-12-01
This report describes a computer model developed at the Transportation Systems Center (TSC) to simulate power/propulsion characteristics of an urban trolley bus. The work conducted in this area is sponsored by the Urban Mass Transportation Administra...
Introducing Managers to Expert Systems.
ERIC Educational Resources Information Center
Finlay, Paul N.; And Others
1991-01-01
Describes a short course to expose managers to expert systems, consisting of (1) introductory lecture; (2) supervised computer tutorial; (3) lecture and discussion about knowledge structuring and modeling; and (4) small group work on a case study using computers. (SK)
Investigation of computational aeroacoustic tools for noise predictions of wind turbine aerofoils
NASA Astrophysics Data System (ADS)
Humpf, A.; Ferrer, E.; Munduate, X.
2007-07-01
In this work, trailing edge noise levels of a research aerofoil have been computed and compared to aeroacoustic measurements using two methods of different computational cost, yielding quantitative and qualitative results. On the one hand, the semi-empirical noise prediction tool NAFNoise [Moriarty P 2005 NAFNoise User's Guide, Golden, Colorado, July. http://wind.nrel.gov/designcodes/simulators/NAFNoise] was used to directly predict trailing edge noise by taking into consideration the nature of the experiments. On the other hand, aerodynamic and aeroacoustic calculations were performed with the full Navier-Stokes CFD code Fluent [Fluent Inc 2005 Fluent 6.2 Users Guide, Lebanon, NH, USA] on the basis of a steady RANS simulation. Aerodynamic characteristics were computed with the aid of various turbulence models, and the implemented broadband noise source models were used in combination to isolate and determine the trailing edge noise level.
Towards Image Documentation of Grave Coverings and Epitaphs for Exhibition Purposes
NASA Astrophysics Data System (ADS)
Pomaska, G.; Dementiev, N.
2015-08-01
Epitaphs and memorials as immovable items in sacred spaces provide with their inscriptions valuable documents of history. Today not only photography is suitable as presentation material for cultural assets in museums: computer vision and photogrammetry provide methods for recording, 3D modelling, rendering under artificial light conditions, as well as further options for the analysis and investigation of artistry. For exhibition purposes, epitaphs have been recorded with the structure-from-motion (SFM) method. A comparison of different SFM software distributions was carried out, and the suitability of open-source software in the mesh-processing chain, from modelling up to display on computer monitors, is assessed. A Raspberry Pi, a computer in SoC technology, works as a media server under Linux running Python scripts. Will the little computer meet the requirements of a museum, and is the handling comfortable enough for staff and visitors? This contribution reports on the case study.
Statistics, Computation, and Modeling in Cosmology
NASA Astrophysics Data System (ADS)
Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology
2017-01-01
Current and future ground- and space-based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams.
The group will use subsampling approaches and fractional factorial designs to statistically and computationally efficiently explore the Galacticus parameter space. The group will also use the Galacticus simulations to study the relationship between the topological and physical structure of the halo merger trees and the properties of the resulting galaxies.
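The MCMC algorithms studied by the working group are elaborate, but they share a core that fits in a dozen lines: random-walk Metropolis. The sketch below applies it to a toy source-separation-flavored problem, inferring the amplitude of a known template buried in white noise; everything here is illustrative and is not the group's code:

```python
import numpy as np

def metropolis(logpost, x0, step, n_iter, seed=0):
    """Plain random-walk Metropolis -- the simplest member of the MCMC
    family; `logpost` is any log-posterior function of a scalar."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_iter):
        xp = x + step * rng.standard_normal()   # symmetric proposal
        lpp = logpost(xp)
        if np.log(rng.random()) < lpp - lp:     # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return np.array(chain)

# Toy inference: amplitude of a known template in white noise.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 2 * np.pi, 100))
data = 2.5 * template + 0.3 * rng.standard_normal(100)
logpost = lambda a: -0.5 * np.sum((data - a * template) ** 2) / 0.3 ** 2
chain = metropolis(logpost, x0=0.0, step=0.1, n_iter=5000)
```

The CMB problems described above replace the scalar amplitude with an entire Gaussian random field plus foreground components, which is what drives the need for the advanced samplers the group develops.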
NASA Astrophysics Data System (ADS)
Erdt, Marius; Sakas, Georgios
2010-03-01
This work presents a novel approach for model-based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable-model approach based on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. These local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: 1) user-guided positioning and 2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases that show pronounced pathologies, the system also offers real-time mesh-editing tools for a quick refinement of the segmentation result. Evaluation on 30 clinical CT data sets shows an average Dice coefficient of 93% compared to the ground truth; the results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are below 6 seconds, which makes the proposed system suitable for application in clinical practice.
A novel patient-specific model to compute coronary fractional flow reserve.
Kwon, Soon-Sung; Chung, Eui-Chul; Park, Jin-Seo; Kim, Gook-Tae; Kim, Jun-Woo; Kim, Keun-Hong; Shin, Eun-Seok; Shim, Eun Bo
2014-09-01
The fractional flow reserve (FFR) is a widely used clinical index to evaluate the functional severity of coronary stenosis. A computer simulation method based on patients' computed tomography (CT) data is a plausible non-invasive approach for computing the FFR. This method can provide a detailed solution for the stenosed coronary hemodynamics by coupling computational fluid dynamics (CFD) with the lumped parameter model (LPM) of the cardiovascular system. In this work, we have implemented a simple computational method to compute the FFR. As this method uses only coronary arteries for the CFD model and includes only the LPM of the coronary vascular system, it provides simpler boundary conditions for the coronary geometry and is computationally more efficient than existing approaches. To test the efficacy of this method, we simulated a three-dimensional straight vessel using CFD coupled with the LPM. The computed results were compared with those of the LPM. To validate this method in terms of clinically realistic geometry, a patient-specific model of stenosed coronary arteries was constructed from CT images, and the computed FFR was compared with clinically measured results. We evaluated the effect of a model aorta on the computed FFR and compared this with a model without the aorta. Computationally, the model without the aorta was more efficient than that with the aorta, reducing the CPU time required for computing a cardiac cycle to 43.4%. Copyright © 2014. Published by Elsevier Ltd.
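The index itself is a simple ratio once the pressures are available: FFR is the mean distal coronary pressure divided by the mean aortic pressure under hyperemia. The hard part, which the paper solves with CT-based CFD coupled to a lumped-parameter model, is obtaining the distal pressure. A sketch with made-up pressure traces and an illustrative quadratic stenosis pressure-drop law:

```python
def fractional_flow_reserve(p_aortic, p_distal):
    """Clinical definition used in the paper: FFR = mean distal coronary
    pressure / mean aortic pressure under hyperemia.  The paper obtains
    p_distal from simulation; here the traces are just sampled values."""
    return (sum(p_distal) / len(p_distal)) / (sum(p_aortic) / len(p_aortic))

def stenosis_drop(q, A=0.1, B=0.01):
    # Toy stenosis pressure drop: viscous plus separation losses,
    # dP = A*q + B*q**2, with purely illustrative coefficients.
    return A * q + B * q * q

pa = [100.0, 98.0, 102.0]                          # aortic samples, mmHg
pd = [p - stenosis_drop(q=40.0) for p in pa]       # distal samples, mmHg
ffr = fractional_flow_reserve(pa, pd)              # -> 0.8 for these numbers
```

In clinical practice an FFR at or below about 0.80 is commonly taken to indicate a functionally significant stenosis, which is why the non-invasive computation the paper describes is attractive.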
Working Memory and Decision-Making in a Frontoparietal Circuit Model.
Murray, John D; Jaramillo, Jorge; Wang, Xiao-Jing
2017-12-13
Working memory (WM) and decision-making (DM) are fundamental cognitive functions involving a distributed interacting network of brain areas, with the posterior parietal cortex (PPC) and prefrontal cortex (PFC) at the core. However, the shared and distinct roles of these areas and the nature of their coordination in cognitive function remain poorly understood. Biophysically based computational models of cortical circuits have provided insights into the mechanisms supporting these functions, yet they have primarily focused on the local microcircuit level, raising questions about the principles for distributed cognitive computation in multiregional networks. To examine these issues, we developed a distributed circuit model of two reciprocally interacting modules representing PPC and PFC circuits. The circuit architecture includes hierarchical differences in local recurrent structure and implements reciprocal long-range projections. This parsimonious model captures a range of behavioral and neuronal features of frontoparietal circuits across multiple WM and DM paradigms. In the context of WM, both areas exhibit persistent activity, but, in response to intervening distractors, PPC transiently encodes distractors while PFC filters distractors and supports WM robustness. With regard to DM, the PPC module generates graded representations of accumulated evidence supporting target selection, while the PFC module generates more categorical responses related to action or choice. These findings suggest computational principles for distributed, hierarchical processing in cortex during cognitive function and provide a framework for extension to multiregional models. SIGNIFICANCE STATEMENT Working memory and decision-making are fundamental "building blocks" of cognition, and deficits in these functions are associated with neuropsychiatric disorders such as schizophrenia. 
These cognitive functions engage distributed networks with prefrontal cortex (PFC) and posterior parietal cortex (PPC) at the core. It is not clear, however, what the contributions of PPC and PFC are in light of the computations that subserve working memory and decision-making. We constructed a biophysical model of a reciprocally connected frontoparietal circuit that revealed shared and distinct functions for the PFC and PPC across working memory and decision-making tasks. Our parsimonious model connects circuit-level properties to cognitive functions and suggests novel design principles beyond those of local circuits for cognitive processing in multiregional brain networks. Copyright © 2017 the authors 0270-6474/17/3712167-20$15.00/0.
Reduction of collisional-radiative models for transient, atomic plasmas
NASA Astrophysics Data System (ADS)
Abrantes, Richard June; Karagozian, Ann; Bilyeu, David; Le, Hai
2017-10-01
Interactions between plasmas and any radiation field, whether from lasers or from plasma emission, introduce many computational challenges. One of these challenges involves resolving the atomic physics, which can influence other physical phenomena in the radiated system. In this work, a collisional-radiative (CR) model with reduction capabilities is developed to capture the atomic physics at reduced computational cost. Although the model is designed to handle any element, it is currently supplemented by LANL's argon database, which includes the relevant collisional and radiative processes for all of the ionic stages. Using the detailed data set as the true solution, reduction mechanisms in the form of Boltzmann grouping, uniform grouping, and quasi-steady-state (QSS) are implemented and compared against the true solution, and the effects of the grouping methods on the transient plasma are compared. Distribution A: Approved for public release; unlimited distribution, PA (Public Affairs) Clearance Number 17449. This work was supported by the Air Force Office of Scientific Research (AFOSR), Grant Number 17RQCOR463 (Dr. Jason Marshall).
Are Academic Programs Adequate for the Software Profession?
ERIC Educational Resources Information Center
Koster, Alexis
2010-01-01
According to the Bureau of Labor Statistics, close to 1.8 million people, or 77% of all computer professionals, were working in the design, development, deployment, maintenance, and management of software in 2006. The ACM [Association for Computing Machinery] model curriculum for the BS in computer science proposes that about 42% of the core body…
Development of surrogate models for the prediction of the flow around an aircraft propeller
NASA Astrophysics Data System (ADS)
Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros
2018-05-01
In the present work, the derivation of two surrogate models (SMs) for the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry; the computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced into the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence relative to the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required the least computational time for convergence.
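The regression step in the second surrogate model can be illustrated with ordinary least squares; the sample data below are hypothetical, standing in for (velocity, source-term) pairs extracted from detailed computations:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a*x + b, the kind of regression used to
    correlate a momentum source term with the local velocity distribution."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical samples: axial velocity through the disk vs. computed source term
v = [40.0, 45.0, 50.0, 55.0, 60.0]
s = [210.0, 240.0, 275.0, 300.0, 335.0]
a, b = fit_linear(v, s)
print(round(a, 2), round(b, 2))  # → 6.2 -38.0
```

The fitted function then replaces the resolved blade geometry as a source term inside the disk-shaped domain.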
Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G
2006-01-01
The present work has simulated photon and electron transport in a Theratron 780 (MDS Nordion) (60)Co radiotherapy unit using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. To improve computational efficiency for practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.
Modeling Effects of RNA on Capsid Assembly Pathways via Coarse-Grained Stochastic Simulation
Smith, Gregory R.; Xie, Lu; Schwartz, Russell
2016-01-01
The environment of a living cell is vastly different from that of an in vitro reaction system, an issue that presents great challenges to the use of in vitro models, or computer simulations based on them, for understanding biochemistry in vivo. Virus capsids make an excellent model system for such questions because they typically have few distinct components, making them amenable to in vitro and modeling studies, yet their assembly can involve complex networks of possible reactions that cannot be resolved in detail by any current experimental technology. We previously fit kinetic simulation parameters to bulk in vitro assembly data to yield a close match between simulated and real data, and then used the simulations to study features of assembly that cannot be monitored experimentally. The present work seeks to project how assembly in these simulations fit to in vitro data would be altered by computationally adding features of the cellular environment to the system, specifically the presence of nucleic acid about which many capsids assemble. The major challenge of such work is computational: simulating fine-scale assembly pathways on the scale and in the parameter domains of real viruses is far too computationally costly to allow for explicit models of nucleic acid interaction. We bypass that limitation by applying analytical models of nucleic acid effects to adjust kinetic rate parameters learned from in vitro data to see how these adjustments, singly or in combination, might affect fine-scale assembly progress. The resulting simulations exhibit surprising behavioral complexity, with distinct effects often acting synergistically to drive efficient assembly and alter pathways relative to the in vitro model. The work demonstrates how computer simulations can help us understand how assembly might differ between the in vitro and in vivo environments and what features of the cellular environment account for these differences. PMID:27244559
Studies on Vapor Adsorption Systems
NASA Technical Reports Server (NTRS)
Shamsundar, N.; Ramotowski, M.
1998-01-01
The project consisted of performing experiments on single and dual bed vapor adsorption systems, thermodynamic cycle optimization, and thermal modeling. The work was described in a technical paper that appeared in conference proceedings and a Master's thesis, which were previously submitted to NASA. The present report describes some additional thermal modeling work done subsequently, and includes listings of computer codes developed during the project. Recommendations for future work are provided.
Dopamine selectively remediates ‘model-based’ reward learning: a computational approach
Sharp, Madeleine E.; Foerde, Karin; Daw, Nathaniel D.
2016-01-01
Patients with loss of dopamine due to Parkinson’s disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from ‘model-free’ learning. The other, ‘model-based’ learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson’s disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson’s disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson’s disease may be related to an inability to pursue reward based on complete representations of the environment. PMID:26685155
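The model-free versus model-based distinction above can be made concrete with a toy two-step task sketch. The states, actions, values, and transition probabilities here are hypothetical illustrations, not the authors' fitted model:

```python
def model_free_update(q, state, action, reward, alpha=0.1):
    """Model-free (habitual) learning: stamp in a stimulus-response value directly
    from the reward prediction error, with no model of the environment."""
    q[(state, action)] += alpha * (reward - q[(state, action)])

def model_based_values(q_stage2, transition):
    """Model-based learning: evaluate first-stage actions by combining a learned
    transition model with second-stage values, rather than cached action values."""
    return {
        a: sum(p * max(q_stage2[(s2, a2)] for a2 in ("left", "right"))
               for s2, p in transition[a].items())
        for a in transition
    }

# Hypothetical two-step task: action 'A' commonly leads to state 'X', rarely to 'Y'
transition = {"A": {"X": 0.7, "Y": 0.3}, "B": {"X": 0.3, "Y": 0.7}}
q2 = {("X", "left"): 0.8, ("X", "right"): 0.2,
      ("Y", "left"): 0.1, ("Y", "right"): 0.4}
print(model_based_values(q2, transition))  # 'A' is favoured: it reaches the richer state
```

Because only the model-based computation consults the transition structure, the two-step task can dissociate it behaviourally from the habitual update.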
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploring noise-reduction concepts and understanding experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering a method's or code's utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interacting with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources, including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and a computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Kulhánek, Tomáš; Ježek, Filip; Mateják, Marek; Šilar, Jan; Kofránek, Jří
2015-08-01
This work introduces experiences from teaching modeling and simulation to graduate students in the field of biomedical engineering. We emphasize the acausal, object-oriented modeling technique, and we have moved from teaching the block-oriented tool MATLAB Simulink to the acausal, object-oriented Modelica language, which can express the structure of the system rather than the process of computation. A block-oriented approach is, however, also possible in Modelica, and students have a tendency to fall back on expressing the process of computation. Using exemplar acausal domains and approaches allows students to understand the modeled problems much more deeply. The causality of the computation is derived automatically by the simulation tool.
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to compute output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals, and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns, as they place high demands on the underlying model.
DOT National Transportation Integrated Search
1976-01-01
This report provides a detailed explanation of the inner workings of the computer program AIRPOL-4A, a computer model for predicting the impact of highway generated air pollution. The report is intended to serve both as a supportive document for AIRP...
1994-03-01
[Interleaved two-column extraction; recoverable fragments:] "Network Time Protocol (NTP) Specification and Implementation." RFC-1119, Network Working Group, September… / "Network Time Protocol (NTP) over the OSI Remote Operations Service." RFC-… / ISIS implements a powerful model of distributed computation known as …, accounting for approximately 94% of the total computation time. Timing results are…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo; Edwards, Brian Keith
Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model's degree of accuracy and the assessed total damage caused by Hurricane Sandy.
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run them, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multi-core personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option that we are exploring through this work is the use of the cloud to speed up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means of renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration against the financial cost so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission status during calibration. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.
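The DDS (dynamically dimensioned search) algorithm referenced above perturbs a shrinking random subset of parameters and greedily accepts only improvements. A minimal serial sketch follows; a toy objective stands in for a SWAT run, and bound handling is simplified to clipping rather than the original reflection:

```python
import math
import random

def dds(objective, bounds, max_iter=200, r=0.2, seed=1):
    """Minimal dynamically dimensioned search: a greedy search whose number of
    perturbed dimensions shrinks as the iteration budget is consumed."""
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_f = objective(best)
    for i in range(1, max_iter):
        p = 1.0 - math.log(i) / math.log(max_iter)  # P(perturb each dimension)
        dims = [j for j in range(len(bounds)) if rng.random() < p]
        if not dims:
            dims = [rng.randrange(len(bounds))]  # always perturb at least one
        cand = list(best)
        for j in dims:
            lo, hi = bounds[j]
            step = rng.gauss(0.0, r * (hi - lo))
            cand[j] = min(hi, max(lo, cand[j] + step))  # clip to bounds (simplified)
        f = objective(cand)
        if f < best_f:  # greedy acceptance
            best, best_f = cand, f
    return best, best_f

# Toy calibration: minimize a 4-parameter sphere function in place of model error
params, err = dds(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 4)
print(len(params), err >= 0.0)
```

A parallel variant of this idea evaluates many candidate perturbations concurrently, which is what makes renting a burst of cloud machines effective for calibration.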
Phase Field Modeling of Microstructure Development in Microgravity
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Goldenfeld, Nigel
2001-01-01
This newly funded project seeks to extend our NASA-sponsored project on modeling of dendritic microstructures to facilitate collaboration between our research group and those of other NASA investigators. In our ongoing program, we have applied advanced computational techniques to study microstructural evolution in dendritic solidification, for both pure isolated dendrites and directionally solidified alloys. This work has enabled us to compute dendritic microstructures using both realistic material parameters and experimentally relevant processing conditions, thus allowing for the first time direct comparison of phase field computations with laboratory observations. This work has been well received by the materials science and physics communities, and has led to several opportunities for collaboration with scientists working on experimental investigations of pattern selection and segregation in solidification. While we have been able to pursue these collaborations to a limited extent, with some important findings, this project focuses specifically on those collaborations. We have two target collaborations: with Prof. Glicksman's group working on the Isothermal Dendritic Growth Experiment (IDGE), and with Prof. Poirier's group studying directional solidification in Pb-Sb alloys. These two space experiments match well with our two modeling thrusts, one for pure materials, as in the IDGE, and the other for directional solidification. Such collaboration will benefit all of the research groups involved, and will provide for rapid dissemination of the results of our work where it will have significant impact.
NASA Astrophysics Data System (ADS)
Guimaraes, Cayley; Antunes, Diego R.; de F. Guilhermino Trindade, Daniela; da Silva, Rafaella A. Lopes; Garcia, Laura Sanchez
This work presents a computational model (XML) of the Brazilian Sign Language (Libras), based on its phonology. The model was used to create a sample of representative signs to aid the recording of a base of videos whose aim is to support the development of tools to support genuine social inclusion of the deaf.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kevrekidis, Ioannis G.
The work explored the linking of modern developing machine learning techniques (manifold learning and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.
Methodical and technological aspects of creation of interactive computer learning systems
NASA Astrophysics Data System (ADS)
Vishtak, N. M.; Frolov, D. A.
2017-01-01
The article presents a methodology for the development of an interactive computer training system for power plant personnel. The methods used in this work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of developing interactive computer training systems for personnel preparation in educational and training centers is demonstrated. The development stages of computer training systems are identified, and the factors for efficient use of an interactive computer training system are analysed. An algorithm for the work performed at each development stage is offered, enabling optimization of the time, financial, and labor expenditure on creating the interactive computer training system.
Simulation of solid-liquid flows in a stirred bead mill based on computational fluid dynamics (CFD)
NASA Astrophysics Data System (ADS)
Winardi, S.; Widiyastuti, W.; Septiani, E. L.; Nurtono, T.
2018-05-01
The selection of a simulation model is an important step in computational fluid dynamics (CFD) to obtain agreement with experimental work. In addition, computational time and processor speed also influence the performance of the simulation. Here, we report the simulation of solid-liquid flow in a bead mill using the Eulerian model. A Multiple Reference Frame (MRF) approach was also used to model the interaction between the moving zone (shaft and disks) and the stationary zone (chamber excluding shaft and disks). The bead mill dimensions were based on the experimental work of Yamada and Sakai (2013). The effect of shaft rotation speeds of 1200 and 1800 rpm on the particle distribution and the flow field is discussed. At 1200 rpm, the particles spread evenly throughout the bead mill chamber. At 1800 rpm, by contrast, the particles tend to be thrown toward the near-wall region, leaving a dead zone with no particles in the center region. The selected model agreed well with the experimental data, with average discrepancies of less than 10%, and the simulation ran without excessive computational cost.
NASA Astrophysics Data System (ADS)
Sen, O.; Gaul, N. J.; Davis, S.; Choi, K. K.; Jacobs, G.; Udaykumar, H. S.
2018-05-01
Macroscale models of shock-particle interactions require closure terms for unresolved solid-fluid momentum and energy transfer. These comprise the effects of mean as well as fluctuating fluid-phase velocity fields in the particle cloud. Mean drag and Reynolds stress equivalent terms (also known as pseudo-turbulent terms) appear in the macroscale equations. Closure laws for the pseudo-turbulent terms are constructed in this work from ensembles of high-fidelity mesoscale simulations. The computations are performed over a wide range of Mach numbers (M) and particle volume fractions (φ) and are used to explicitly compute the pseudo-turbulent stresses from the Favre average of the velocity fluctuations in the flow field. The computed stresses are then used as inputs to a Modified Bayesian Kriging method to generate surrogate models. The surrogates can be used as closure models for the pseudo-turbulent terms in macroscale computations of shock-particle interactions. It is found that the kinetic energy associated with the velocity fluctuations is comparable to that of the mean flow, especially for increasing M and φ. This work is a first attempt to quantify and evaluate the effect of velocity fluctuations for problems of shock-particle interactions.
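The Favre average used above to extract pseudo-turbulent stresses is a density-weighted mean. A small illustrative sketch with made-up flow samples (not data from the study):

```python
def favre_stress(rho, u):
    """Pseudo-turbulent normal stress from discrete samples:
    R = <rho u'' u''> / <rho>, where u'' is the fluctuation about the
    Favre (density-weighted) mean u_tilde = <rho u> / <rho>."""
    n = len(rho)
    mean_rho = sum(rho) / n
    u_tilde = sum(r * v for r, v in zip(rho, u)) / (n * mean_rho)
    stress = sum(r * (v - u_tilde) ** 2 for r, v in zip(rho, u)) / (n * mean_rho)
    return stress, u_tilde

# Hypothetical density and axial-velocity samples in a particle cloud
rho = [1.0, 1.2, 0.9, 1.1]
u = [300.0, 320.0, 280.0, 310.0]
stress, u_tilde = favre_stress(rho, u)
print(round(u_tilde, 2), round(stress, 1))
```

Comparing such a stress against the mean-flow kinetic energy is what shows the fluctuations to be non-negligible at high M and φ.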
Overview of heat transfer and fluid flow problem areas encountered in Stirling engine modeling
NASA Technical Reports Server (NTRS)
Tew, Roy C., Jr.
1988-01-01
NASA Lewis Research Center has been managing Stirling engine development programs for over a decade. In addition to contractual programs, this work has included in-house engine testing and development of engine computer models. Attempts to validate Stirling engine computer models with test data have demonstrated that engine thermodynamic losses need better characterization. Various Stirling engine thermodynamic losses and efforts that are underway to characterize these losses are discussed.
Pirolli, Peter
2016-08-01
Computational models were developed in the ACT-R neurocognitive architecture to address some aspects of the dynamics of behavior change. The simulations aim to address the day-to-day goal achievement data available from mobile health systems. The models refine current psychological theories of self-efficacy, intended effort, and habit formation, and provide an account for the mechanisms by which goal personalization, implementation intentions, and remindings work.
Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations
NASA Astrophysics Data System (ADS)
Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik
2017-02-01
The utility and accuracy of computational modeling often requires direct validation against experimental measurements. The work presented here is motivated by taking a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex, physiologically realistic 3D geometries, large data sets are acquired with microscopic resolution over macroscopic distances.
Colour computer-generated holography for point clouds utilizing the Phong illumination model.
Symeonidou, Athanasia; Blinder, David; Schelkens, Peter
2018-04-16
A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
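The Phong illumination model referenced above combines ambient, diffuse, and specular terms per surface point. A generic sketch of the standard model (not the authors' hologram pipeline; coefficients are hypothetical) is:

```python
def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=16):
    """Phong reflection: I = ka + kd*(N.L) + ks*(R.V)^n, unit vectors assumed."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ndotl = dot(normal, light_dir)
    diffuse = kd * max(0.0, ndotl)
    # mirror reflection of the light direction about the normal: R = 2(N.L)N - L
    refl = tuple(2.0 * ndotl * n - l for n, l in zip(normal, light_dir))
    specular = ks * max(0.0, dot(refl, view_dir)) ** shininess if ndotl > 0 else 0.0
    return ka + diffuse + specular

# Light and viewer both along the surface normal: full diffuse and specular terms
print(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # ka + kd + ks, about 1.0
```

Evaluating such a term per point-cloud sample is what lets the hologram reproduce reflections and shadows without a large extra computational cost.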
ERIC Educational Resources Information Center
Simmering, Vanessa R.; Patterson, Rebecca
2012-01-01
Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…
Low Speed Rotor/Fuselage Interactional Aerodynamics
NASA Technical Reports Server (NTRS)
Barnwell, Richard W.; Prichard, Devon S.
2003-01-01
This report presents work performed under a Cooperative Research Agreement between Virginia Tech and the NASA Langley Research Center. The work involved development of computational techniques for modeling helicopter rotor/airframe aerodynamic interaction. A brief overview of the problem is presented, the modeling techniques are described, and selected example calculations are briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geveci, Berk; Maynard, Robert
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from predominant DOE projects for visualization on accelerators, combining their respective features into a new visualization toolkit called VTK-m.
NASA Astrophysics Data System (ADS)
Chertkov, Yu B.; Disyuk, V. V.; Pimenov, E. Yu; Aksenova, N. V.
2017-01-01
Within the framework of research into the possibility and prospects of power density equalization in boiling water reactors (as exemplified by the WB-50), work was undertaken to improve a prior computational model of the WB-50 reactor implemented in the MCU-RR software. Analysis of prior work showed that critical-state calculations have a deviation of calculated reactivity exceeding ±0.3 % (ΔKef/Kef) for minimum concentrations of boric acid in the reactor water, reaching 2 % for maximum concentrations. The axial coefficient of nonuniform burnup distribution reaches high values in the WB-50 reactor, so the computational model needed refinement to take into account burnup inhomogeneity along the fuel assembly height. At this stage, computational results with a mean square deviation of less than 0.7 % (ΔKef/Kef) and a dispersion of design values of ±1 % (ΔK/K) are deemed acceptable. Further lowering of these parameters apparently requires root-cause analysis of such large values and closer attention to experimental measurement techniques.
A Discrete Mathematical Model to Simulate Malware Spreading
NASA Astrophysics Data System (ADS)
Del Rey, A. Martin; Sánchez, G. Rodriguez
2012-10-01
With the advent and worldwide development of the Internet, the study and control of malware spreading has become very important. Several mathematical models to simulate malware propagation have been proposed in the scientific literature; they are usually based on differential equations, exploiting the similarities with mathematical epidemiology. The great majority of these models study the behavior of a particular type of malware called computer worms; indeed, to the best of our knowledge, no model has been proposed to simulate the spreading of a computer virus (the traditional type of malware, which differs from computer worms in several respects). The purpose of this work is therefore to introduce a new mathematical model, based not on continuous mathematics but on discrete tools, to analyze and study the epidemic behavior of computer viruses. Specifically, cellular automata are used to design such a model.
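As a toy illustration of the cellular-automaton approach this abstract describes, the sketch below applies a deterministic susceptible-infected-recovered rule on a 2D grid. The transition rule here is an assumption for demonstration; the paper's actual rules differ:

```python
# States: 0 = susceptible, 1 = infected, 2 = recovered (immune)
def step(grid):
    """One synchronous update of a 2D cellular automaton.
    A susceptible cell becomes infected if any von Neumann neighbour
    is infected; an infected cell recovers in the next step.
    (Deterministic toy rule, not the paper's transition function.)"""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == 0:
                neigh = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                if any(0 <= a < rows and 0 <= b < cols and grid[a][b] == 1
                       for a, b in neigh):
                    new[i][j] = 1
            elif grid[i][j] == 1:
                new[i][j] = 2
    return new

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1                       # a single infected host in the centre
for _ in range(3):
    grid = step(grid)
print(sum(cell == 2 for row in grid for cell in row))  # recovered hosts so far
```

Even this minimal rule reproduces the diamond-shaped epidemic front typical of von Neumann neighbourhoods; stochastic infection probabilities would be the natural next refinement.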
Materials constitutive models for nonlinear analysis of thermally cycled structures
NASA Technical Reports Server (NTRS)
Kaufman, A.; Hunt, L. E.
1982-01-01
Effects of inelastic materials models on computed stress-strain solutions for thermally loaded structures were studied by performing nonlinear (elastoplastic creep) and elastic structural analyses on a prismatic, double edge wedge specimen of IN 100 alloy that was subjected to thermal cycling in fluidized beds. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic kinematic, and combined plus transient creep) were exercised for the problem by using the MARC nonlinear, finite element computer program. Maximum total strain ranges computed from the elastic and nonlinear analyses agreed within 5 percent. Mean cyclic stresses, inelastic strain ranges, and inelastic work were significantly affected by the choice of inelastic constitutive model. The computing time per cycle for the nonlinear analyses was more than five times that required for the elastic analysis.
On the Impact of Execution Models: A Case Study in Computational Chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram
2015-05-25
Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
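The gap between static scheduling and work stealing that this abstract reports can be illustrated with a toy makespan comparison. The sketch below idealizes work stealing as greedy list scheduling (an idle worker always grabs the next task); it is a didactic model, not the paper's runtime:

```python
def makespan_static(tasks, workers):
    """Static block assignment: each worker gets one contiguous chunk."""
    chunk = (len(tasks) + workers - 1) // workers
    return max(sum(tasks[i:i + chunk]) for i in range(0, len(tasks), chunk))

def makespan_stealing(tasks, workers):
    """Idealized work stealing: each task goes to the least-loaded
    worker (greedy list scheduling), as if idle workers steal work."""
    loads = [0] * workers
    for t in tasks:
        i = loads.index(min(loads))
        loads[i] += t
    return max(loads)

# Imbalanced workload: a few expensive tasks clustered at the front
tasks = [8, 8, 8, 8] + [1] * 16
print(makespan_static(tasks, 4), makespan_stealing(tasks, 4))  # 33 vs 12
```

When expensive work units cluster in one chunk, static scheduling serializes them on a single worker, which is exactly the imbalance that dynamic load balancing removes.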
Basic Modeling of the Solar Atmosphere and Spectrum
NASA Technical Reports Server (NTRS)
Avrett, Eugene H.; Wagner, William J. (Technical Monitor)
2000-01-01
During the last three years we have continued the development of extensive computer programs for constructing realistic models of the solar atmosphere and for calculating detailed spectra to use in the interpretation of solar observations. This research involves two major interrelated efforts: work by Avrett and Loeser on the Pandora computer program for optically thick non-LTE modeling of the solar atmosphere including a wide range of physical processes, and work by Kurucz on the detailed high-resolution synthesis of the solar spectrum using data for over 58 million atomic and molecular lines. Our objective is to construct atmospheric models from which the calculated spectra agree as well as possible with high- and low-resolution observations over a wide wavelength range. Such modeling leads to an improved understanding of the physical processes responsible for the structure and behavior of the atmosphere.
Center for Extended Magnetohydrodynamic Modeling Cooperative Agreement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carl R. Sovinec
The Center for Extended Magnetohydrodynamic Modeling (CEMM) is developing computer simulation models for predicting the behavior of magnetically confined plasmas. Over the first phase of support from the Department of Energy’s Scientific Discovery through Advanced Computing (SciDAC) initiative, the focus has been on macroscopic dynamics that alter the confinement properties of magnetic field configurations. The ultimate objective is to provide computational capabilities to predict plasma behavior—not unlike computational weather prediction—to optimize performance and to increase the reliability of magnetic confinement for fusion energy. Numerical modeling aids theoretical research by solving complicated mathematical models of plasma behavior including strong nonlinear effects and the influences of geometrical shaping of actual experiments. The numerical modeling itself remains an area of active research, due to challenges associated with simulating multiple temporal and spatial scales. The research summarized in this report spans computational and physical topics associated with state of the art simulation of magnetized plasmas. The tasks performed for this grant are categorized according to whether they are primarily computational, algorithmic, or application-oriented in nature. All involve the development and use of the Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion (NIMROD) code, which is described at http://nimrodteam.org. With respect to computation, we have tested and refined methods for solving the large algebraic systems of equations that result from our numerical approximations of the physical model. Collaboration with the Terascale Optimal PDE Solvers (TOPS) SciDAC center led us to the SuperLU_DIST software library [http://crd.lbl.gov/~xiaoye/SuperLU/] for solving large sparse matrices using direct methods on parallel computers. 
Switching to this solver library boosted NIMROD’s performance by a factor of five in typical large nonlinear simulations, which has been publicized as a success story of SciDAC-fostered collaboration. Furthermore, the SuperLU software does not assume any mathematical symmetry, and its generality provides an important capability for extending the physical model beyond magnetohydrodynamics (MHD). With respect to algorithmic and model development, our most significant accomplishment is the development of a new method for solving plasma models that treat electrons as an independent plasma component. These ‘two-fluid’ models encompass MHD and add temporal and spatial scales that are beyond the response of the ion species. Implementation and testing of a previously published algorithm did not prove successful for NIMROD, and the new algorithm has since been devised, analyzed, and implemented. Two-fluid modeling, an important objective of the original NIMROD project, is now routine in 2D applications. Algorithmic components for 3D modeling are in place and tested; though, further computational work is still needed for efficiency. Other algorithmic work extends the ion-fluid stress tensor to include models for parallel and gyroviscous stresses. In addition, our hot-particle simulation capability received important refinements that permitted completion of a benchmark with the M3D code. A highlight of our applications work is the edge-localized mode (ELM) modeling, which was part of the first-ever computational Performance Target for the DOE Office of Fusion Energy Science, see http://www.science.doe.gov/ofes/performancetargets.shtml. Our efforts allowed MHD simulations to progress late into the nonlinear stage, where energy is conducted to the wall location. They also produced a two-fluid ELM simulation starting from experimental information and demonstrating critical drift effects that are characteristic of two-fluid physics. 
Another important application is the internal kink mode in a tokamak. Here, the primary purpose of the study has been to benchmark the two main code development lines of CEMM, NIMROD and M3D, on a relevant nonlinear problem. Results from the two codes show repeating nonlinear relaxation events driven by the kink mode over quantitatively comparable timescales. The work has launched a more comprehensive nonlinear benchmarking exercise, where realistic transport effects have an important role.
NASA Astrophysics Data System (ADS)
Fleury, Gérard; Mistrot, Pierre
2006-12-01
While driving off-road vehicles, operators are exposed to whole-body vibration acting in the fore-and-aft direction. Seat manufacturers supply products equipped with fore-and-aft suspension, but only a few studies report on their performance. This work proposes a computational approach to design fore-and-aft suspensions for wheel loader seats. Field tests were conducted in a quarry to analyse the nature of vibration to which the driver was exposed. Typical input signals were recorded to be reproduced in the laboratory, and technical specifications were defined for the suspension. In order to evaluate the suspension vibration attenuation performance, a model of a sitting human body was developed and coupled to a seat model. The seat model combines the models of each suspension component. A linear two-degree-of-freedom model is used to describe the dynamic behaviour of the sitting driver. Model parameters are identified by fitting the computed apparent mass frequency response functions to the measured values. Model extensions are proposed to investigate postural effects involving variations in hands and feet positions and interaction of the driver's back with the backrest. Suspension design parameters are first optimized by computing the seat/man model response to sinusoidal acceleration. Four criteria, including transmissibility, interaction force between the driver's back and the backrest, and relative maximal displacement of the suspension, are computed. A new suspension design with optimized features is proposed. Its performance is checked through calculations of the response of the seat/man model subjected to acceleration measured on the wheel loader during real work conditions. On the basis of the computed values of the SEAT factors, it is found possible to design a suspension that would increase the attenuation provided by the seat by a factor of two.
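A base-excited two-degree-of-freedom seat/occupant model of the kind this abstract describes can be sketched as a complex frequency-response calculation. All masses, stiffnesses, and damping values below are illustrative placeholders, not the parameters identified in the study:

```python
import math

def transmissibility(freq_hz, m1=15.0, m2=60.0,
                     k1=2.0e4, c1=900.0, k2=4.0e4, c2=1.2e3):
    """|X2/X0| for a base-excited two-mass chain: base -> (k1, c1) ->
    seat mass m1 -> (k2, c2) -> occupant mass m2.
    Parameter values are illustrative, not those identified in the paper."""
    w = 2 * math.pi * freq_hz
    # Complex dynamic-stiffness matrix for harmonic base motion X0 = 1
    a11 = -m1 * w ** 2 + 1j * w * (c1 + c2) + (k1 + k2)
    a12 = -(1j * w * c2 + k2)
    a21 = a12
    a22 = -m2 * w ** 2 + 1j * w * c2 + k2
    b1 = k1 + 1j * w * c1          # forcing through the base spring/damper
    det = a11 * a22 - a12 * a21
    x2 = -a21 * b1 / det           # Cramer's rule for the occupant motion
    return abs(x2)

# At very low frequency the occupant follows the base almost rigidly:
print(round(transmissibility(0.01), 3))
```

Sweeping `freq_hz` over the excitation band and minimizing the transmissibility peak is the kind of criterion-based optimization of suspension parameters the abstract outlines.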
ERIC Educational Resources Information Center
Bucks, Gregory Warren
2010-01-01
Computers have become an integral part of how engineers complete their work, allowing them to collect and analyze data, model potential solutions and aiding in production through automation and robotics. In addition, computers are essential elements of the products themselves, from tennis shoes to construction materials. An understanding of how…
Functional structure and dynamics of the human nervous system
NASA Technical Reports Server (NTRS)
Lawrence, J. A.
1981-01-01
The status of an effort to define the directions needed to take in extending pilot models is reported. These models are needed to perform closed-loop (man-in-the-loop) feedback flight control system designs and to develop cockpit display requirements. The approach taken is to develop a hypothetical working model of the human nervous system by reviewing the current literature in neurology and psychology and to develop a computer model of this hypothetical working model.
DOT National Transportation Integrated Search
2006-01-01
The project focuses on two major issues - the improvement of current work zone design practices and an analysis of : vehicle interarrival time (IAT) and speed distributions for the development of a digital computer simulation model for : queues and t...
NASA Astrophysics Data System (ADS)
Reich, Felix A.; Rickert, Wilhelm; Müller, Wolfgang H.
2018-03-01
This study investigates the implications of various electromagnetic force models in macroscopic situations. There is an ongoing academic discussion which model is "correct," i.e., generally applicable. Often, gedankenexperiments with light waves or photons are used in order to motivate certain models. In this work, three problems with bodies at the macroscopic scale are used for computing theoretical model-dependent predictions. Two aspects are considered, total forces between bodies and local deformations. By comparing with experimental data, insight is gained regarding the applicability of the models. First, the total force between two cylindrical magnets is computed. Then a spherical magnetostriction problem is considered to show different deformation predictions. As a third example focusing on local deformations, a droplet of silicone oil in castor oil is considered, placed in a homogeneous electric field. By using experimental data, some conclusions are drawn and further work is motivated.
Cloud Computing for radiologists.
Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit
2012-07-01
Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.
Initial comparison of single cylinder Stirling engine computer model predictions with test results
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.; Thieme, L. G.; Miao, D.
1979-01-01
A NASA-developed digital computer code for a Stirling engine, modelling the performance of a single-cylinder rhombic-drive ground performance unit (GPU), is presented and its predictions are compared to test results. The GPU engine incorporates eight regenerator/cooler units, and the engine working space is modelled by thirteen control volumes. The model calculates indicated power and efficiency for a given engine speed, mean pressure, heater and expansion space metal temperatures, and cooler water inlet temperature and flow rate. Comparison of predicted and observed powers implies that the reference pressure drop calculations underestimate actual pressure drop, possibly due to oil contamination in the regenerator/cooler units, methane contamination in the working gas, or underestimation of mechanical loss. For a working gas of hydrogen, the predicted values of brake power are 0 to 6% higher than experimental values and brake efficiency is 6 to 16% higher, while for helium the predicted brake power and efficiency are 2 to 15% higher than the experimental values.
Computer assisted surgery with 3D robot models and visualisation of the telesurgical action.
Rovetta, A
2000-01-01
This paper deals with virtual reality computer support for surgical robotics procedures. Computer support gives a direct representation of the surgical theatre, and modeling the procedure as it unfolds builds confidence in its safety and reliability. Robots similar to those used in the manufacturing industry can be used with little modification as very effective surgical tools: they have high precision and repeatability and integrate readily with medical instrumentation. Integrated surgical rooms with computer- and robot-assisted intervention are now operating. The computer serves as a decision-making aid, and the robot works as a very effective tool.
Recruitment of Foreigners in the Market for Computer Scientists in the United States
Bound, John; Braga, Breno; Golden, Joseph M.
2016-01-01
We present and calibrate a dynamic model that characterizes the labor market for computer scientists. In our model, firms can recruit computer scientists from recently graduated college students, from STEM workers working in other occupations or from a pool of foreign talent. Counterfactual simulations suggest that wages for computer scientists would have been 2.8–3.8% higher, and the number of Americans employed as computer scientists would have been 7.0–13.6% higher in 2004 if firms could not hire more foreigners than they could in 1994. In contrast, total CS employment would have been 3.8–9.0% lower, and consequently output smaller. PMID:27170827
A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand
2017-11-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
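The interpolating surrogate this abstract describes can be sketched, in simplified single-fidelity form, as a Kriging fit through a handful of "expensive" solver samples. The data below are a toy stand-in for the iterative equilibrium algorithm, and the squared-exponential correlation is an assumption, not the authors' multi-fidelity formulation:

```python
import math

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_fit(xs, ys, length_scale=0.8, nugget=1e-10):
    """1-D interpolating Kriging surrogate with a squared-exponential
    correlation; a generic sketch, not the paper's formulation."""
    corr = lambda a, b: math.exp(-0.5 * ((a - b) / length_scale) ** 2)
    K = [[corr(a, b) + (nugget if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    w = gauss_solve(K, list(ys))
    return lambda xq: sum(wi * corr(xq, xi) for wi, xi in zip(w, xs))

# Toy stand-in for the expensive iterative equilibrium solver:
rho = [0.5, 1.0, 1.5, 2.0, 2.5]
p = [r ** 1.4 for r in rho]
surrogate = kriging_fit(rho, p)
print(round(surrogate(1.0), 6))   # reproduces the sampled point
```

Varying the number of sampling points, as the abstract describes, trades fit accuracy against the cost of building and evaluating the surrogate.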
Lee, Hyeonjong; Paek, Janghyun; Noh, Kwantae; Kwon, Kung-Rock
2017-08-21
Reproducing soft tissue contours around a pontic area is important for the fabrication of an esthetic prosthesis, especially in the anterior area. A gingival model that precisely replicates the soft tissue structure around the pontic area can be easily obtained by taking a pick-up impression of an interim fixed dental prosthesis. After a working cast is fabricated using the customary technique, the pick-up model is superimposed onto the working model for the pontic area using computer-aided design and manufacturing (CAD/CAM). A definitive restoration using this technique would be well adapted to the pontic base, which is formed by the interim prosthesis. © 2017 by the American College of Prosthodontists.
From Occasional Choices to Inevitable Musts: A Computational Model of Nicotine Addiction
Metin, Selin; Sengor, N. Serap
2012-01-01
Although there is considerable work on the neural mechanisms of reward-based learning and decision making, and much of it notes that addiction can be explained by malfunction of these cognitive processes, there are very few computational models. This paper focuses on nicotine addiction, and a computational model of nicotine addiction is proposed based on its neurophysiological basis. The model comprises different levels, ranging from the molecular basis to the systems level, and it demonstrates three possible behavioral patterns: addict, nonaddict, and indecisive. The dynamical behavior of the proposed model is investigated with tools used for analyzing nonlinear dynamical systems, and the relation between the behavioral patterns and the dynamics of the system is discussed. PMID:23251144
A Multi-Fidelity Surrogate Model for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S. "Bala"
2016-11-01
The explosive dispersal of particles is an example of a complex multiphase and multi-species fluid flow problem. This problem has many engineering applications including particle-laden explosives. In these flows, the detonation products of the explosive cannot be treated as a perfect gas so a real gas equation of state is used to close the governing equations (unlike air, which uses the ideal gas equation for closure). As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both of the state equations must be satisfied. One of the more accurate, yet computationally expensive, methods to deal with this is a scheme that iterates between the two equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work strives to create a multi-fidelity surrogate model of this process. We then study the performance of the model with respect to the iterative method by performing both gas-only and particle laden flow simulations using an Eulerian-Lagrangian approach with a finite volume code. Specifically, the model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel modeling approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
Models and Simulations as a Service: Exploring the Use of Galaxy for Delivering Computational Models
Walker, Mark A.; Madduri, Ravi; Rodriguez, Alex; Greenstein, Joseph L.; Winslow, Raimond L.
2016-01-01
We describe the ways in which Galaxy, a web-based reproducible research platform, can be used for web-based sharing of complex computational models. Galaxy allows users to seamlessly customize and run simulations on cloud computing resources, a concept we refer to as Models and Simulations as a Service (MaSS). To illustrate this application of Galaxy, we have developed a tool suite for simulating a high spatial-resolution model of the cardiac Ca2+ spark that requires supercomputing resources for execution. We also present tools for simulating models encoded in the SBML and CellML model description languages, thus demonstrating how Galaxy’s reproducible research features can be leveraged by existing technologies. Finally, we demonstrate how the Galaxy workflow editor can be used to compose integrative models from constituent submodules. This work represents an important novel approach, to our knowledge, to making computational simulations more accessible to the broader scientific community. PMID:26958881
Applications of computational modeling in ballistics
NASA Technical Reports Server (NTRS)
Sturek, Walter B.
1987-01-01
The development of ballistics technology as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL span three major ballistic disciplines: exterior, interior, and terminal. Work at the BRL in these areas has traditionally been highly dependent on experimental testing. Considerable emphasis was placed on developing computational modeling to augment experimental testing in the development cycle; however, the impact of computational modeling to date has been modest. With the supercomputer resources recently installed at the BRL, a new emphasis on applying computational modeling to ballistics technology is taking place. The major application areas currently receiving attention at the BRL are outlined, along with the modeling approaches involved, and some indication is given of the degree of success achieved and of the areas of greatest need.
Influence of ionization on the Gupta and on the Park chemical models
NASA Astrophysics Data System (ADS)
Morsa, Luigi; Zuppardi, Gennaro
2014-12-01
This study is an extension of former work by the present authors, in which the influence of the chemical models by Gupta and by Park was evaluated on thermo-fluid-dynamic parameters in the flow field, including transport coefficients, related characteristic numbers, and the heat flux on two current capsules (EXPERT and Orion) during high-altitude re-entry. Those results showed that, although the models compute different air compositions in the flow field, they compute only slightly different compositions on the capsule surface, so the difference in heat flux is not very significant. In those studies ionization was neglected because the capsule velocities (about 5000 m/s for EXPERT and about 7600 m/s for Orion) were not high enough to produce meaningful ionization. The aim of the present work is to evaluate the influence of ionization, in conjunction with the Gupta and Park chemical models, on both the heat flux and the thermo-fluid-dynamic parameters. The computations were carried out with a direct simulation Monte Carlo code (DS2V) in the velocity interval 7600-12000 m/s, considering only the Orion capsule at an altitude of 85 km. The results confirmed the earlier finding: when ionization is not considered, the chemical models compute only slightly different gas compositions in the core of the shock wave and practically the same composition on the surface, and therefore the same heat flux. By contrast, when ionization is considered, the models compute different compositions throughout the shock layer and on the surface, and therefore different heat fluxes. The analysis relies on both qualitative and quantitative evaluation of the effects of ionization on the two chemical models.
The main result of the study is that, when ionization is taken into account, the Park model is more reactive than the Gupta model; consequently, the heat flux computed with Park is lower than that computed with Gupta. Use of the Gupta model is therefore recommended in the design of a thermal protection system.
Chande, Ruchi D; Wayne, Jennifer S
2017-09-01
Computational models of diarthrodial joints serve to inform the biomechanical function of these structures, and as such, must be supplied appropriate inputs for performance that is representative of actual joint function. Inputs for these models are sourced from imaging modalities as well as from the literature. The latter is often the source of mechanical properties for soft tissues, like ligament stiffnesses; however, such data are not always available for all the soft tissues, nor are they known for patient-specific work. In the current research, a method to improve the ligament stiffness definition for a computational foot/ankle model was sought with the greater goal of improving the predictive ability of the computational model. Specifically, the stiffness values were optimized using artificial neural networks (ANNs); both feedforward and radial basis function networks (RBFNs) were considered. Optimal networks of each type were determined and subsequently used to predict stiffnesses for the foot/ankle model. Ultimately, the predicted stiffnesses were considered reasonable and resulted in enhanced performance of the computational model, suggesting that artificial neural networks can be used to optimize stiffness inputs.
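The feedforward-network component the abstract mentions can be illustrated with a tiny one-hidden-layer regressor trained by stochastic gradient descent. The network size, learning rate, and toy load-deflection data are assumptions for demonstration, not the study's architecture or data:

```python
import math, random

def train_ffn(data, hidden=4, lr=0.1, epochs=1000, seed=0):
    """One-hidden-layer feedforward net (tanh units) trained by SGD on
    squared error. A generic sketch; sizes and data are illustrative."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
        return h, b2 + sum(w2[i] * h[i] for i in range(hidden))

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            e = y - t                              # d(loss)/dy
            for i in range(hidden):
                g = e * w2[i] * (1 - h[i] ** 2)    # backprop through tanh
                w2[i] -= lr * e * h[i]
                w1[i] -= lr * g * x
                b1[i] -= lr * g
            b2 -= lr * e
    return lambda x: forward(x)[1]

# Fit a toy nonlinear response curve standing in for ligament behavior:
data = [(x / 10, (x / 10) ** 2) for x in range(11)]
net = train_ffn(data)
print(round(net(0.5), 2))
```

In the study's setting, the trained network maps model-performance measures to candidate stiffness values; here the quadratic target merely demonstrates that the net can learn a smooth nonlinear relationship.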
A novel computational model to probe visual search deficits during motor performance
Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy
2016-01-01
Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. 
We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
Dopamine selectively remediates 'model-based' reward learning: a computational approach.
Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna
2016-02-01
Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
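The model-free versus model-based distinction the task dissociates can be made concrete with a toy computation. The transition probabilities, values, and experience sequence below are illustrative only, not fit to patient data.

```python
# Sketch of the two valuation systems probed by a two-step task:
# model-free values are cached and updated by sampled rewards, while
# model-based values are recomputed by planning through a transition model.
P = {  # transition model: stage-1 action -> probability of each stage-2 state
    'a1': {'sA': 0.7, 'sB': 0.3},
    'a2': {'sA': 0.3, 'sB': 0.7},
}
Q2 = {'sA': 0.8, 'sB': 0.2}   # learned values of the stage-2 states

# Model-based: plan through the transition model
Q_mb = {a: sum(p * Q2[s] for s, p in P[a].items()) for a in P}

# Model-free: cached value updated only by experienced rewards (TD(0))
alpha = 0.1
Q_mf = {'a1': 0.0, 'a2': 0.0}
for r, a in [(1.0, 'a1'), (0.0, 'a2'), (1.0, 'a1')]:  # toy experience
    Q_mf[a] += alpha * (r - Q_mf[a])

print(Q_mb)   # {'a1': 0.62, 'a2': 0.38}
print(Q_mf)   # {'a1': 0.19, 'a2': 0.0}
```

The key behavioral signature is that only the model-based values react immediately to a change in P or Q2 without new direct experience, which is what the two-step task exploits.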
ERIC Educational Resources Information Center
Kessler, Aaron M.; Stein, Mary Kay; Schunn, Christian D.
2015-01-01
Model tracing tutors represent a technology designed to mimic key elements of one-on-one human tutoring. We examine the situations in which such supportive computer technologies may devolve into mindless student work with little conceptual understanding or student development. To analyze the support of student intellectual work in the model…
Advances in Engineering Software for Lift Transportation Systems
NASA Astrophysics Data System (ADS)
Kazakoff, Alexander Borisoff
2012-03-01
This paper presents an attempt at computer modelling of ropeway ski-lift systems, which transport passengers between two terminals using high-capacity cabins, chairs, gondolas, or draw-bars. The computer codes AUTOCAD, MATLAB, and Compaq Visual Fortran 6.6 are used, and the modelling is organized in two stages. The first stage is the preparation of the ground relief profile and the design of the lift system as a whole, according to the terrain profile and the climatic and atmospheric conditions. The ground profile is prepared by geodesists and presented as an AUTOCAD drawing; the lift itself is then designed by programs written in MATLAB. The second stage is performed after optimization of the coordinates and the lift profile in MATLAB: the coordinates and parameters are passed to a program written in Compaq Visual Fortran 6.6, which calculates 171 lift parameters, organized in 42 tables. The objective of the work presented in this paper is the computer modelling of the design and parameter derivation of ropeway systems, together with their computational variation and optimization.
Summary Report of Working Group 2: Computation
NASA Astrophysics Data System (ADS)
Stoltz, P. H.; Tsung, R. S.
2009-01-01
The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high-gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, such as graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight-times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.
Macías-Díaz, J E; Macías, Siegfried; Medina-Ramírez, I E
2013-12-01
In this manuscript, we present a computational model to approximate the solutions of a partial differential equation which describes the growth dynamics of microbial films. The numerical technique reported in this work is an explicit, nonlinear finite-difference methodology, computationally implemented using Newton's method. Our scheme is compared numerically against an implicit, linear finite-difference discretization of the same partial differential equation, whose computer coding requires an implementation of the stabilized bi-conjugate gradient method. Our numerical results show that the nonlinear approach yields a more efficient approximation to the solutions of the biofilm model considered and demands less computer memory. Moreover, the positivity of initial profiles is preserved in practice by the proposed nonlinear scheme. Copyright © 2013 Elsevier Ltd. All rights reserved.
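The combination of a nonlinear finite-difference discretization with Newton's method can be illustrated on a stand-in equation. The sketch below takes one backward-Euler step of Fisher's reaction-diffusion equation u_t = D u_xx + u(1 - u), an assumed substitute for the biofilm PDE, which the abstract does not spell out.

```python
import numpy as np

# Sketch (assumed model, not the paper's biofilm PDE): one implicit time
# step of u_t = D u_xx + u(1-u), discretized by finite differences and
# solved with Newton's method.
n, D, dt, dx = 50, 0.1, 0.05, 1.0 / 51
L = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dx**2   # Dirichlet Laplacian

u_old = 0.5 + 0.4 * np.sin(np.linspace(0, np.pi, n))
u = u_old.copy()
for _ in range(20):                               # Newton iterations
    F = u - u_old - dt * (D * (L @ u) + u * (1 - u))
    J = np.eye(n) - dt * (D * L + np.diag(1 - 2 * u))
    du = np.linalg.solve(J, -F)
    u += du
    if np.linalg.norm(du) < 1e-12:
        break

residual = np.linalg.norm(u - u_old - dt * (D * (L @ u) + u * (1 - u)))
print(residual)   # Newton should drive the residual near machine precision
```

The paper's contrast is between this kind of Newton-based nonlinear solve and a linearized implicit scheme requiring a stabilized bi-conjugate gradient solver; the dense `np.linalg.solve` here simply stands in for whichever linear solver is used inside Newton's loop.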
Computer modeling of batteries from nonlinear circuit elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waaben, S.; Dyer, C.K.; Federico, J.
1985-06-01
Circuit analogs for a single battery cell have previously been composed of resistors, capacitors, and inductors. This work introduces a nonlinear circuit model for cell behavior. The circuit is configured around the PIN junction diode, whose charge-storage behavior has features similar to those of electrochemical cells. A user-friendly integrated-circuit simulation computer program has reproduced a variety of complex cell responses, including electrical isolation effects causing capacity loss, as well as potentiodynamic peaks and discharge phenomena hitherto thought to be thermodynamic in origin. In this work, however, they are shown to be simply due to the spatial distribution of stored charge within a practical electrode.
Computer investigations of the turbulent flow around a NACA2415 airfoil wind turbine
NASA Astrophysics Data System (ADS)
Driss, Zied; Chelbi, Tarek; Abid, Mohamed Salah
2015-12-01
In this work, computer investigations are carried out to study the flow field developing around a NACA2415 airfoil wind turbine. The Navier-Stokes equations, in conjunction with the standard k-ɛ turbulence model, are solved numerically to determine the local characteristics of the flow. The models tested are implemented in the software "SolidWorks Flow Simulation", which uses a finite volume scheme. The numerical results are validated against experiments conducted in an open wind tunnel. This will help improve the aerodynamic efficiency in the design of packaged installations of the NACA2415 airfoil type wind turbine.
Deblurring for spatial and temporal varying motion with optical computing
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Xue, Dongfeng; Hui, Zhao
2016-05-01
A way to estimate and remove spatially and temporally varying motion blur is proposed, based on an optical computing system. Translation and rotation motion can be independently estimated from the joint transform correlator (JTC) system without iterative optimization. The inspiration comes from the fact that the JTC system is immune to rotation motion in a Cartesian coordinate system. The JTC system is designed to keep switching between the Cartesian coordinate system and the polar coordinate system in different time intervals with a ping-pong handover. In the ping interval, the JTC system works in the Cartesian coordinate system to obtain the translation motion vector at optical computing speed. In the pong interval, the JTC system works in the polar coordinate system; rotation motion is transformed into translation motion through the coordinate transformation, so the rotation motion vector can also be obtained from the JTC instantaneously. To deal with continuous spatially variant motion blur, sub-motion vectors based on the projective motion path blur model are proposed. The sub-motion vector model is more effective and accurate at modeling spatially variant motion blur than conventional methods. Simulation and real experiment results demonstrate its overall effectiveness.
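The translation-estimation step performed optically by the JTC has a familiar digital analog: locating the peak of an FFT-based cross-correlation. The sketch below demonstrates only that principle (rotation would analogously be handled by first resampling to polar coordinates); it is not the optical system itself.

```python
import numpy as np

# Digital stand-in for the joint transform correlator's translation
# estimate: the cross-correlation peak of a reference frame and a shifted
# frame recovers the shift. The optical system computes the equivalent
# correlation at the speed of light; numpy here only illustrates the idea.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=(5, -3), axis=(0, 1))   # known translation

corr = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(shifted)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# Map wrap-around indices back to signed shifts
dy = dy - 64 if dy > 32 else dy
dx = dx - 64 if dx > 32 else dx
print(dy, dx)   # recovered translation: 5 -3
```

Rotation estimation works the same way after a Cartesian-to-polar resampling, since rotation becomes a translation along the angular axis; that is exactly the role of the pong interval in the paper's handover scheme.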
Improved Spectral Calculations for Discrete Schrödinger Operators
NASA Astrophysics Data System (ADS)
Puelz, Charles
This work details an O(n²) algorithm for computing spectra of discrete Schrödinger operators with periodic potentials. Spectra of these objects enhance our understanding of fundamental aperiodic physical systems and contain rich theoretical structure of interest to the mathematical community. Previous work on the Harper model led to an O(n²) algorithm relying on properties not satisfied by other aperiodic operators. Physicists working with the Fibonacci Hamiltonian, a popular quasicrystal model, have instead used a problematic dynamical-map approach or a sluggish O(n³) procedure for their calculations. The algorithm presented in this work, a blend of well-established eigenvalue/eigenvector algorithms, provides researchers with a more robust computational tool of general utility. Application to the Fibonacci Hamiltonian in the sparsely studied intermediate coupling regime reveals structure in canonical coverings of the spectrum that will prove useful in motivating conjectures regarding band combinatorics and fractal dimensions.
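To make the object of study concrete, the sketch below assembles a finite-size discrete Schrödinger operator with a Fibonacci potential and diagonalizes it with a dense eigensolver. This is the O(n³) baseline the thesis improves upon, with illustrative parameters, not the thesis algorithm itself.

```python
import numpy as np

# Sketch: H = discrete Laplacian (off-diagonal 1s) + lambda * Fibonacci
# potential on the diagonal. Dense diagonalization is the slow O(n^3)
# route; the point here is only to show the operator's structure.
def fibonacci_word(n):
    a, b = [0], [0, 1]          # substitution rule builds the Fibonacci word
    while len(b) < n:
        a, b = b, b + a
    return b[:n]

n, lam = 100, 1.0               # illustrative size and coupling
v = lam * np.array(fibonacci_word(n), dtype=float)
H = np.diag(v) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
spectrum = np.linalg.eigvalsh(H)
print(spectrum.min(), spectrum.max())
```

By Gershgorin's theorem the eigenvalues lie in [min(v) - 2, max(v) + 2]; for quasicrystal studies the interesting structure is how these bands fragment as the coupling lambda grows, which is where an O(n²) method pays off.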
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server workflow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and thoroughly tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 on large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding up the throughput of a single model.
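The client-server work-ticket pattern itself is simple to sketch. The version below uses Python threads and queues purely to illustrate the flow; the paper's g4DistributedRunManager is a C++ class with its own network protocol, and the "run" here is a hypothetical doubling stand-in, not a GEANT4 call.

```python
import queue
import threading

# Pattern sketch only: tickets carry per-run parameters (here a
# hypothetical tomography angle); clients pull tickets until they
# receive a poison pill, and push results back to the server.
def client(tickets, results):
    while True:
        ticket = tickets.get()
        if ticket is None:                 # poison pill: shut this client down
            break
        angle = ticket['angle']
        results.put((angle, 2 * angle))    # stand-in for one simulation run

tickets, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=client, args=(tickets, results))
           for _ in range(4)]
for w in workers:
    w.start()
for angle in range(0, 180, 10):            # one work ticket per angle
    tickets.put({'angle': angle})
for _ in workers:                          # one poison pill per client
    tickets.put(None)
for w in workers:
    w.join()
out = dict(results.get() for _ in range(results.qsize()))
print(len(out), 'runs completed')
```

The appeal of the design is that the server never needs to know how long each run takes: idle clients simply pull the next ticket, which is what lets the same code scale from one SMP box to a large cluster.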
Contrasting single and multi-component working-memory systems in dual tasking.
Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels
2016-05-01
Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system. We report a behavioral and an fMRI dataset in which working memory requirements are manipulated during multitasking. We show that a computational cognitive model that assumes a distributed version of working memory accounts for both behavioral and neuroimaging data better than a model that takes a more centralized approach. The model's working memory consists of an attentional focus, declarative memory, and a subvocalized rehearsal mechanism. Thus, the data and model favor an account where working memory interference in dual tasking is the result of interactions between different resources that together form a working-memory system. Copyright © 2016 Elsevier Inc. All rights reserved.
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Computer modeling of photodegradation
NASA Technical Reports Server (NTRS)
Guillet, J.
1986-01-01
A computer program to simulate the photodegradation of materials exposed to terrestrial weathering environments is being developed. Input parameters include the solar spectrum, the daily levels and variations of temperature and relative humidity, and materials such as EVA. The presentation begins with a brief description of the program, its operating principles, and how it works, and then focuses on recent work simulating aging in a normal terrestrial day-night cycle. This is significant, as almost all accelerated aging schemes maintain constant light illumination without a dark cycle, and this may be a critical factor missing from accelerated aging schemes. For outdoor aging, the computer model indicates that the night dark cycle has a dramatic influence on the chemistry of photothermal degradation, and hints that a dark cycle may be needed in an accelerated aging scheme.
Parallel stochastic simulation of macroscopic calcium currents.
González-Vélez, Virginia; González-Vélez, Horacio
2007-06-01
This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
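The core idea of aggregating stochastic unitary currents can be sketched directly. The transition matrix, channel count, and unitary current below are illustrative placeholders, not MACACO's fitted parameters.

```python
import numpy as np

# Sketch of the MACACO idea: many independent channels, each a 3-state
# Markov chain (0 = closed, 1 = open, 2 = inactivated); the macroscopic
# current is the sum of unitary currents over the open channels.
# The per-step transition probabilities below are made up for illustration.
rng = np.random.default_rng(7)
P = np.array([[0.90, 0.10, 0.00],    # from closed
              [0.05, 0.85, 0.10],    # from open
              [0.00, 0.02, 0.98]])   # from inactivated
n_channels, n_steps, i_unit = 1000, 200, -0.1   # unitary current (pA)

states = np.zeros(n_channels, dtype=int)        # all channels start closed
current = np.empty(n_steps)
for t in range(n_steps):
    u = rng.random(n_channels)
    cdf = P[states].cumsum(axis=1)              # per-channel transition CDF
    states = (u[:, None] > cdf).sum(axis=1)     # sample each next state
    current[t] = i_unit * np.count_nonzero(states == 1)

print(current[:5])
```

In the real simulator the transition rates are voltage-dependent, which is what makes the parameter sweep over stimulation protocols large enough to justify the grid-enabled task farm.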
Nonlinear Real-Time Optical Signal Processing
1990-09-01
pattern recognition. Additional work concerns the relationship of parallel computation paradigms to optical computing, and halftone screen techniques for implementing general nonlinear functions. Among the reported papers is "Nonlinear Optical Processing with Halftones: Degradation and Compensation Models".
NASA Astrophysics Data System (ADS)
Ribeiro, Allan; Santos, Helen
With the advent of new information and communication technologies (ICTs), communicative interaction changes the way people are and act, and at the same time changes work activities related to education. Among the possibilities provided by advances in computational resources, virtual reality (VR) and augmented reality (AR) stand out as new forms of information visualization in computer applications. While VR allows user interaction with a totally computer-generated virtual environment, in AR virtual images are inserted into the real environment; both create new opportunities to support teaching and learning in formal and informal contexts. Such technologies can express representations of reality or of the imagination, such as systems at the nanoscale and in low dimensionality, and it is imperative to explore, in the most diverse areas of knowledge, the potential offered by ICTs and emerging technologies. In this sense, this work presents virtual and augmented reality computer applications developed with the use of modeling and simulation, in computational approaches to topics related to nanoscience and nanotechnology, articulated with innovative pedagogical practices.
Techniques for Computation of Frequency Limited H∞ Norm
NASA Astrophysics Data System (ADS)
Haider, Shafiq; Ghafoor, Abdul; Imran, Muhammad; Fahad Mumtaz, Malik
2018-01-01
The traditional H∞ norm gives the peak system gain over an infinite frequency range, but many applications, such as filter design, model order reduction, and controller design, require the peak system gain over a specific frequency interval rather than the infinite range. In the present work, new computationally efficient techniques for computing the H∞ norm over a limited frequency interval are proposed. The proposed techniques link the norm computation to the maximum singular value of the system in the limited frequency interval. Numerical examples are included to validate the proposed concept.
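The quantity itself has a simple brute-force reference: evaluate the largest singular value of the transfer matrix G(jω) = C(jωI - A)⁻¹B + D on a dense grid over the band and take the maximum. The state-space data below are an arbitrary stable example, and the grid approach is only a baseline, not the paper's efficient technique.

```python
import numpy as np

# Brute-force reference for the frequency-limited H-infinity norm:
# peak largest singular value of G(jw) over a band, on a dense grid.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # arbitrary stable system
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def sigma_max(w):
    G = C @ np.linalg.solve(1j * w * np.eye(2) - A, B) + D
    return np.linalg.svd(G, compute_uv=False)[0]

band = np.linspace(0.1, 10.0, 2000)         # frequency interval [0.1, 10]
hinf_band = max(sigma_max(w) for w in band)
print(hinf_band)
```

For this example G(s) = (s + 5)/((s + 1)(s + 3)), whose gain decreases over the band, so the grid maximum sits near ω = 0.1 at about 1.66; grid evaluation scales poorly with model order and grid density, which is the gap the proposed techniques address.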
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public-domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure; Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.
Krajbich, Ian; Rangel, Antonio
2011-08-16
How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
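The flavor of an attention-guided accumulator generalized to three options can be sketched as follows. All parameters (drift scale, attentional discount, noise, bound, fixation schedule) are arbitrary illustrative choices, not the values fitted in this or the earlier binary-choice work.

```python
import random

# Illustrative sketch of a multi-alternative attentional drift-diffusion
# process: each option accumulates noisy evidence, with the currently
# fixated option's value weighted more heavily than unattended ones.
def trinary_addm(values, d=0.002, theta=0.3, noise=0.02, bound=1.0, seed=0):
    rng = random.Random(seed)
    E = [0.0, 0.0, 0.0]
    fixated = rng.randrange(3)
    for t in range(1, 200001):
        if t % 400 == 0:                        # switch gaze periodically
            fixated = rng.randrange(3)
        for i in range(3):
            w = 1.0 if i == fixated else theta  # discount unattended value
            best_other = max(values[j] for j in range(3) if j != i)
            E[i] += d * (w * values[i] - theta * best_other) \
                    + rng.gauss(0, noise)
        winner = max(range(3), key=lambda k: E[k])
        if E[winner] >= bound:
            return winner, t                    # choice and reaction time
    return winner, t                            # step cap (rarely reached)

choice, rt = trinary_addm([5.0, 1.0, 1.0])
print(choice, rt)
```

Running this over many seeds and fixation patterns yields joint distributions of choice, reaction time, and fixations, which is the kind of data-model comparison the paper carries out with eye tracking.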
Syllabus Computer in Astronomy
NASA Astrophysics Data System (ADS)
Hojaev, Alisher S.
2015-08-01
One of the most important and topical subjects in the undergraduate curriculum at the National University of Uzbekistan is 'Computer Methods in Astronomy'. It covers two semesters and includes both lecture and practice classes. Based on long-term experience, we prepared a tutorial for students containing a description of modern computer applications in astronomy. Briefly, the main directions of computer application in astronomy are: 1) automating the process of observation, data acquisition, and processing; 2) creating and storing databases (results of observations, experiments, and theoretical calculations), including their generalization, classification, and cataloguing, and working with large databases; 3) solving theoretical problems (physical and mathematical modelling of astronomical objects and phenomena, derivation of model parameters, solution of the corresponding equations, numerical simulations) and creating the appropriate software; 4) use in the educational process (e-textbooks, presentations, virtual labs, remote education, testing), in amateur astronomy, and in popularization of science; 5) use as a means of communication and data transfer, for presenting and disseminating research results (web journals), and for creating virtual information systems (local and global computer networks). During the classes, special attention is given to practical training and to the individual, independent work of students.
Computer Based Simulation of Laboratory Experiments.
ERIC Educational Resources Information Center
Edward, Norrie S.
1997-01-01
Examines computer based simulations of practical laboratory experiments in engineering. Discusses the aims and achievements of lab work (cognitive, process, psychomotor, and affective); types of simulations (model building and behavioral); and the strengths and weaknesses of simulations. Describes the development of a centrifugal pump simulation,…
NASA Astrophysics Data System (ADS)
Boyle, Peter A.; Garron, Nicolas; Hudspith, Renwick J.; Lehner, Christoph; Lytle, Andrew T.
2017-10-01
We compute the renormalisation factors (Z-matrices) of the ΔF = 2 four-quark operators needed for Beyond the Standard Model (BSM) kaon mixing. We work with nf = 2+1 flavours of Domain-Wall fermions, whose chiral-flavour properties are essential to maintain a continuum-like mixing pattern. We introduce new RI-SMOM renormalisation schemes, which we argue are better behaved than the commonly used RI-MOM scheme. We find that, once converted to MS-bar, the Z-factors computed through these RI-SMOM schemes are in good agreement with each other but differ significantly from those computed through the RI-MOM scheme. The RI-SMOM Z-factors presented here have been used to compute the BSM neutral kaon mixing matrix elements in the companion paper [1]. We argue that the renormalisation procedure is responsible for the discrepancies observed by different collaborations, and we investigate and elucidate the origin of these differences throughout this work.
NASA Astrophysics Data System (ADS)
Neves, Rui Gomes; Teodoro, Vítor Duarte
2012-09-01
A teaching approach aiming at an epistemologically balanced integration of computational modelling in science and mathematics education is presented. The approach is based on interactive engagement learning activities built around computational modelling experiments that span the range of different kinds of modelling from explorative to expressive modelling. The activities are designed to make a progressive introduction to scientific computation without requiring prior development of a working knowledge of programming, generate and foster the resolution of cognitive conflicts in the understanding of scientific and mathematical concepts and promote performative competency in the manipulation of different and complementary representations of mathematical models. The activities are supported by interactive PDF documents which explain the fundamental concepts, methods and reasoning processes using text, images and embedded movies, and include free space for multimedia enriched student modelling reports and teacher feedback. To illustrate, an example from physics implemented in the Modellus environment and tested in undergraduate university general physics and biophysics courses is discussed.
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy that ensures low computational cost and small error in dealing with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while maintaining the quality of the model, without the use of Proper Orthogonal Decomposition (POD).
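For context, the standard POD step that the PSS method aims to avoid can be sketched in a few lines: stack solution snapshots, take an SVD, and keep the leading left singular vectors as a Galerkin reduction basis. The snapshot data below (a travelling Gaussian pulse) are synthetic, not from any real simulation.

```python
import numpy as np

# Baseline POD sketch: build a reduced basis from snapshots via SVD,
# then project a full-order state onto it and back to check accuracy.
x = np.linspace(0, 1, 200)
snapshots = np.column_stack([np.exp(-100 * (x - 0.1 - 0.02 * k) ** 2)
                             for k in range(30)])   # 200 dofs x 30 snapshots

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1        # modes for 99.99% energy
basis = U[:, :r]                                    # reduced basis (200 x r)

# Galerkin projection of one full-order vector into the reduced space
u_full = snapshots[:, 15]
u_approx = basis @ (basis.T @ u_full)
err = np.linalg.norm(u_full - u_approx) / np.linalg.norm(u_full)
print(r, err)
```

The cost issue motivating the paper is visible even here: the SVD must be recomputed whenever the dynamics leave the span of the old snapshots, which is exactly when an online snapshot-selection strategy earns its keep.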
Exact posterior computation in non-conjugate Gaussian location-scale parameters models
NASA Astrophysics Data System (ADS)
Andrade, J. A. A.; Rathie, P. N.
2017-12-01
In Bayesian analysis the class of conjugate models allows one to obtain exact posterior distributions; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods such as MCMC algorithms are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time-demanding: for example, when heavy-tailed distributions are used, convergence may be difficult, the Metropolis-Hastings algorithm can become very slow, and extra work is inevitably required to choose efficient candidate-generating distributions. In this work, we draw attention to special functions as tools for Bayesian computation and propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function to obtain the posterior distribution and some of its posterior quantities in an explicitly computable form. Two examples are provided to illustrate the theory.
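For contrast with the exact H-function route, a minimal sketch of the conventional MCMC approach the abstract describes as time-demanding: random-walk Metropolis for a non-conjugate location model with a Gaussian likelihood and a heavy-tailed Student-t prior. The synthetic data, proposal scale, and chain length are all made-up illustrations, not the paper's examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)  # synthetic observations

# Non-conjugate model: Gaussian likelihood, Student-t (df=2) prior on the location mu.
def log_post(mu):
    return stats.norm.logpdf(data, loc=mu, scale=1.0).sum() + stats.t.logpdf(mu, df=2)

# Random-walk Metropolis
mu, chain = 0.0, []
lp = log_post(mu)
for _ in range(5000):
    prop = mu + rng.normal(scale=0.5)          # candidate-generating distribution
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        mu, lp = prop, lp_prop
    chain.append(mu)

post_mean = np.mean(chain[1000:])  # discard burn-in
print(post_mean)
```

Tuning the proposal scale and judging burn-in are exactly the kinds of extra work the exact approach avoids.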
Feasibility of MHD submarine propulsion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doss, E.D.; Sikes, W.C.
1992-09-01
This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program, in which a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
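As an illustration of a "frugal" local method needing only one model run per parameter perturbation, a sketch of composite scaled sensitivities computed by finite differences. The two-parameter toy model and its parameter names are invented for illustration and are not from the paper.

```python
import numpy as np

# Toy process model: simulated response h(params) at three observation points.
def model(p):
    k, r = p   # illustrative parameters, e.g. a conductivity and a forcing rate
    xs = np.array([0.25, 0.5, 0.75])
    return r / k * xs * (1.0 - xs)      # simple analytic response curve

p0 = np.array([2.0, 1.0])

# Frugal local analysis: one extra model run per parameter.
def scaled_sensitivities(p, rel_step=1e-6):
    base = model(p)
    J = np.empty((base.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += rel_step * p[j]
        # dimensionless (scaled) sensitivity: (dh/dp_j) * p_j
        J[:, j] = (model(dp) - base) / (rel_step * p[j]) * p[j]
    return J

# Composite scaled sensitivity per parameter: RMS over observations.
css = np.sqrt((scaled_sensitivities(p0) ** 2).mean(axis=0))
print(css)
```

With tens of parameters this still costs only tens of runs, in contrast to the 10,000+ runs of global sampling-based analyses.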
Next Generation Transport Phenomenology Model
NASA Technical Reports Server (NTRS)
Strickland, Douglas J.; Knight, Harold; Evans, J. Scott
2004-01-01
This report describes the progress made in Quarter 3 of Contract Year 3 on the development of Aeronomy Phenomenology Modeling Tool (APMT), an open-source, component-based, client-server architecture for distributed modeling, analysis, and simulation activities focused on electron and photon transport for general atmospheres. In the past quarter, column emission rate computations were implemented in Java, preexisting Fortran programs for computing synthetic spectra were embedded into APMT through Java wrappers, and work began on a web-based user interface for setting input parameters and running the photoelectron and auroral electron transport models.
Procedures for Geometric Data Reduction in Solid Log Modelling
Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt
1995-01-01
One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.
Reducing usage of the computational resources by event driven approach to model predictive control
NASA Astrophysics Data System (ADS)
Misik, Stefan; Bradac, Zdenek; Cela, Arben
2017-08-01
This paper deals with real-time optimal control of dynamic systems while also considering the constraints to which these systems may be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
Polarization Considerations for the Laser Interferometer Space Antenna
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Pedersen, Tracy R.; McNamara, Paul
2005-01-01
A polarization ray trace model of the Laser Interferometer Space Antenna's (LISA) optical path is being created. The model will be able to assess the effects of various polarizing elements and the optical coatings on the required, very long path length, picometer-level dynamic interferometry. The computational steps are described. This should eliminate any ambiguities associated with polarization ray tracing of interferometers, provide a basis for determining the computer model's limitations, and serve as a clearly defined starting point for future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuo, Rui; Wu, C. F. Jeff
Many computer models contain unknown parameters which need to be estimated using physical observations. Calibration based on Gaussian process models, however, may lead to unreasonable estimates for imperfect computer models. In this work, we extend that line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
Fundamentals of Modeling, Data Assimilation, and High-performance Computing
NASA Technical Reports Server (NTRS)
Rood, Richard B.
2005-01-01
This lecture will introduce the concepts of modeling, data assimilation and high-performance computing as they relate to the study of atmospheric composition. The lecture will work from basic definitions and will strive to provide a framework for thinking about the development and application of models and data assimilation systems. It will not provide technical or algorithmic information, leaving that to textbooks, technical reports, and ultimately scientific journals. References to a number of textbooks and papers will be provided as a gateway to the literature.
Using computer software to improve group decision-making.
Mockler, R J; Dologite, D G
1991-08-01
This article provides a review of some of the work done in the area of knowledge-based systems for strategic planning. Since 1985, with the founding of the Center for Knowledge-based Systems for Business Management, the project has focused on developing knowledge-based systems (KBS) based on these models. In addition, the project involves developing a variety of computer and non-computer methods and techniques for assisting both technical and non-technical managers and individuals in decision modelling and KBS development. This paper presents a summary of one segment of the project: a description of integrative groupware useful in strategic planning. The work described here is part of an ongoing research project in which, for example, over 200 non-technical and technical business managers, most of them working full-time during the project, developed over 160 KBS prototype systems in conjunction with an MBA course in strategic planning and management decision making. Based on replies to a survey of this test group, 28 per cent of the survey respondents reported their KBS were used at work, 21 per cent reportedly received promotions, pay rises or new jobs based on their KBS development work, and 12 per cent reported their work led to participation in other KBS development projects at work. All but two of the survey respondents reported that their work on the KBS development project led to a substantial increase in their job knowledge or performance.
Quantum lattice model solver HΦ
NASA Astrophysics Data System (ADS)
Kawamura, Mitsuaki; Yoshimi, Kazuyoshi; Misawa, Takahiro; Yamaji, Youhei; Todo, Synge; Kawashima, Naoki
2017-08-01
HΦ [aitch-phi] is a program package based on the Lanczos-type eigenvalue solution applicable to a broad range of quantum lattice models, i.e., arbitrary quantum lattice models with two-body interactions, including the Heisenberg model, the Kitaev model, the Hubbard model and the Kondo-lattice model. While it works well on PCs and PC clusters, HΦ also runs efficiently on massively parallel computers, which considerably extends the tractable range of system sizes. In addition, unlike most existing packages, HΦ supports finite-temperature calculations through the method of thermal pure quantum (TPQ) states. In this paper, we explain the theoretical background and user interface of HΦ. We also show benchmark results of HΦ on supercomputers such as the K computer at RIKEN Advanced Institute for Computational Science (AICS) and SGI ICE XA (Sekirei) at the Institute for Solid State Physics (ISSP).
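HΦ itself is a full package; as a toy-scale illustration of the Lanczos-type approach it is built on, one can assemble a small spin-1/2 Heisenberg chain as a sparse matrix and obtain its ground-state energy with SciPy's Lanczos-based `eigsh`. The chain length and open boundary conditions below are chosen purely for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh

# Single-site spin-1/2 operators.
sx = csr_matrix(np.array([[0, 0.5], [0.5, 0]], dtype=complex))
sy = csr_matrix(np.array([[0, -0.5j], [0.5j, 0]]))
sz = csr_matrix(np.array([[0.5, 0], [0, -0.5]], dtype=complex))

def heisenberg_open_chain(L):
    """Nearest-neighbour S.S Hamiltonian on an open chain of L spins."""
    dim = 2 ** L
    H = csr_matrix((dim, dim), dtype=complex)
    for i in range(L - 1):
        for op in (sx, sy, sz):
            left = identity(2 ** i, dtype=complex, format="csr")
            right = identity(2 ** (L - i - 2), dtype=complex, format="csr")
            H = H + kron(kron(left, kron(op, op)), right, format="csr")
    return H

H = heisenberg_open_chain(8)   # 256 x 256 sparse Hamiltonian
e0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
print(e0)                      # ground-state energy of the 8-site open chain
```

Packages like HΦ apply the same Lanczos idea to vastly larger Hilbert spaces by never forming the matrix explicitly and by parallelizing the matrix-vector products.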
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special grid generation strategies to model control surface deflections and material mapping are also addressed.
Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.
Lee, Seungcheol Austin; Liang, Yuhua Jake
2015-04-01
Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understanding the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with the computer's request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.
Carpal tunnel syndrome and computer exposure at work in two large complementary cohorts.
Mediouni, Z; Bodin, J; Dale, A M; Herquelot, E; Carton, M; Leclerc, A; Fouquet, N; Dumontier, C; Roquelaure, Y; Evanoff, B A; Descatha, A
2015-09-09
The boom in computer use and concurrent high rates of musculoskeletal complaints and carpal tunnel syndrome (CTS) among users have led to a controversy about a possible link. Most studies have used cross-sectional designs and shown no association. The present study used longitudinal data from two large complementary cohorts to evaluate a possible relationship between CTS and the performance of computer work. The Cosali cohort is a representative sample of a French working population that evaluated CTS using standardised clinical examinations and assessed self-reported computer use. The PrediCTS cohort study enrolled newly hired clerical, service and construction workers in several industries in the USA, evaluated CTS using symptoms and nerve conduction studies (NCS), and estimated exposures to computer work using a job exposure matrix. During a follow-up of 3-5 years, the association between new cases of CTS and computer work was calculated using logistic regression models adjusting for sex, age, obesity and relevant associated disorders. In the Cosali study, 1551 workers (41.8%) completed follow-up physical examinations; 36 (2.3%) participants were diagnosed with CTS. In the PrediCTS study, 711 workers (64.2%) completed follow-up evaluations; 31 (4.3%) had new cases of CTS. The adjusted OR for the group with the highest exposure to computer use was 0.39 (0.17; 0.89) in the Cosali cohort and 0.16 (0.05; 0.59) in the PrediCTS cohort. Data from two large cohorts in two different countries showed no association between computer work and new cases of CTS among workers in diverse jobs with varying job exposures. CTS is far more common among workers in non-computer related jobs; prevention efforts and work-related compensation programmes should focus on workers performing forceful hand exertion. Published by the BMJ Publishing Group Limited.
Computational and fMRI Studies of Visualization
2009-03-31
spatial thinking in high level cognition, such as in problem-solving and reasoning. In conjunction with the experimental work, the project developed a...computational modeling system (4CAPS) as well as the development of 4CAPS models for particular tasks. The cognitive level of 4CAPS accounts for...neuroarchitecture to interpret and predict the brain activation in a network of cortical areas that underpin the performance of a visual thinking task. The
Spin-neurons: A possible path to energy-efficient neuromorphic computers
NASA Astrophysics Data System (ADS)
Sharad, Mrigank; Fan, Deliang; Roy, Kaushik
2013-12-01
Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices to bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and "thresholding" operation of an artificial neuron with high energy efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that "spin-neurons" (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. Spin-neurons can therefore be an attractive option for neuromorphic computers of the future.
Khan, Asaduzzaman; Western, Mark
The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. The study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief in the usefulness of computers was positively associated with effective computer use. Being a female GP or working in a partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.
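A hedged sketch of the ZIP model family used in the study, on simulated data rather than the survey: the mixture places probability pi on a structural zero and 1 - pi on an ordinary Poisson count, and both parameters are fit here by direct maximum likelihood. The true parameter values and sample size are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(1)

# Simulate zero-inflated Poisson counts (pi_true, lam_true are made up).
pi_true, lam_true, n = 0.3, 2.5, 2000
structural = rng.uniform(size=n) < pi_true
y = np.where(structural, 0, rng.poisson(lam_true, size=n))

def nll(theta):
    """Negative ZIP log-likelihood; theta = (logit(pi), log(lam))."""
    pi, lam = expit(theta[0]), np.exp(theta[1])
    logp_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),   # zero from either component
                  np.log(1 - pi) + logp_pois)             # positive count: Poisson part
    return -ll.sum()

res = minimize(nll, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
print(pi_hat, lam_hat)
```

In the study's setting, both the zero-inflation and the count component would additionally be regressed on GP covariates; the sketch keeps only the intercept-only core of the model.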
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
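EQI and EQIE extend the classical expected improvement criterion mentioned at the end. As a baseline sketch, plain EI for minimization given a Gaussian process posterior; the candidate means, standard deviations, and current best value are made-up numbers, not from the article.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, best):
    """EI(x) = E[max(best - Y(x), 0)] for Y(x) ~ N(mu, sd^2) (minimization)."""
    sd = np.maximum(sd, 1e-12)                 # guard against zero posterior sd
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

mu = np.array([1.0, 0.8, 1.5])   # GP posterior means at three candidate inputs
sd = np.array([0.1, 0.3, 0.5])   # GP posterior standard deviations
best = 0.9                       # best objective value observed so far

ei = expected_improvement(mu, sd, best)
next_point = int(np.argmax(ei))  # candidate to run next
print(ei, next_point)
```

The multi-fidelity criteria in the article additionally score which accuracy level to run, trading off the information gained against the cost of a high-accuracy simulation.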
Computer-Aided Communication Satellite System Analysis and Optimization.
ERIC Educational Resources Information Center
Stagl, Thomas W.; And Others
Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…
ERIC Educational Resources Information Center
Wheeler, David L.
1988-01-01
Scientists feel that progress in artificial intelligence and the availability of thousands of experimental results make this the right time to build and test theories on how people think and learn, using the computer to model minds. (MSE)
Modeling Spanish Mood Choice in Belief Statements
ERIC Educational Resources Information Center
Robinson, Jason R.
2013-01-01
This work develops a computational methodology new to linguistics that empirically evaluates competing linguistic theories on Spanish verbal mood choice through the use of computational techniques to learn mood and other hidden linguistic features from Spanish belief statements found in corpora. The machine learned probabilistic linguistic models…
Forms of the Materials Shared between a Teacher and a Pupil
ERIC Educational Resources Information Center
Klubal, Libor; Kostolányová, Katerina
2016-01-01
Methods of using ICT are changing. We are moving from the original model of working on a single computer to a model based on cloud services and mobile touch-screen devices. The way information is searched for and exchanged between a pupil and a teacher is closely related to this shift. This work identifies common and preferred procedures of…
ERIC Educational Resources Information Center
Rodriguez, Luis J.; Torres, M. Ines
2006-01-01
Previous works in English have revealed that disfluencies follow regular patterns and that incorporating them into the language model of a speech recognizer leads to lower perplexities and sometimes to a better performance. Although work on disfluency modeling has been applied outside the English community (e.g., in Japanese), as far as we know…
NASA Astrophysics Data System (ADS)
Zakharova, Natalia; Piskovatsky, Nicolay; Gusev, Anatoly
2014-05-01
Development of informational-computational systems (ICS) for data assimilation procedures is a multidisciplinary problem. Studying and solving these problems requires modern results and recent developments from several disciplines: mathematical modeling, the theory of adjoint equations and optimal control, inverse problems, numerical methods, numerical algebra and scientific computing. These problems are studied at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INM RAS) in ICS for personal computers. In this work the development of the special database for the ICS "INM RAS - Black Sea" is presented. The input information for the ICS is discussed and some special data-processing procedures are described. Results of forecasts using the ICS "INM RAS - Black Sea" with operational assimilation of observation data are also presented. This study was supported by the Russian Foundation for Basic Research (project No. 13-01-00753) and by the Presidium Program of the Russian Academy of Sciences (project P-23 "Black Sea as an imitational ocean model"). References: 1. V.I. Agoshkov, M.V. Assovskii, S.A. Lebedev. Numerical simulation of Black Sea hydrothermodynamics taking into account tide-forming forces. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No. 1, pp. 5-31. 2. E.I. Parmuzin, V.I. Agoshkov. Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No. 1, pp. 69-94. 3. V.B. Zalesny, N.A. Diansky, V.V. Fomin, S.N. Moshonkin, S.G. Demyshev. Numerical model of the circulation of Black Sea and Sea of Azov. Russ. J. Numer. Anal. Math. Modelling (2012) 27, No. 1, pp. 95-111. 4. Agoshkov V.I., Assovsky M.B., Giniatulin S.V., Zakharova N.B., Kuimov G.V., Parmuzin E.I., Fomin V.V. Informational computational system of variational assimilation of observation data "INM RAS - Black Sea" // Ecological safety of coastal and shelf zones and complex use of shelf resources: Collection of scientific works. Issue 26, Volume 2. National Academy of Sciences of Ukraine, Marine Hydrophysical Institute, Sevastopol, 2012, pp. 352-360. (In Russian)
Hybrid RANS-LES using high order numerical methods
NASA Astrophysics Data System (ADS)
Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael
2017-11-01
Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS-LES models are an attractive framework for simulating separated flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
NASA Technical Reports Server (NTRS)
Bradley, D. B.; Irwin, J. D.
1974-01-01
A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input workload of a set of recurrent jobs, and includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and for specifying execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.
The visual system’s internal model of the world
Lee, Tai Sing
2015-01-01
The Bayesian paradigm has provided a useful conceptual theory for understanding perceptual computation in the brain. While the detailed neural mechanisms of Bayesian inference are not fully understood, recent computational and neurophysiological work has illuminated the underlying computational principles and representational architecture. The fundamental insights are that the visual system is organized as a modular hierarchy to encode an internal model of the world, and that perception is realized by statistical inference based on such an internal model. In this paper, I discuss and analyze the varieties of representational schemes of these internal models and how they might be used to perform learning and inference. I argue for a unified theoretical framework relating the internal models to the observed neural phenomena and mechanisms in the visual cortex. PMID:26566294
Study of the stability of a SEIRS model for computer worm propagation
NASA Astrophysics Data System (ADS)
Hernández Guillén, J. D.; Martín del Rey, A.; Hernández Encinas, L.
2017-08-01
Nowadays, malware is the most important threat to information security. Accordingly, several mathematical models to simulate malware spreading have appeared. They are compartmental models in which the population of devices is classified into different compartments: susceptible, exposed, infectious, recovered, etc. The main goal of this work is to propose an improved SEIRS (Susceptible-Exposed-Infectious-Recovered-Susceptible) mathematical model to simulate computer worm propagation. It is a continuous model whose dynamics are governed by a system of ordinary differential equations. It considers more realistic parameters related to the propagation; in fact, a modified incidence rate has been used. Moreover, the equilibrium points are computed and their local and global stability is analyzed. From the explicit expression of the basic reproductive number, efficient control measures are also obtained.
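For orientation, a standard SEIRS system can be integrated with a simple Euler scheme. This sketch uses the ordinary standard incidence term, not the paper's modified incidence rate, and all parameter values are illustrative assumptions:

```python
def seirs_step(state, beta, sigma, gamma, xi, dt):
    """Advance S, E, I, R one Euler step of a standard SEIRS system."""
    S, E, I, R = state
    N = S + E + I + R
    dS = -beta * S * I / N + xi * R      # loss of immunity: R -> S
    dE = beta * S * I / N - sigma * E    # incubation: E -> I at rate sigma
    dI = sigma * E - gamma * I           # recovery: I -> R at rate gamma
    dR = gamma * I - xi * R
    return (S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt)

def simulate(state, beta, sigma, gamma, xi, dt=0.01, steps=10000):
    for _ in range(steps):
        state = seirs_step(state, beta, sigma, gamma, xi, dt)
    return state

def r0(beta, gamma):
    """Basic reproductive number for this standard formulation."""
    return beta / gamma
```

With `r0 > 1` a worm outbreak grows toward an endemic equilibrium; driving `r0` below 1 (e.g. by patching, which raises the recovery rate gamma) is the kind of control measure the basic reproductive number makes explicit.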
Analytical approximation of the InGaZnO thin-film transistors surface potential
NASA Astrophysics Data System (ADS)
Colalongo, Luigi
2016-10-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of thin-film transistors, and hence of indium gallium zinc oxide (InGaZnO) TFTs, available today. However, the need for iterative computation of the surface potential limits their computational efficiency and their adoption in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough, in particular for modeling transconductances and transcapacitances. In this work we present an extremely accurate (in the range of nV) and computationally efficient non-iterative approximation of the surface potential that can serve as a basis for advanced surface-potential-based InGaZnO TFT models.
An algebraic interpretation of PSP composition.
Vaucher, G
1998-01-01
The introduction of time into artificial neurons is a delicate problem on which many groups are working. Our approach combines some properties of biological models with the algebraic properties of the McCulloch-Pitts artificial neuron (AN) (McCulloch and Pitts, 1943) to produce a new model that links both characteristics. In this extended artificial neuron, postsynaptic potentials (PSPs) are considered as numerical elements, having two degrees of freedom, on which the neuron computes operations. Modelled in this manner, a group of neurons can be seen as a computer with an asynchronous architecture. To formalize the functioning of this computer, we propose an algebra of impulses. This approach might also be interesting for modelling the passive electrical properties of some biological neurons.
A toolbox and a record for scientific model development
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two-pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model-construction operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two-pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well-defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models, and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, an existing knowledge-based scientific software design tool.
Computational Modeling of Multi-Scale Material Features in Cement Paste - An Overview
2015-05-25
…and concrete, though commonly used, are among the most complex materials in terms of morphology and structure, for example… across the multiple scales are required. In this paper, recent work from our research group on the nano- to continuum-level modeling of cementitious… of our research work consisting of: • Molecular Dynamics (MD) modeling for the nano-scale features of the cementitious material chemistry. • Micro…
Jennifer.Vanrij@nrel.gov | 303-384-7180. Jennifer's expertise is in developing computational modeling methods… collaboratively developing numerical modeling methods to simulate the hydrodynamic, structural-dynamic, power-elastic interactions. Her other diverse work experiences include developing numerical modeling methods for…
…from Colorado School of Mines. His research interests include optical modeling, computational fluid dynamics, and heat transfer. His work involves optical performance modeling of concentrating solar power… experience includes developing thermal and optical models of CSP components at Norwich Solar Technologies…
Computational model for vocal tract dynamics in a suboscine bird.
Assaneo, M F; Trevisan, M A
2010-09-01
In a recent work, active use of the vocal tract has been reported for singing oscines. The reconfiguration of the vocal tract during song serves to match its resonances to the syringeal fundamental frequency, demonstrating a precise coordination of the two main pieces of the avian vocal system for songbirds characterized by tonal songs. In this work we investigated the Great Kiskadee (Pitangus sulphuratus), a suboscine bird whose calls display a rich harmonic content. Using a recently developed mathematical model for the syrinx and a mobile vocal tract, we set up a computational model that provides a plausible reconstruction of the vocal tract movement using a few spectral features taken from the utterances. Moreover, synthetic calls were generated using the articulated vocal tract, which accounts for all the acoustical features observed experimentally.
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Lee-Rausch, E. M.
2012-01-01
Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.
Giese, Martin A; Rizzolatti, Giacomo
2015-10-07
Action recognition has received enormous interest in the field of neuroscience over the last two decades. In spite of this interest, the knowledge in terms of fundamental neural mechanisms that provide constraints for underlying computations remains rather limited. This fact stands in contrast with a wide variety of speculative theories about how action recognition might work. This review focuses on new fundamental electrophysiological results in monkeys, which provide constraints for the detailed underlying computations. In addition, we review models for action recognition and processing that have concrete mathematical implementations, as opposed to conceptual models. We think that only such implemented models can be meaningfully linked quantitatively to physiological data and have a potential to narrow down the many possible computational explanations for action recognition. In addition, only concrete implementations allow judging whether postulated computational concepts have a feasible implementation in terms of realistic neural circuits. Copyright © 2015 Elsevier Inc. All rights reserved.
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
Background: To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse-engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results: This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with the desired behaviors and that computation time can be greatly reduced. Conclusions: Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to a cloud computing environment to speed up the computation.
By coupling the parallel-model population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks. PMID:24428926
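A hybrid GA-PSO optimizer of the kind described can be sketched on a toy objective. The operators, constants, and the sphere function standing in for a gene-network parameter-fit objective are all illustrative assumptions, not the authors' implementation, and the MapReduce parallelization is omitted:

```python
import random

def hybrid_ga_pso(fitness, dim, pop=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Toy hybrid GA-PSO sketch: standard PSO velocity/position updates,
    plus a GA-style uniform crossover with the global best every 10
    iterations (illustrative operators only)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    vs = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in xs]
    gbest = min(xs, key=fitness)[:]
    for t in range(iters):
        for i in range(pop):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if fitness(xs[i]) < fitness(pbest[i]):
                pbest[i] = xs[i][:]
            if fitness(xs[i]) < fitness(gbest):
                gbest = xs[i][:]
        if t % 10 == 0:  # GA step: uniform crossover with the global best
            for i in range(pop):
                xs[i] = [xs[i][d] if rng.random() < 0.5 else gbest[d]
                         for d in range(dim)]
    return gbest

# A sphere function stands in for a gene-network parameter-fit objective.
best = hybrid_ga_pso(lambda x: sum(v * v for v in x), dim=3)
```

In the MapReduce setting described above, the fitness evaluations of the population would be farmed out to mappers, with a reducer collecting the global best, which is what makes the population-based structure parallelize naturally.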
Computer Programs For Automated Welding System
NASA Technical Reports Server (NTRS)
Agapakis, John E.
1993-01-01
Computer programs developed for use in controlling automated welding system described in MFS-28578. Together with control computer, computer input and output devices and control sensors and actuators, provide flexible capability for planning and implementation of schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing amount of "teaching" of system by technician. System provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.
Supersonic reacting internal flow fields
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
1989-01-01
The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.
Cholinergic modulation of cognitive processing: insights drawn from computational models
Newman, Ehren L.; Gupta, Kishan; Climer, Jason R.; Monaghan, Caitlin K.; Hasselmo, Michael E.
2012-01-01
Acetylcholine plays an important role in cognitive function, as shown by pharmacological manipulations that impact working memory, attention, episodic memory, and spatial memory function. Acetylcholine also shows striking modulatory influences on the cellular physiology of hippocampal and cortical neurons. Modeling of neural circuits provides a framework for understanding how the cognitive functions may arise from the influence of acetylcholine on neural and network dynamics. We review the influences of cholinergic manipulations on behavioral performance in working memory, attention, episodic memory, and spatial memory tasks, the physiological effects of acetylcholine on neural and circuit dynamics, and the computational models that provide insight into the functional relationships between the physiology and behavior. Specifically, we discuss the important role of acetylcholine in governing mechanisms of active maintenance in working memory tasks and in regulating network dynamics important for effective processing of stimuli in attention and episodic memory tasks. We also propose that theta rhythm plays a crucial role as an intermediary between the physiological influences of acetylcholine and behavior in episodic and spatial memory tasks. We conclude with a synthesis of the existing modeling work and highlight future directions that are likely to be rewarding given the existing state of the literature for both empiricists and modelers. PMID:22707936
Advanced Simulation and Computing Fiscal Year 2016 Implementation Plan, Version 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, M.; Archer, B.; Hendrickson, B.
2015-08-27
The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The purpose of this IP is to outline key work requirements to be performed and to control individual work activities within the scope of work. Contractors may not deviate from this plan without a revised WA or subsequent IP.
Computational principles of working memory in sentence comprehension.
Lewis, Richard L; Vasishth, Shravan; Van Dyke, Julie A
2006-10-01
Understanding a sentence requires a working memory of the partial products of comprehension, so that linguistic relations between temporally distal parts of the sentence can be rapidly computed. We describe an emerging theoretical framework for this working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item (but not order) information subject to interference from similar items, and activation decay (forgetting over time). A computational model embodying these principles provides an explanation of the functional capacities and severe limitations of human processing, as well as accounts of reading times. The broad implication is that the detailed nature of cross-linguistic sentence processing emerges from the interaction of general principles of human memory with the specialized task of language comprehension.
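The decay and similarity-based interference principles can be given a minimal numerical form. The equations below follow the common ACT-R-style base-level learning and soft-competition scheme, as an illustrative approximation rather than the paper's exact model; all parameter values are assumptions:

```python
import math

def base_activation(retrieval_time, presentations, d=0.5):
    """Base-level activation: each past presentation at time t contributes
    (retrieval_time - t)^(-d), so activation decays as time passes."""
    return math.log(sum((retrieval_time - t) ** (-d) for t in presentations))

def retrieval_prob(target_act, distractor_acts, s=0.4):
    """Soft competition among the target and similar items: more (or more
    active) distractors lower the chance of retrieving the target."""
    acts = [target_act] + distractor_acts
    weights = [math.exp(a / s) for a in acts]
    return weights[0] / sum(weights)

recent = base_activation(5.0, presentations=[1.0])
older = base_activation(50.0, presentations=[1.0])  # lower: decay over time
```

Under this sketch, a long-distance dependency is harder to resolve both because the distal item's activation has decayed and because intervening similar items compete at retrieval, which is the interference pattern the framework is built around.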
EVALUATION OF PHYSIOLOGY COMPUTER MODELS, AND THE FEASIBILITY OF THEIR USE IN RISK ASSESSMENT.
This project will evaluate the current state of quantitative models that simulate physiological processes, and how these models might be used in conjunction with the current use of PBPK and BBDR models in risk assessment. The work will include a literature search to identify...
A Moderate Constructivist E-Learning Instructional Model Evaluated on Computer Specialists
ERIC Educational Resources Information Center
Alonso, Fernando; Manrique, Daniel; Vines, Jose M.
2009-01-01
This paper presents a novel instructional model for e-learning and an evaluation study to determine the effectiveness of this model for teaching Java language programming to information technology specialists working for the Spanish Public Administration. This is a general-purpose model that combines objectivist and constructivist learning…
Spatial Variation of Pressure in the Lyophilization Product Chamber Part 1: Computational Modeling.
Ganguly, Arnab; Varma, Nikhil; Sane, Pooja; Bogner, Robin; Pikal, Michael; Alexeenko, Alina
2017-04-01
The flow physics in the product chamber of a freeze dryer involves coupled heat and mass transfer at different length and time scales. The low-pressure environment and the relatively small flow velocities make it difficult to quantify the flow structure experimentally. The current work presents three-dimensional computational fluid dynamics (CFD) modeling of vapor flow in a laboratory-scale freeze dryer, validated with experimental data and theory. The model accounts for the presence of a non-condensable gas such as nitrogen or air using a continuum multi-species model. The flow structure at different sublimation rates, chamber pressures, and shelf-gaps is systematically investigated. Emphasis has been placed on accurately predicting the pressure variation across the subliming front. At a chamber set pressure of 115 mtorr and a sublimation rate of 1.3 kg/h/m², the pressure variation reaches about 9 mtorr. The pressure variation increased linearly with sublimation rate in the range of 0.5 to 1.3 kg/h/m². The dependence of pressure variation on the shelf-gap was also studied both computationally and experimentally. The CFD modeling results are found to agree within 10% with the experimental measurements. The computational model was also compared to an analytical solution valid for small shelf-gaps. Thus, the current work presents a validation study motivating broader use of CFD in optimizing freeze-drying process and equipment design.
Support System Effects on the NASA Common Research Model
NASA Technical Reports Server (NTRS)
Rivers, S. Melissa B.; Hunter, Craig A.
2012-01-01
An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-Foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations was collected, a large difference in moment values was seen between the experimental and the computational data from the 4th Drag Prediction Workshop. This difference led to the present work. In this study, a computational assessment has been undertaken to investigate model support system interference effects on the Common Research Model. The configurations computed during this investigation were the wing/body/tail=0deg without the support system and the wing/body/tail=0deg with the support system. The results from this investigation confirm that the addition of the support system to the computational cases does shift the pitching moment in the direction of the experimental results.
Task allocation model for minimization of completion time in distributed computer systems
NASA Astrophysics Data System (ADS)
Wang, Jai-Ping; Steidley, Carl W.
1993-08-01
A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
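The completion-time computation for a given module-to-processor assignment can be sketched as below. The cost model (processor load equals execution time plus cross-processor communication, with precedence relations omitted) is a simplified assumption for illustration, not the paper's full model:

```python
def completion_time(assign, exec_cost, comm_cost):
    """Completion time of one task assignment: each processor's load is
    its modules' execution times plus the cost of communicating with
    modules placed on other processors; the task finishes when the most
    loaded processor does. (Precedence relations are omitted here.)"""
    load = {p: 0.0 for p in set(assign.values())}
    for m, p in assign.items():
        load[p] += exec_cost[m][p]
    for (m1, m2), c in comm_cost.items():
        if assign[m1] != assign[m2]:     # inter-processor communication
            load[assign[m1]] += c
            load[assign[m2]] += c
    return max(load.values())

# Hypothetical two-module example: costs depend on where a module runs.
exec_cost = {"a": {"p1": 2.0, "p2": 3.0}, "b": {"p1": 4.0, "p2": 1.0}}
comm_cost = {("a", "b"): 1.0}
split = completion_time({"a": "p1", "b": "p2"}, exec_cost, comm_cost)  # 3.0
```

Even this toy instance shows the core trade-off: co-locating modules avoids communication cost but serializes execution, so the allocation that minimizes completion time depends on both terms.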
Tangible Landscape: Cognitively Grasping the Flow of Water
NASA Astrophysics Data System (ADS)
Harmon, B. A.; Petrasova, A.; Petras, V.; Mitasova, H.; Meentemeyer, R. K.
2016-06-01
Complex spatial forms like topography can be challenging to understand, much less intentionally shape, given the heavy cognitive load of visualizing and manipulating 3D form. Spatiotemporal processes like the flow of water over a landscape are even more challenging to understand and intentionally direct as they are dependent upon their context and require the simulation of forces like gravity and momentum. This cognitive work can be offloaded onto computers through 3D geospatial modeling, analysis, and simulation. Interacting with computers, however, can also be challenging, often requiring training and highly abstract thinking. Tangible computing - an emerging paradigm of human-computer interaction in which data is physically manifested so that users can feel it and directly manipulate it - aims to offload this added cognitive work onto the body. We have designed Tangible Landscape, a tangible interface powered by an open source geographic information system (GRASS GIS), so that users can naturally shape topography and interact with simulated processes with their hands in order to make observations, generate and test hypotheses, and make inferences about scientific phenomena in a rapid, iterative process. Conceptually Tangible Landscape couples a malleable physical model with a digital model of a landscape through a continuous cycle of 3D scanning, geospatial modeling, and projection. We ran a flow modeling experiment to test whether tangible interfaces like this can effectively enhance spatial performance by offloading cognitive processes onto computers and our bodies. We used hydrological simulations and statistics to quantitatively assess spatial performance. We found that Tangible Landscape enhanced 3D spatial performance and helped users understand water flow.
Active technique by suction to control the flow structure over a van model
NASA Astrophysics Data System (ADS)
Harinaldi, Budiarso, Warjito, Kosasih, Engkos A.; Tarakka, Rustan; Simanungkalit, Sabar P.
2012-06-01
Current research in car aerodynamics is increasingly carried out from the point of view of sustainable development. Several car companies aim to develop control solutions that reduce the aerodynamic drag of vehicles. Such control offers the possibility of modifying flow separation to reduce the development of the swirling structures around the vehicle. In this study, a family van is modeled with a modified form of Ahmed's body by reversing the orientation of the flow relative to its original form (modified/reversed Ahmed body). This model is equipped with suction on the rear side to comprehensively examine the pressure field modifications that occur. The investigation combines computational and experimental work. The computational simulation uses the k-epsilon turbulence model. The reversed Ahmed body used in the investigation has a slant angle (φ) of 35° at the front part. In the computational work, the mesh consists of tetra/hybrid elements with a hex core, and the grid numbers more than 1.7 million cells in order to ensure a detailed discretization and more accurate calculation results. The boundary condition is an upstream velocity of 11.1 m/s. The mean free stream in the far upstream region is assumed steady and uniform. The suction velocity is set at 1 m/s. Meanwhile, in the experimental work, a reversed Ahmed model is tested in controlled wind tunnel experiments. The main measurement is the aerodynamic drag at the rear of the body of the model, obtained using a strain gage. The results show that applying suction at the rear part of the van model reduces the wake and the vortex that is formed. Aerodynamic drag reductions close to 24% for the computational approach and 14.8% for the experimental approach have been obtained by introducing suction.
Computational analysis of integrated biosensing and shear flow in a microfluidic vascular model
NASA Astrophysics Data System (ADS)
Wong, Jeremy F.; Young, Edmond W. K.; Simmons, Craig A.
2017-11-01
Fluid flow and flow-induced shear stress are critical components of the vascular microenvironment commonly studied using microfluidic cell culture models. Microfluidic vascular models mimicking the physiological microenvironment also offer great potential for incorporating on-chip biomolecular detection. In spite of this potential, however, there are few examples of such functionality. Detection of biomolecules released by cells under flow-induced shear stress is a significant challenge due to severe sample dilution caused by the fluid flow used to generate the shear stress, frequently to the extent where the analyte is no longer detectable. In this work, we developed a computational model of a vascular microfluidic cell culture model that integrates physiological shear flow and on-chip monitoring of cell-secreted factors. Applicable to multilayer device configurations, the computational model was applied to a bilayer configuration, which has been used in numerous cell culture applications including vascular models. Guidelines were established that allow cells to be subjected to a wide range of physiological shear stress while ensuring optimal rapid transport of analyte to the biosensor surface and minimized biosensor response times. These guidelines therefore enable the development of microfluidic vascular models that integrate cell-secreted factor detection while addressing flow constraints imposed by physiological shear stress. Ultimately, this work will result in the addition of valuable functionality to microfluidic cell culture models that further fulfill their potential as labs-on-chips.
A Conceptual Model for Analysing Collaborative Work and Products in Groupware Systems
NASA Astrophysics Data System (ADS)
Duque, Rafael; Bravo, Crescencio; Ortega, Manuel
Collaborative work using groupware systems is a dynamic process in which many tasks, in different application domains, are carried out. Currently, one of the biggest challenges in the field of CSCW (Computer-Supported Cooperative Work) research is to establish conceptual models which allow for the analysis of collaborative activities and their resulting products. In this article, we propose an ontology that conceptualizes the required elements which enable an analysis to infer a set of analysis indicators, thus evaluating both the individual and group work and the artefacts which are produced.
In this chapter we review the literature on scanning probe microscopy (SPM), virtual reality (VR), and computational chemistry and our earlier work dealing with modeling lignin, lignin-carbohydrate complexes (LCC), humic substances (HSs) and non-bonded organo-mineral interactions...
An Interactive Graphical Modeling Game for Teaching Musical Concepts.
ERIC Educational Resources Information Center
Lamb, Martin
1982-01-01
Describes an interactive computer game in which players compose music at a computer screen. They experiment with pitch and melodic shape and the effects of transposition, augmentation, diminution, retrograde, and inversion. The user interface is simple enough for children to use and powerful enough for composers to work with. (EAO)
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
"Technology Enhanced Elementary and Middle School Science" ("TEEMSS") is a physical science curriculum for grades 3-8 that utilizes computers, sensors, and interactive models to support investigations of real-world phenomena. Through 15 inquiry-based instructional units, students interact with computers, gather and analyze…
Quantum Iterative Deepening with an Application to the Halting Problem
Tarrataca, Luís; Wichert, Andreas
2013-01-01
Classical models of computation traditionally resort to halting schemes in order to enquire about the state of a computation. In such schemes, a computational process is responsible for signaling the end of a calculation by setting a halt bit, which needs to be systematically checked by an observer. The capacity of quantum computational models to operate on a superposition of states requires an alternative approach. From a quantum perspective, any measurement of an equivalent halt qubit would have the potential to inherently interfere with the computation by provoking a random collapse amongst the states. This issue is exacerbated by undecidable problems such as the Entscheidungsproblem, which require universal computational models, e.g. the classical Turing machine, to be able to proceed indefinitely. In this work we present an alternative view of quantum computation based on production system theory in conjunction with Grover's amplitude amplification scheme that allows for (1) detection of halt states without interfering with the final result of a computation; (2) the possibility of non-terminating computation; and (3) an inherent speedup to occur during computations susceptible to parallelization. We discuss how such a strategy can be employed in order to simulate classical Turing machines. PMID:23520465
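The amplitude-amplification scheme the abstract builds on can be simulated classically for small state spaces. The sketch below is a minimal NumPy illustration of Grover iterations (oracle phase flip plus inversion about the mean), not the authors' production-system formulation.

```python
import numpy as np

def grover_search(n_items, marked, iterations):
    """Simulate Grover amplitude amplification over n_items states.
    Each iteration applies the oracle (phase flip on the marked item)
    followed by the diffusion operator (inversion about the mean)."""
    amp = np.full(n_items, 1.0 / np.sqrt(n_items))  # uniform superposition
    for _ in range(iterations):
        amp[marked] *= -1.0              # oracle marks the target state
        amp = 2.0 * amp.mean() - amp     # inversion about the mean
    return amp[marked] ** 2              # probability of measuring the target

n = 64
best = int(round(np.pi / 4 * np.sqrt(n)))  # ~optimal iteration count, O(sqrt(N))
print(best, grover_search(n, marked=7, iterations=best))
```

After roughly (π/4)√N iterations the marked state's measurement probability is close to 1, which is the quadratic speedup the paper exploits.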
Glass, Nancy; Hanson, Ginger C; Anger, W Kent; Laharnar, Naima; Campbell, Jacquelyn C; Weinstein, Marc; Perrin, Nancy
2017-07-01
The study examines the effectiveness of a workplace violence and harassment prevention and response program with female homecare workers in a consumer-driven model of care. Homecare workers were randomized to either computer-based training (CBT only) or computer-based training with homecare worker peer facilitation (CBT + peer). Participants completed measures on confidence, incidents of violence and harassment, and health and work outcomes at baseline and at 3 and 6 months post-baseline. Homecare workers reported improved confidence to prevent and respond to workplace violence and harassment and a reduction in incidents of workplace violence and harassment in both groups at 6-month follow-up. A decrease in the negative health and work outcomes associated with violence and harassment was not reported in either group. CBT alone or with trained peer facilitation with homecare workers can increase confidence and reduce incidents of workplace violence and harassment in a consumer-driven model of care. © 2017 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, H.K.; Novak, T.
2008-03-15
During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.
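A probability methodology of this kind typically starts from the empirical first-stroke peak-current distribution adopted in IEEE Std 1410, P(I ≥ i) = 1/(1 + (i/31)^2.6). The sketch below illustrates that building block only; the function names and the simple flash-density product are illustrative, not the authors' full borehole model.

```python
def prob_peak_current_exceeds(i_ka):
    """Probability that a first-stroke peak current exceeds i_ka (in kA),
    using the empirical distribution in IEEE Std 1410:
    P(I >= i) = 1 / (1 + (i/31)^2.6)."""
    return 1.0 / (1.0 + (i_ka / 31.0) ** 2.6)

def strokes_per_year(ng, area_km2, threshold_ka):
    """Expected annual strokes exceeding threshold_ka over a collection
    area, given ground flash density ng (flashes/km^2/yr)."""
    return ng * area_km2 * prob_peak_current_exceeds(threshold_ka)

print(prob_peak_current_exceeds(31.0))       # 0.5 by construction (median 31 kA)
print(strokes_per_year(4.0, 0.5, 60.0))      # e.g. moderate-Ng region, small area
```

Combining such an exceedance probability with the simulated minimum current needed for ignition gives the what-if ignition likelihoods the abstract describes.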
NASA Astrophysics Data System (ADS)
Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.
2016-07-01
The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes" conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases where adequate test data are available and which can be used for proper validation of the computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Presenting the simulation results from three different tools, it is shown that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux as an important validation parameter very well but also reveal some inconsistencies in the test data which are addressed in this paper.
Filters for Improvement of Multiscale Data from Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Reynolds, Daniel R.
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data yields dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
Filters for Improvement of Multiscale Data from Atomistic Simulations
Gardner, David J.; Reynolds, Daniel R.
2017-01-05
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data yields dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
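The core idea, removing additive white noise from smooth continuum data by truncating its Fourier spectrum, can be shown in a few lines. This is a toy illustration of spectral filtering, not the authors' automatic filter-level selection; the cutoff of 8 modes is an assumption chosen for the synthetic signal.

```python
import numpy as np

def spectral_lowpass(signal, keep_modes):
    """Zero out all Fourier modes at or above keep_modes: a simple spectral
    filter suited to smooth data corrupted by additive white noise."""
    coeffs = np.fft.rfft(signal)
    coeffs[keep_modes:] = 0.0
    return np.fft.irfft(coeffs, n=len(signal))

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
clean = np.sin(x) + 0.5 * np.sin(3 * x)              # smooth "continuum" data
noisy = clean + 0.2 * rng.standard_normal(x.size)    # additive white noise
filtered = spectral_lowpass(noisy, keep_modes=8)

err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_filtered = np.sqrt(np.mean((filtered - clean) ** 2))
print(err_noisy, err_filtered)   # filtering reduces the RMS error
```

Because white noise spreads its power evenly across all modes while the smooth signal concentrates in a few low ones, discarding high modes removes most of the noise at little cost to the signal.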
Four PPPPerspectives on computational creativity in theory and in practice
NASA Astrophysics Data System (ADS)
Jordanous, Anna
2016-04-01
Computational creativity is the modelling, simulating or replicating of creativity computationally. In examining and learning from these "creative systems", from what perspective should the creativity of a system be considered? Are we interested in the creativity of the system's output? Or of its creative processes? Features of the system? Or how it operates within its environment? Traditionally computational creativity has focused more on creative systems' products or processes, though this focus has widened recently. Creativity research offers the Four Ps of creativity: Person/Producer, Product, Process and Press/Environment. This paper presents the Four Ps, explaining each in the context of creativity research and how it relates to computational creativity. To illustrate the usefulness of the Four Ps in taking broader perspectives on creativity in its computational treatment, the concepts of novelty and value are explored using the Four Ps, highlighting aspects of novelty and value that may otherwise be overlooked. Analysis of recent research in computational creativity finds that although each of the Four Ps appears in the body of computational creativity work, individual pieces of work often do not acknowledge all Four Ps, missing opportunities to widen their work's relevance. We can see, though, that high-status computational creativity papers do typically address all Four Ps. This paper argues that the broader views of creativity afforded by the Four Ps are vital in guiding us towards more comprehensively useful computational investigations of creativity.
Computational Cognitive Neuroscience Modeling of Sequential Skill Learning
2016-09-21
AFRL-AFOSR-VA-TR-2016-0320: Computational Cognitive Neuroscience Modeling of Sequential Skill Learning. Final report by David Schnyer, University of Texas at Austin, 09/21/2016. Distribution A: approved for public release.
Electromagnetic field computation at fractal dimensions
NASA Astrophysics Data System (ADS)
Zubair, M.; Ang, Y. S.; Ang, L. K.
According to Mandelbrot's work on fractals, many objects have fractional dimensions for which traditional calculus and differential equations are not sufficient. Thus fractional models solving the relevant differential equations are critical to understanding the physical dynamics of such objects. In this work, we develop computational electromagnetics, or Maxwell's equations, in fractional dimensions. For a given degree of imperfection, impurity, roughness, anisotropy or inhomogeneity, we consider that the complicated object can be formulated as a fractional-dimensional continuous object characterized by an effective fractional dimension D, which can be calculated from a self-developed algorithm. With this non-integer value of D, we develop computational methods to design and analyze EM scattering problems involving rough surfaces or irregularities in an efficient framework. The fractional electromagnetic model can be extended to other key differential equations such as the Schrödinger or Dirac equations, which will be useful for the design of novel 2D materials stacked up in complicated device configurations for applications in electronics and photonics. This work is supported by Singapore Temasek Laboratories (TL) Seed Grant (IGDS S16 02 05 1).
Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS
NASA Astrophysics Data System (ADS)
Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.
2016-12-01
We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).
Walters, Kevin
2012-08-07
In this paper we use approximate Bayesian computation to estimate the parameters in an immortal model of colonic stem cell division. We base the inferences on the observed DNA methylation patterns of cells sampled from the human colon. Utilising DNA methylation patterns as a form of molecular clock is an emerging area of research and has been used in several studies investigating colonic stem cell turnover. There is much debate concerning the two competing models of stem cell turnover: the symmetric (immortal) and asymmetric models. Early simulation studies concluded that the observed methylation data were not consistent with the immortal model. A later modified version of the immortal model that included preferential strand segregation was subsequently shown to be consistent with the same methylation data. Most of this earlier work assumes site independent methylation models that do not take account of the known processivity of methyltransferases whilst other work does not take into account the methylation errors that occur in differentiated cells. This paper addresses both of these issues for the immortal model and demonstrates that approximate Bayesian computation provides accurate estimates of the parameters in this neighbour-dependent model of methylation error rates. The results indicate that if colonic stem cells divide asymmetrically then colon stem cell niches are maintained by more than 8 stem cells. Results also indicate the possibility of preferential strand segregation and provide clear evidence against a site-independent model for methylation errors. In addition, algebraic expressions for some of the summary statistics used in the approximate Bayesian computation (that allow for the additional variation arising from cell division in differentiated cells) are derived and their utility discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
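Approximate Bayesian computation in its simplest rejection form draws parameters from the prior, simulates data, and keeps draws whose summary statistic lands near the observed one. The sketch below is a generic toy (a coin-bias inference), not the authors' neighbour-dependent methylation model; all names and the tolerance are illustrative.

```python
import random

def abc_rejection(observed_stat, prior_sampler, simulate, n_draws=5000, tol=0.05):
    """Generic ABC rejection sampler: keep parameter draws whose simulated
    summary statistic lands within tol of the observed statistic."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if abs(simulate(theta) - observed_stat) < tol:
            accepted.append(theta)
    return accepted

# Toy example: infer a coin's bias from the observed fraction of heads
# in 100 flips, under a uniform prior on [0, 1].
random.seed(1)

def sim(p):
    return sum(random.random() < p for _ in range(100)) / 100.0

posterior = abc_rejection(0.7, random.random, sim)
print(sum(posterior) / len(posterior))   # posterior mean near 0.7
```

The same skeleton extends to the stem-cell setting by replacing `sim` with a forward simulation of methylation-pattern evolution and using several summary statistics instead of one.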
Spiking Neural P Systems With Rules on Synapses Working in Maximum Spiking Strategy.
Tao Song; Linqiang Pan
2015-06-01
Spiking neural P systems (called SN P systems for short) are a class of parallel and distributed neural-like computation models inspired by the way neurons process information and communicate with each other by means of impulses or spikes. In this work, we introduce a new variant of SN P systems, called SN P systems with rules on synapses working in maximum spiking strategy, and investigate the computation power of the systems as both number and vector generators. Specifically, we prove that i) if no limit is imposed on the number of spikes in any neuron during any computation, such systems can generate the sets of Turing computable natural numbers and the sets of vectors of positive integers computed by k-output register machines; ii) if an upper bound is imposed on the number of spikes in each neuron during any computation, such systems can characterize semi-linear sets of natural numbers as number generating devices; as vector generating devices, such systems can only characterize the family of sets of vectors computed by sequential monotonic counter machines, which is strictly included in the family of semi-linear sets of vectors. This gives a positive answer to the problem formulated in Song et al., Theor. Comput. Sci., vol. 529, pp. 82-95, 2014.
Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.
Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania
2015-01-01
This paper provides a novel hybrid model for solving the multiple pair-wise sequence alignment problem combining message passing interface and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN performs the multiple pair-wise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation to the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in the running time by increasing the number of the working GPU nodes. The proposed model achieved a performance of about 12 Giga cell updates per second when we tested against the SWISS-PROT protein knowledge base running on four nodes.
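The alignment kernel at the heart of the model is the Smith-Waterman dynamic program, which the paper computes row-wise so each GPU thread block only needs the previous row. The sketch below shows the serial row-wise recurrence with linear gap penalties; the scoring parameters are illustrative defaults, not the paper's.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score with linear gap penalties,
    filling the DP matrix row by row and keeping only the previous row
    in memory (the row-wise layout suited to GPU implementations)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            score = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0,
                          prev[j - 1] + score,  # match/mismatch (diagonal)
                          prev[j] + gap,        # gap in b (up)
                          curr[j - 1] + gap)    # gap in a (left)
            best = max(best, curr[j])
        prev = curr
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # 12
```

Keeping only one row bounds memory at O(min(m, n)) per pairwise alignment, which is what makes dispatching thousands of pairwise alignments to worker GPU nodes feasible.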
Roadmap for cardiovascular circulation model
Bradley, Christopher P.; Suresh, Vinod; Mithraratne, Kumar; Muller, Alexandre; Ho, Harvey; Ladd, David; Hellevik, Leif R.; Omholt, Stig W.; Chase, J. Geoffrey; Müller, Lucas O.; Watanabe, Sansuke M.; Blanco, Pablo J.; de Bono, Bernard; Hunter, Peter J.
2016-01-01
Computational models of many aspects of the mammalian cardiovascular circulation have been developed. Indeed, along with orthopaedics, this area of physiology is one that has attracted much interest from engineers, presumably because the equations governing blood flow in the vascular system are well understood and can be solved with well‐established numerical techniques. Unfortunately, there have been only a few attempts to create a comprehensive public domain resource for cardiovascular researchers. In this paper we propose a roadmap for developing an open source cardiovascular circulation model. The model should be registered to the musculo‐skeletal system. The computational infrastructure for the cardiovascular model should provide for near real‐time computation of blood flow and pressure in all parts of the body. The model should deal with vascular beds in all tissues, and the computational infrastructure for the model should provide links into CellML models of cell function and tissue function. In this work we review the literature associated with 1D blood flow modelling in the cardiovascular system, discuss model encoding standards, software and a model repository. We then describe the coordinate systems used to define the vascular geometry, derive the equations and discuss the implementation of these coupled equations in the open source computational software OpenCMISS. Finally, some preliminary results are presented and plans outlined for the next steps in the development of the model, the computational software and the graphical user interface for accessing the model. PMID:27506597
Roadmap for cardiovascular circulation model.
Safaei, Soroush; Bradley, Christopher P; Suresh, Vinod; Mithraratne, Kumar; Muller, Alexandre; Ho, Harvey; Ladd, David; Hellevik, Leif R; Omholt, Stig W; Chase, J Geoffrey; Müller, Lucas O; Watanabe, Sansuke M; Blanco, Pablo J; de Bono, Bernard; Hunter, Peter J
2016-12-01
Computational models of many aspects of the mammalian cardiovascular circulation have been developed. Indeed, along with orthopaedics, this area of physiology is one that has attracted much interest from engineers, presumably because the equations governing blood flow in the vascular system are well understood and can be solved with well-established numerical techniques. Unfortunately, there have been only a few attempts to create a comprehensive public domain resource for cardiovascular researchers. In this paper we propose a roadmap for developing an open source cardiovascular circulation model. The model should be registered to the musculo-skeletal system. The computational infrastructure for the cardiovascular model should provide for near real-time computation of blood flow and pressure in all parts of the body. The model should deal with vascular beds in all tissues, and the computational infrastructure for the model should provide links into CellML models of cell function and tissue function. In this work we review the literature associated with 1D blood flow modelling in the cardiovascular system, discuss model encoding standards, software and a model repository. We then describe the coordinate systems used to define the vascular geometry, derive the equations and discuss the implementation of these coupled equations in the open source computational software OpenCMISS. Finally, some preliminary results are presented and plans outlined for the next steps in the development of the model, the computational software and the graphical user interface for accessing the model. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
Evolutionary inference via the Poisson Indel Process
Bouchard-Côté, Alexandre; Jordan, Michael I.
2013-01-01
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114–124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296
A computer simulation model of Wolbachia invasion for disease vector population modification.
Guevara-Souza, Mauricio; Vallejo, Edgar E
2015-10-05
Wolbachia invasion has been proved to be a promising alternative for controlling vector-borne diseases, particularly Dengue fever. Creating computer models that can provide insight into how vector population modification can be achieved under different conditions would be most valuable for assessing the efficacy of control strategies for this disease. In this paper, we present a computer model that simulates the behavior of native mosquito populations after the introduction of mosquitoes infected with the Wolbachia bacteria. We studied how different factors such as fecundity, fitness cost of infection, migration rates, number of populations, population size, and number of introduced infected mosquitoes affect the spread of the Wolbachia bacteria among native mosquito populations. Two main scenarios of the island model are presented in this paper, with infected mosquitoes introduced into the largest source population and peripheral populations. Overall, the results are promising; Wolbachia infection spreads among native populations and the computer model is capable of reproducing the results obtained by mathematical models and field experiments. Computer models can be very useful for gaining insight into how Wolbachia invasion works and are a promising alternative for complementing experimental and mathematical approaches for vector-borne disease control.
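The threshold behavior that makes Wolbachia introduction succeed or fail can be seen in a deterministic single-population recursion of the Caspari-Watson type: infected females pay a fecundity cost, while crosses of uninfected females with infected males fail through cytoplasmic incompatibility. This is a one-population sketch under assumed parameter values, not the authors' stochastic island model.

```python
def next_frequency(p, fecundity_cost=0.1, ci_strength=1.0):
    """One generation of Wolbachia infection-frequency dynamics in a
    single panmictic population. Infected females have relative
    fecundity (1 - fecundity_cost); matings of uninfected females with
    infected males fail with probability ci_strength * p."""
    infected = p * (1.0 - fecundity_cost)
    uninfected = (1.0 - p) * (1.0 - ci_strength * p)
    return infected / (infected + uninfected)

# The unstable threshold is p* = fecundity_cost / ci_strength = 0.1 here.
# Starting above it, the infection sweeps toward fixation.
p = 0.3
for _ in range(50):
    p = next_frequency(p)
print(round(p, 3))   # ~1.0
```

Starting below the threshold (e.g. p = 0.05) the infection is instead lost, which is why the number of introduced infected mosquitoes relative to population size matters so much in the simulations.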
NASA Technical Reports Server (NTRS)
Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.
2005-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
NASA Technical Reports Server (NTRS)
Martin, Michael A.; Nguyen, Huy H.; Greene, William D.; Seymour, David C.
2003-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), Maximum Likelihood Estimator (MLE) and linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting nonlinear regression functions in 2006. In Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
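A standard way to compute an NLSE in practice, and the one the matrix-calculus formulation naturally leads to, is Gauss-Newton iteration: linearize the model around the current estimate and solve the resulting normal equations βₖ₊₁ = βₖ + (JᵀJ)⁻¹Jᵀr. The sketch below applies it to an illustrative exponential model y = b₀·exp(b₁x), which is an assumed example rather than one from the paper.

```python
import numpy as np

def gauss_newton(x, y, beta0, iters=50):
    """Gauss-Newton iterations for the nonlinear model y = b0*exp(b1*x):
    repeatedly solve (J^T J) db = J^T r, where J is the Jacobian of the
    model with respect to the parameters and r the residual vector."""
    b = np.array(beta0, dtype=float)
    for _ in range(iters):
        pred = b[0] * np.exp(b[1] * x)
        r = y - pred
        # Jacobian columns: d(pred)/db0 and d(pred)/db1
        J = np.column_stack([np.exp(b[1] * x), b[0] * x * np.exp(b[1] * x)])
        b += np.linalg.solve(J.T @ J, J.T @ r)
    return b

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)                    # noise-free data, known parameters
print(gauss_newton(x, y, beta0=[1.5, 1.2]))  # recovers ~[2.0, 1.5]
```

The "linear pseudo model" viewpoint corresponds to the linearized system solved at each step: the nonlinear fit is reduced to a sequence of ordinary least-squares problems.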
Large scale cardiac modeling on the Blue Gene supercomputer.
Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J
2008-01-01
Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
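The weighted decomposition idea can be illustrated with a 1-D recursive bisection: split the element array at the point that balances total weight, then recurse on each half. This is a simplified sketch of the ORB principle (real ORB bisects in 3-D and alternates cut axes); the weights follow scheme (b) from the abstract.

```python
def recursive_bisection(weights, n_parts):
    """Recursively split a 1-D list of element weights into n_parts
    contiguous partitions of near-equal total weight (n_parts must be
    a power of two), mirroring the ORB decomposition idea."""
    if n_parts == 1:
        return [weights]
    half = sum(weights) / 2.0
    acc, cut = 0.0, 0
    for i, w in enumerate(weights):       # find the weight-balancing cut
        if acc + w > half:
            break
        acc += w
        cut = i + 1
    return (recursive_bisection(weights[:cut], n_parts // 2) +
            recursive_bisection(weights[cut:], n_parts // 2))

# Scheme (b): tissue elements weight 10, non-tissue elements weight 1.
elems = [10] * 6 + [1] * 20 + [10] * 6
parts = recursive_bisection(elems, 4)
print([sum(p) for p in parts])   # near-equal loads despite unequal element counts
```

As in the paper's results, partitions can hold very different numbers of elements while carrying similar computational load, because the split criterion is weight, not element count.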
Computational neuroanatomy: ontology-based representation of neural components and connectivity.
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-02-05
A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
Hybrid programming models for beyond-CMOS technologies will prove critical for integrating new computing technologies alongside our existing infrastructure. Unfortunately, the software infrastructure required to enable this is largely lacking. XACC is a programming framework for extreme-scale, post-exascale accelerator architectures that integrates alongside existing conventional applications. It is a pluggable framework for programming languages developed for next-generation computing hardware architectures such as quantum and neuromorphic computing. It lets computational scientists efficiently off-load classically intractable work to attached accelerators through user-friendly Kernel definitions. XACC makes post-exascale hybrid programming approachable for domain computational scientists.
QM Automata: A New Class of Restricted Quantum Membrane Automata.
Giannakis, Konstantinos; Singh, Alexandros; Kastampolidou, Kalliopi; Papalitsas, Christos; Andronikos, Theodore
2017-01-01
The term "Unconventional Computing" describes the use of non-standard methods and models in computing. It is a recently established field, with many interesting and promising results. In this work we combine notions from quantum computing with aspects of membrane computing to define what we call QM automata. Specifically, we introduce a variant of quantum membrane automata that operate in accordance with the principles of quantum computing. We explore the functionality and capabilities of the QM automata through indicative examples. Finally, we suggest future directions for research on QM automata.
More efficient optimization of long-term water supply portfolios
NASA Astrophysics Data System (ADS)
Kirsch, Brian R.; Characklis, Gregory W.; Dillard, Karen E. M.; Kelley, C. T.
2009-03-01
The use of temporary transfers, such as options and leases, has grown as utilities attempt to meet increases in demand while reducing dependence on the expansion of costly infrastructure capacity (e.g., reservoirs). Earlier work has been done to construct optimal portfolios comprising firm capacity and transfers, using decision rules that determine the timing and volume of transfers. However, such work has only focused on the short-term (e.g., 1-year scenarios), which limits the utility of these planning efforts. Developing multiyear portfolios can lead to the exploration of a wider range of alternatives but also increases the computational burden. This work utilizes a coupled hydrologic-economic model to simulate the long-term performance of a city's water supply portfolio. This stochastic model is linked with an optimization search algorithm that is designed to handle the high-frequency, low-amplitude noise inherent in many simulations, particularly those involving expected values. This noise is detrimental to the accuracy and precision of the optimized solution and has traditionally been controlled by investing greater computational effort in the simulation. However, the increased computational effort can be substantial. This work describes the integration of a variance reduction technique (control variate method) within the simulation/optimization as a means of more efficiently identifying minimum cost portfolios. Random variation in model output (i.e., noise) is moderated using knowledge of random variations in stochastic input variables (e.g., reservoir inflows, demand), thereby reducing the computing time by 50% or more. Using these efficiency gains, water supply portfolios are evaluated over a 10-year period in order to assess their ability to reduce costs and adapt to demand growth, while still meeting reliability goals. As a part of the evaluation, several multiyear option contract structures are explored and compared.
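The control variate method itself can be sketched independently of the water-supply model. Below, a toy Monte Carlo estimate of E[exp(U)] uses the uniform input U, whose mean (1/2) is known exactly, as the control variable; the integrand is purely illustrative, standing in for the stochastic reservoir-inflow and demand inputs of the paper's simulation:

```python
import math
import random

def estimate_with_control_variate(n, seed=1):
    """Crude vs. control-variate Monte Carlo estimates of E[exp(U)], U ~ Uniform(0,1).

    Knowledge of the random input's exact mean soaks up part of the output
    noise: Y_cv = Y - b*(U - E[U]) has the same mean as Y but lower variance.
    """
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    y = [math.exp(x) for x in u]
    mu_u = sum(u) / n
    mu_y = sum(y) / n
    cov = sum((a - mu_u) * (b - mu_y) for a, b in zip(u, y)) / (n - 1)
    var_u = sum((a - mu_u) ** 2 for a in u) / (n - 1)
    b_star = cov / var_u  # optimal coefficient Cov(Y,U)/Var(U)
    y_cv = [yi - b_star * (ui - 0.5) for yi, ui in zip(y, u)]
    mu_cv = sum(y_cv) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / (n - 1)
    var_cv = sum((v - mu_cv) ** 2 for v in y_cv) / (n - 1)
    return mu_y, var_y, mu_cv, var_cv

mu_y, var_y, mu_cv, var_cv = estimate_with_control_variate(20000)
```

Because exp(U) and U are strongly correlated, the control-variate estimator's variance is a small fraction of the crude estimator's, which is the mechanism behind the 50%-plus reduction in computing time the paper reports.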
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.
2010-08-01
Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. 
Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence between the modelling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
Research on Modeling Technology of Virtual Robot Based on LabVIEW
NASA Astrophysics Data System (ADS)
Wang, Z.; Huo, J. L.; Sun, L. Y.; Hao, X. Y.
2017-12-01
Because of the dangerous working environment, the underwater operation robot for nuclear power stations requires manual teleoperation, and the position and orientation of the robot must be guided in real time during operation. In this paper, geometric modeling of the virtual robot and its working environment is accomplished using SolidWorks software, realizing accurate modeling and assembly of the robot. LabVIEW software is used to read the model, forward and inverse kinematics models of the manipulator are established, and hierarchical modeling of the virtual robot together with computer graphics modeling is realized. Experimental results show that the method studied in this paper can be successfully applied to a robot control system.
ERIC Educational Resources Information Center
Sarnoff, Susan; Welch, Lonnie; Gradin, Sherrie; Sandell, Karin
2004-01-01
This paper will discuss the results of a project that enabled three faculty members from disparate disciplines (Social Work, Interpersonal Communication, and Software Engineering) to enhance writing and critical thinking in their courses. The paper will address the Faculty-in-Residence project model, the activities taken on as a result of it, the…
Cortical Substrate of Haptic Representation
1993-08-24
experience and data from primates, we have developed computational models of short-term active memory. Such models may have technological interest... neurobiological work on primate memory. It is on that empirical work that our current theoretical efforts are founded. Our future physiological research... Academy of Sciences, New York, vol. 608, pp. 318-329, 1990. J.M. Fuster - Behavioral electrophysiology of the prefrontal cortex of the primate. Progress
NASA Astrophysics Data System (ADS)
Aldakheel, Fadi; Wriggers, Peter; Miehe, Christian
2017-12-01
The modeling of failure in ductile materials must account for complex phenomena at the micro-scale, such as nucleation, growth and coalescence of micro-voids, as well as the final rupture at the macro-scale, as rooted in the work of Gurson (J Eng Mater Technol 99:2-15, 1977). Within a top-down viewpoint, this can be achieved by combining a micro-structure-informed elastic-plastic model for a porous medium with a concept for the modeling of macroscopic crack discontinuities. The modeling of macroscopic cracks can be achieved in a convenient way by recently developed continuum phase field approaches to fracture, which are based on the regularization of sharp crack discontinuities, see Miehe et al. (Comput Methods Appl Mech Eng 294:486-522, 2015). This avoids the use of complex discretization methods for crack discontinuities and can account for complex crack patterns. In this work, we develop a new theoretical and computational framework for the phase field modeling of ductile fracture in conventional elastic-plastic solids under finite strain deformation. It combines modified structures of the Gurson-Tvergaard-Needleman (GTN) type plasticity model outlined in Tvergaard and Needleman (Acta Metall 32:157-169, 1984) and Nahshon and Hutchinson (Eur J Mech A Solids 27:1-17, 2008) with a new evolution equation for the crack phase field. An important aspect of this work is the development of a robust explicit-implicit numerical integration scheme for the highly nonlinear rate equations of the enhanced GTN model, resulting in a strategy with low computational cost. The performance of the formulation is underlined by means of some representative examples, including the development of the experimentally observed cup-cone failure mechanism.
Model based defect characterization in composites
NASA Astrophysics Data System (ADS)
Roberts, R.; Holland, S.
2017-02-01
Work is reported on model-based defect characterization in CFRP composites. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing multi-ply impact-induced delamination, with application in this paper focusing on ultrasound. A companion paper in these proceedings summarizes corresponding activity in thermography. Inversion of ultrasound data is demonstrated showing the quantitative extraction of damage properties.
Computer technology applications in industrial and organizational psychology.
Crespin, Timothy R; Austin, James T
2002-08-01
This article reviews computer applications developed and utilized by industrial-organizational (I-O) psychologists, both in practice and in research. A primary emphasis is on applications developed for Internet usage, because this "network of networks" changes the way I-O psychologists work. The review focuses on traditional and emerging topics in I-O psychology. The first topic involves information technology applications in measurement, defined broadly across levels of analysis (persons, groups, organizations) and domains (abilities, personality, attitudes). Discussion then focuses on individual learning at work, both in formal training and in coping with continual automation of work. A section on job analysis follows, illustrating the role of computers and the Internet in studying jobs. Shifting focus to the group level of analysis, we briefly review how information technology is being used to understand and support cooperative work. Finally, special emphasis is given to the emerging "third discipline" in I-O psychology research: computational modeling of behavioral events in organizations. Throughout this review, themes of innovation and dissemination underlie a continuum between research and practice. The review concludes by setting a framework for I-O psychology in a computerized and networked world.
Computer Technology-Integrated Projects Should not Supplant Craft Projects in Science Education
NASA Astrophysics Data System (ADS)
Klopp, Tabatha J.; Rule, Audrey C.; Suchsland Schneider, Jean; Boody, Robert M.
2014-03-01
The current emphasis on computer technology integration and narrowing of the curriculum has displaced arts and crafts. However, the hands-on, concrete nature of craft work in science modeling enables students to understand difficult concepts and to be engaged and motivated while learning spatial, logical, and sequential thinking skills. Analogy use is also helpful in understanding unfamiliar, complex science concepts. This study of 28 academically advanced elementary to middle-school students examined student work and perceptions during a science unit focused on four fossil organisms: crinoid, brachiopod, horn coral and trilobite. The study compared: (1) analogy-focused instruction to independent Internet research and (2) computer technology-rich products to crafts-based products. Findings indicate student products were more creative after analogy-based instruction and when made using technology. However, students expressed a strong desire to engage in additional craft work after making craft products and enjoyed making crafts more after analogy-focused instruction. Additionally, more science content was found in the craft products than the technology-rich products. Students expressed a particular liking for two of the fossil organisms because they had been modeled with crafts. The authors recommend that room should be retained for crafts in the science curriculum to model science concepts.
Emotion-affected decision making in human simulation.
Zhao, Y; Kang, J; Wright, D K
2006-01-01
Human modelling is an interdisciplinary research field. The topic, emotion-affected decision making, was originally a cognitive psychology issue, but is now recognized as an important research direction for both computer science and biomedical modelling. The main aim of this paper is to attempt to bridge the gap between psychology and bioengineering in emotion-affected decision making. The work is based on Ortony's theory of emotions and bounded rationality theory, and attempts to connect the emotion process with decision making. A computational emotion model is proposed, and the initial framework of this model in virtual human simulation within the platform of Virtools is presented.
Computer modeling of heat pipe performance
NASA Technical Reports Server (NTRS)
Peterson, G. P.
1983-01-01
A parametric study of the defining equations which govern the steady state operational characteristics of the Grumman monogroove dual passage heat pipe is presented. These defining equations are combined to develop a mathematical model which describes and predicts the operational and performance capabilities of a specific heat pipe given the necessary physical characteristics and working fluid. Included is a brief review of the current literature, a discussion of the governing equations, and a description of both the mathematical and computer model. Final results of preliminary test runs of the model are presented and compared with experimental tests on actual prototypes.
NASA Technical Reports Server (NTRS)
Wigton, Larry
1996-01-01
Improvements to the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, are reported. A fast distance calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was devoted to improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
Delivering The Benefits of Chemical-Biological Integration in ...
Abstract: Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The intention of this research program is to quickly evaluate thousands of chemicals for potential risk but with much reduced cost relative to historical approaches. This work involves computational and data-driven approaches including high-throughput screening, modeling, text-mining and the integration of chemistry, exposure and biological data. We have developed a number of databases and applications that are delivering on the vision of developing a deeper understanding of chemicals and their effects on exposure and biological processes that are supporting a large community of scientists in their research efforts. This presentation will provide an overview of our work to bring together diverse large scale data from the chemical and biological domains, our approaches to integrate and disseminate these data, and the delivery of models supporting computational toxicology. This abstract does not reflect U.S. EPA policy. Presentation at ACS TOXI session on Computational Chemistry and Toxicology in Chemical Discovery and Assessment (QSARs).
Architecture for hospital information integration
NASA Astrophysics Data System (ADS)
Chimiak, William J.; Janariz, Daniel L.; Martinez, Ralph
1999-07-01
The ongoing integration of hospital information systems (HIS) continues. Data storage systems, data networks and computers improve, data bases grow and health-care applications increase. Some computer operating systems continue to evolve and some fade. Health care delivery now depends on this computer-assisted environment. The result is that the critical harmonization of the various hospital information systems becomes increasingly difficult. The purpose of this paper is to present an architecture for HIS integration that is computer-language-neutral and computer-hardware-neutral for the informatics applications. The proposed architecture builds upon the work done at the University of Arizona on middleware, the work of the National Electrical Manufacturers Association, and the American College of Radiology. It is a fresh approach that allows applications engineers to access medical data easily and thus concentrate on the application techniques in which they are expert, without struggling with medical information syntaxes. The HIS can be modeled using a hierarchy of information sub-systems, thus facilitating its understanding. The architecture includes the resulting information model along with a strict but intuitive application programming interface, managed by CORBA. The CORBA requirement facilitates interoperability. It should also reduce software and hardware development times.
Parallelization of a hydrological model using the message passing interface
Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji
2013-01-01
With the increasing knowledge about the natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme between the master and slave processes. Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement becomes less pronounced due to the accompanying increase in demand for message passing procedures between the master and all slave processes. Our case study demonstrates that the P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, the P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
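The master/slave work-distribution idea, where the master rank receives a reduced share of the work because it also coordinates message passing, can be sketched as a small allocation function. The parameter names and rounding rule below are illustrative assumptions rather than the actual P-SWAT scheme:

```python
def distribute_tasks(n_tasks, n_procs, master_share):
    """Assign n_tasks work units (e.g., subbasins) to n_procs MPI ranks.

    Rank 0 (the master) receives master_share (0..1) of an even split,
    since it must also coordinate communication; the remainder is divided
    as evenly as possible among the slave ranks.
    """
    even = n_tasks / n_procs
    master = round(even * master_share)
    rest = n_tasks - master
    slaves = n_procs - 1
    base, extra = divmod(rest, slaves)
    return [master] + [base + (1 if i < extra else 0) for i in range(slaves)]

# 100 subbasins, 5 processes, master takes half an even share.
counts = distribute_tasks(100, 5, 0.5)
```

Tuning `master_share` downward trades master idle time against communication responsiveness, which is the second of the two parameters the study optimizes.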
Modeling Physiological Systems in the Human Body as Networks of Quasi-1D Fluid Flows
NASA Astrophysics Data System (ADS)
Staples, Anne
2008-11-01
Extensive research has been done on modeling human physiology. Most of this work has been aimed at developing detailed, three-dimensional models of specific components of physiological systems, such as a cell, a vein, a molecule, or a heart valve. While efforts such as these are invaluable to our understanding of human biology, if we were to construct a global model of human physiology with this level of detail, computing even a nanosecond in this computational being's life would certainly be prohibitively expensive. With this in mind, we derive the Pulsed Flow Equations, a set of coupled one-dimensional partial differential equations, specifically designed to capture two-dimensional viscous, transport, and other effects, and aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. Our goal is to be able to perform faster-than-real time simulations of global processes in the human body on desktop computers.
Xiao, Bo; Imel, Zac E; Georgiou, Panayiotis; Atkins, David C; Narayanan, Shrikanth S
2016-05-01
Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, and facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation and offer a series of open problems for future research.
A COMPUTER MODELING STUDY OF BINDING PROPERTIES OF CHIRAL NUCLEOPEPTIDE FOR BIOMEDICAL APPLICATIONS.
Pirtskhalava, M; Egoyan, A; Mirtskhulava, M; Roviello, G
2017-12-01
Nucleopeptides often show interesting properties of molecular binding that render them good candidates for development of innovative drugs for anticancer and antiviral therapies. In this work we present results of computer modeling of interactions between the molecules of hexathymine nucleopeptide (T6) and poly rA RNA (A18). The results of geometry optimization calculated using Hyperchem software and our own computer program for molecular docking show that molecules establish stable complexes due to the complementary-nucleobase interaction and the electrostatic interaction between the negative phosphate group of poly rA and the positively-charged residues present in the cationic nucleopeptide structure. Computer modeling makes it possible to find the optimal binding configuration of the molecules of a nucleopeptide and poly rA RNA and to estimate the binding energy between the molecules.
Automatic discovery of the communication network topology for building a supercomputer model
NASA Astrophysics Data System (ADS)
Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim
2016-10-01
The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
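The discovered topology can be represented as a graph of compute nodes and switches; a minimal sketch follows. The device names and the neighbor-table input format are illustrative assumptions, not the Octotron data model:

```python
from collections import defaultdict

def build_topology(neighbor_tables):
    """Build an undirected graph from per-switch neighbor tables.

    neighbor_tables maps a switch name to the list of devices
    (compute nodes or other switches) seen on its ports.
    """
    graph = defaultdict(set)
    for switch, neighbors in neighbor_tables.items():
        for dev in neighbors:
            graph[switch].add(dev)
            graph[dev].add(switch)
    return {k: sorted(v) for k, v in graph.items()}

# Toy network: two edge switches uplinked to a core switch.
tables = {
    "core1": ["edge1", "edge2"],
    "edge1": ["node1", "node2", "core1"],
    "edge2": ["node3", "core1"],
}
topo = build_topology(tables)
```

In practice the neighbor tables would be collected automatically (e.g., from switch management interfaces) rather than written by hand, mirroring the automatic detection the paper describes.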
NASA Technical Reports Server (NTRS)
Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.
1998-01-01
The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.
Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks
NASA Astrophysics Data System (ADS)
Kyo, Koki
Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.
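The Box-Cox transformation underlying the proposed model extensions has a standard closed form; a minimal implementation (independent of the paper's priors and estimation procedure) is:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, log(y) for lam == 0 (y > 0).

    The log case is the continuous limit of the power case as lam -> 0.
    """
    if y <= 0:
        raise ValueError("Box-Cox requires y > 0")
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam
```

In the SH-model setting, y would be an observed task-performance measure (e.g., pointing-task movement time) and lam a parameter to be estimated alongside the learning-effect variables.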
Towards a unified theory of neocortex: laminar cortical circuits for vision and cognition.
Grossberg, Stephen
2007-01-01
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of preattentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. 
These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
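The Item-and-Order (Competitive Queuing) idea that LIST PARSE embodies can be sketched in a few lines: earlier list items receive higher activations (a primacy gradient), and recall repeatedly selects the most active item and then suppresses it. The decay constant and function names below are illustrative assumptions, not parameters of LIST PARSE.

```python
def encode_list(n_items, primacy_decay=0.8):
    """Primacy gradient: earlier items start with higher activation."""
    return [primacy_decay ** i for i in range(n_items)]

def recall(activations):
    """Competitive queuing: pick the most active item, then suppress it."""
    acts = list(activations)
    order = []
    for _ in range(len(acts)):
        winner = max(range(len(acts)), key=lambda i: acts[i])
        order.append(winner)
        acts[winner] = float("-inf")  # suppression after read-out
    return order
```

With an intact gradient, recall reproduces the list in order; adding noise to the activations produces the transposition errors for which this model class is known.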
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
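The coarse projective integration scheme described above can be sketched with a toy micro-simulator. Here the "microscopic model" is simply a fine-step logistic integrator standing in for a cellular automaton; the step sizes and projection horizon are illustrative assumptions.

```python
def micro_step(x, dt=0.01, r=1.0, K=1.0):
    """Microscopic simulator stand-in: one small explicit-Euler logistic step."""
    return x + dt * r * x * (1 - x / K)

def coarse_projective_integration(x0, t_end, n_micro=10, dt=0.01, proj=0.2):
    """Short bursts of micro steps, then linear extrapolation of the coarse variable."""
    x, t = x0, 0.0
    while t < t_end:
        for _ in range(n_micro):       # short burst of the micro-simulator
            x_prev, x = x, micro_step(x, dt)
        t += n_micro * dt
        slope = (x - x_prev) / dt      # coarse time-derivative estimate
        x += proj * slope              # projective (extrapolation) step
        t += proj
    return x
```

Increasing `proj` relative to the burst length `n_micro * dt` raises the computational savings, exactly the ratio the abstract identifies.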
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
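Evidence computation by annealed importance sampling can be illustrated on a toy conjugate model where the normalizing constant is known in closed form. The model (unit-variance Gaussian prior and likelihood), the annealing schedule, and all tuning constants below are assumptions for illustration, not the paper's geoacoustic setup.

```python
import math, random

def log_prior(th):
    return -0.5 * th * th - 0.5 * math.log(2 * math.pi)

def log_like(th, y):
    return -0.5 * (y - th) ** 2 - 0.5 * math.log(2 * math.pi)

def ais_log_evidence(y, n_chains=400, n_temps=50, step=0.5, seed=1):
    """Annealed importance sampling estimate of log Z for the toy model."""
    random.seed(seed)
    betas = [k / n_temps for k in range(n_temps + 1)]  # anneal prior -> posterior
    log_ws = []
    for _ in range(n_chains):
        th = random.gauss(0.0, 1.0)                    # draw from the prior
        log_w = 0.0
        for b0, b1 in zip(betas, betas[1:]):
            log_w += (b1 - b0) * log_like(th, y)       # importance-weight update
            # one Metropolis step targeting prior * likelihood^b1
            prop = th + random.gauss(0.0, step)
            log_a = (log_prior(prop) + b1 * log_like(prop, y)
                     - log_prior(th) - b1 * log_like(th, y))
            if math.log(random.random()) < log_a:
                th = prop
        log_ws.append(log_w)
    m = max(log_ws)                                    # stable log-mean-exp
    return m + math.log(sum(math.exp(w - m) for w in log_ws) / len(log_ws))
```

For this conjugate case the exact evidence is N(y | 0, 2), so the estimator can be checked directly; in model selection one would compare such estimates across parameterizations.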
Computational Biochemistry-Enzyme Mechanisms Explored.
Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias
2017-01-01
Understanding enzyme mechanisms is a major task to achieve in order to comprehend how living cells work. Recent advances in biomolecular research provide a huge amount of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of the enzymatic processes are often anticipated based on several hints from macroscopic experimental data. Computational biochemistry aims at creating a computational model of an enzyme in order to explain microscopic details of the catalytic process and reproduce or predict macroscopic experimental findings. Results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. In order to evaluate the mechanism of an enzyme, a structural model is constructed which can be analyzed by several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models to simulate enzyme catalysis. Furthermore, we review various approaches to characterizing the enzyme mechanism both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
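The core interfacing problem (carrying temperatures from finite-difference thermal nodes onto structural elements with no node-to-element correspondence) amounts to spatial interpolation. The inverse-distance scheme below is a minimal illustrative sketch, not SNIP's actual mapping algorithm.

```python
def element_temperature(centroid, thermal_nodes, power=2.0):
    """Inverse-distance-weighted temperature at a structural element centroid.

    thermal_nodes: list of ((x, y, z), temperature) pairs from the thermal model.
    """
    num = den = 0.0
    for (x, y, z), temp in thermal_nodes:
        d2 = (x - centroid[0])**2 + (y - centroid[1])**2 + (z - centroid[2])**2
        if d2 == 0.0:
            return temp                    # centroid coincides with a thermal node
        w = d2 ** (-power / 2.0)           # weight falls off with distance
        num += w * temp
        den += w
    return num / den

nodes = [((0, 0, 0), 300.0), ((1, 0, 0), 400.0)]
t_mid = element_temperature((0.5, 0, 0), nodes)   # midpoint: 350 K by symmetry
```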
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
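The SROM idea can be sketched compactly: replace a random input by a small set of weighted samples whose low-order moments match the input distribution, then quantify output uncertainty through independent deterministic model calls. The three-point rule and toy compliance model below are illustrative assumptions, not the paper's formulation.

```python
def srom(mean, var, spread=2.0):
    """Three-point SROM: symmetric samples with weights matching mean and variance."""
    d = spread * var ** 0.5
    w = var / (2 * d * d)              # outer weights chosen to match the variance
    return [mean - d, mean, mean + d], [w, 1 - 2 * w, w]

def propagate(model, samples, weights):
    """Output statistics from independent deterministic calls at each sample."""
    outs = [model(x) for x in samples]
    mu = sum(w * y for w, y in zip(weights, outs))
    var = sum(w * (y - mu) ** 2 for w, y in zip(weights, outs))
    return mu, var

def compliance(load):
    """Toy deterministic model standing in for a topology-optimization solve."""
    return 3.0 * load + 1.0

xs, ws = srom(mean=10.0, var=4.0)
mu, var_out = propagate(compliance, xs, ws)
# For this linear model the input statistics propagate exactly.
```

Each call to `model` is an independent deterministic solve, which is why the approach slots into existing optimization and modeling tools at a fraction of Monte Carlo's cost.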
Connectionist Modelling of Short-Term Memory.
ERIC Educational Resources Information Center
Norris, Dennis; And Others
1995-01-01
Presents the first stage in a research effort developing a detailed computational model of working memory. The central feature of the model is counterintuitive. It is assumed that there is a primacy gradient of activation across successive list items. A second stage of the model is influenced by the combined effects of the primacy gradient and…
Computer-Assisted Simulation Methods of Learning Process
ERIC Educational Resources Information Center
Mayer, Robert V.
2015-01-01
In this article we analyse: 1) one-component models of training; 2) multi-component models considering the transition of weak knowledge into strong knowledge and vice versa; and 3) models considering changes in the pupil's working efficiency during the day. The results of simulation modeling are presented, with graphs of the dependence of the pupil's knowledge on…
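A one-component training model of the kind analysed above can be written as a single rate equation: knowledge grows toward saturation while the pupil studies and decays through forgetting otherwise. The rate constants below are illustrative assumptions.

```python
def simulate(hours, studying, learn=0.5, forget=0.05, z0=0.0, dt=0.01):
    """Integrate dZ/dt = learn*(1 - Z)*u(t) - forget*Z by explicit Euler.

    studying(t) -> bool indicates whether the pupil is working at time t.
    """
    z, t = z0, 0.0
    while t < hours:
        u = 1.0 if studying(t) else 0.0
        z += dt * (learn * (1.0 - z) * u - forget * z)
        t += dt
    return z

# Two hours of study, then two hours of rest: knowledge rises, then decays.
after_study = simulate(2.0, lambda t: True)
after_rest = simulate(4.0, lambda t: t < 2.0)
```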
Validations of CFD against detailed velocity and pressure measurements in water turbine runner flow
NASA Astrophysics Data System (ADS)
Nilsson, H.; Davidson, L.
2003-03-01
This work compares CFD results with experimental results of the flow in two different kinds of water turbine runners. The runners studied are the GAMM Francis runner and the Hölleforsen Kaplan runner. The GAMM Francis runner was used as a test case in the 1989 GAMM Workshop on 3D Computation of Incompressible Internal Flows, where the geometry and detailed best-efficiency measurements were made available. In addition to the best-efficiency measurements, four off-design operating condition measurements are used for the comparisons in this work. The Hölleforsen Kaplan runner was used at the 1999 Turbine 99 and 2001 Turbine 99 - II workshops on draft tube flow, where detailed measurements made after the runner were used as inlet boundary conditions for the draft tube computations. The measurements are used here to validate computations of the flow in the runner. The computations are made in a single runner blade passage where the inlet boundary conditions are obtained from an extrapolation of detailed measurements (GAMM) or from separate guide vane computations (Hölleforsen). The steady flow in a rotating co-ordinate system is computed. The effects of turbulence are modelled by a low-Reynolds-number k-ω turbulence model, which removes some of the assumptions of the commonly used wall-function approach and brings the computations one step further.
Novel 3D/VR interactive environment for MD simulations, visualization and analysis.
Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P
2014-12-18
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be a compulsory part of every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.
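A minimal sketch of the kind of analysis such a package automates is a one-at-a-time global sensitivity screen: sweep each input over its range while holding the others at nominal values, and rank inputs by the output spread they induce. (R/Repast itself drives Repast Simphony models from R; the toy model and ranges below are assumptions.)

```python
def oat_sensitivity(model, nominal, ranges, n=21):
    """One-at-a-time sweeps; returns the output range each input induces."""
    effects = {}
    for name, (lo, hi) in ranges.items():
        outs = []
        for k in range(n):
            point = dict(nominal)                      # others held at nominal
            point[name] = lo + (hi - lo) * k / (n - 1)
            outs.append(model(point))
        effects[name] = max(outs) - min(outs)
    return effects

def toy_population(p):
    """Toy individual-based-model summary: growth rate dominates noise."""
    return p["growth"] * 100.0 - p["noise"] * 2.0

eff = oat_sensitivity(toy_population,
                      {"growth": 0.5, "noise": 1.0},
                      {"growth": (0.1, 1.0), "noise": (0.0, 2.0)})
```

Variance-based (Sobol-style) indices, as offered by global sensitivity tooling, additionally capture interaction effects that one-at-a-time sweeps miss.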
NASA Technical Reports Server (NTRS)
Pandya, Abhilash; Maida, James; Hasson, Scott; Greenisen, Michael; Woolford, Barbara
1993-01-01
As manned exploration of space continues, analytical evaluation of human strength characteristics is critical. These extraterrestrial environments will spawn issues of human performance which will impact the designs of tools, work spaces, and space vehicles. Computer modeling is an effective method of correlating human biomechanical and anthropometric data with models of space structures and human work spaces. The aim of this study is to provide biomechanical data from isolated joints to be utilized in a computer modeling system for calculating torque resulting from any upper extremity motions: in this study, the ratchet wrench push-pull operation (a typical extravehicular activity task). Established here are mathematical relationships used to calculate maximum torque production of isolated upper extremity joints. These relationships are a function of joint angle and joint velocity.
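A strength relationship of the kind established above can be sketched as a function of joint angle and angular velocity. The Gaussian angle dependence and linear velocity scaling below are assumed functional forms for illustration, not the study's fitted relationships.

```python
import math

def max_torque(angle_deg, vel_dps, t_peak=60.0, best_angle=90.0,
               width=40.0, vel_drop=0.004):
    """Maximum joint torque (N*m), reduced away from the optimal angle
    and with increasing movement speed (concentric force-velocity loss)."""
    angle_term = math.exp(-((angle_deg - best_angle) / width) ** 2)
    vel_term = max(0.0, 1.0 - vel_drop * abs(vel_dps))
    return t_peak * angle_term * vel_term
```

A whole-task torque (such as the ratchet wrench push-pull) would then be assembled by summing such isolated-joint contributions along the limb's posture.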
Krishnaraj, R Navanietha; Chandran, Saravanan; Pal, Parimal; Berchmans, Sheela
2013-12-01
There is immense interest among researchers in identifying new herbicides which are effective against weeds without harming the environment. In this work, photosynthetic pigments are used as the ligands to predict their herbicidal activity. The enzyme 5-enolpyruvylshikimate-3-phosphate (EPSP) synthase is a good target for herbicides. Homology modeling of the target enzyme is done using Modeller 9.11 and the model is validated. Docking studies were performed with the AutoDock Vina algorithm to predict the binding of the natural pigments β-carotene, chlorophyll a, chlorophyll b, phycoerythrin, and phycocyanin to the target. β-carotene, phycoerythrin, and phycocyanin have higher binding energies, indicating the herbicidal activity of these pigments. This work reports a procedure to screen herbicides with a computational molecular approach. These pigments will serve as potential bioherbicides in the future.
Carson, Anne; Troy, Douglas
2007-01-01
Nursing and computer science students and faculty worked with the American Red Cross to investigate the potential for information technology to provide Red Cross disaster services nurses with improved access to accurate community resources in times of disaster. Funded by a national three-year grant, this interdisciplinary partnership led to field testing of an information system to support local community disaster preparedness at seven Red Cross chapters across the United States. The field test results demonstrate the benefits of the technology and the value of interdisciplinary research. The work also created a sustainable learning and research model for the future. This paper describes the collaborative model employed in this interdisciplinary research and exemplifies the benefits to faculty and students of well-timed interdisciplinary and community collaboration. PMID:18600129
A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance to construct more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods to predict ADRs, by implementing and evaluating additional algorithms that have been earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas could all be converted to linear models in form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
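A scorer in the spirit of a weighted profile method can be sketched as follows: a candidate drug inherits ADR scores from known drugs, weighted by the Jaccard similarity of their feature sets. The feature representation and score form are illustrative assumptions, not the paper's exact formulation.

```python
def jaccard(a, b):
    """Jaccard coefficient of two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def adr_scores(candidate_features, known_drugs):
    """known_drugs: {name: (features, adrs)} -> {adr: similarity-weighted score}."""
    scores = {}
    for feats, adrs in known_drugs.values():
        w = jaccard(candidate_features, feats)
        for adr in adrs:
            scores[adr] = scores.get(adr, 0.0) + w
    return scores

known = {"drugA": ({"t1", "t2"}, {"nausea"}),
         "drugB": ({"t3"}, {"rash"})}
s = adr_scores({"t1", "t2"}, known)
```

Note the linear-in-similarity form: each known drug contributes additively, which is the structural property the abstract reports for the family of algorithms compared.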
NASA Astrophysics Data System (ADS)
Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.
2018-01-01
This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.
Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.
2012-01-01
Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will yield improved predictions. Reynolds-Averaged Navier-Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available present an additional challenge for researchers.
Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data on the backshell of a scaled model of the Orion Multi-Purpose Crew Vehicle (MPCV). Heat augmentation effects due to the presence of cavities and RCS jet firings were also investigated. This effort produced a new set of high-quality data which can be used to assess the performance of CFD methods. In this work, a hybrid LES/RANS model developed at North Carolina State University (NCSU) is used to simulate several runs from these experiments and to evaluate the performance of high-fidelity methods as compared to more typical RANS models.
A Computational Approach for Model Update of an LS-DYNA Energy Absorbing Cell
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Jackson, Karen E.; Kellas, Sotiris
2008-01-01
NASA and its contractors are working on structural concepts for absorbing impact energy of aerospace vehicles. Recently, concepts in the form of multi-cell honeycomb-like structures designed to crush under load have been investigated for both space and aeronautics applications. Efforts to understand these concepts are progressing from tests of individual cells to tests of systems with hundreds of cells. Because of fabrication irregularities, geometry irregularities, and material properties uncertainties, the problem of reconciling analytical models, in particular LS-DYNA models, with experimental data is a challenge. A first look at the correlation results between single cell load/deflection data with LS-DYNA predictions showed problems which prompted additional work in this area. This paper describes a computational approach that uses analysis of variance, deterministic sampling techniques, response surface modeling, and genetic optimization to reconcile test with analysis results. Analysis of variance provides a screening technique for selection of critical parameters used when reconciling test with analysis. In this study, complete ignorance of the parameter distribution is assumed and, therefore, the value of any parameter within the range that is computed using the optimization procedure is considered to be equally likely. Mean values from tests are matched against LS-DYNA solutions by minimizing the square error using a genetic optimization. The paper presents the computational methodology along with results obtained using this approach.
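The reconciliation loop described above can be sketched as a genetic-style search over assumed parameter ranges that minimizes the squared error between model predictions and test means. The toy crush-force "model" below stands in for an LS-DYNA run and is purely illustrative.

```python
import random

def calibrate(model, target, bounds, pop=30, gens=40, seed=0):
    """Genetic-style search for the parameter minimizing squared error to the target.

    Consistent with the paper's assumption of complete parameter ignorance,
    candidates are drawn uniformly from the given bounds.
    """
    random.seed(seed)
    lo, hi = bounds
    err = lambda p: (model(p) - target) ** 2
    population = [random.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=err)
        parents = population[: pop // 2]                 # keep the fitter half
        children = [min(hi, max(lo, random.choice(parents)
                                + random.gauss(0.0, 0.1 * (hi - lo))))
                    for _ in range(pop - len(parents))]  # mutated offspring
        population = parents + children
    return min(population, key=err)

def crush_force(thickness):
    """Toy stand-in for an LS-DYNA cell-crush analysis."""
    return 250.0 * thickness

# The search should recover a thickness near 500/250 = 2.0.
best = calibrate(crush_force, target=500.0, bounds=(0.5, 5.0))
```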
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brower, Richard C.
This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.
Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy
NASA Astrophysics Data System (ADS)
Gil, D.; Garcia-Barnes, J.; Hernández-Sabate, A.; Marti, E.
2010-03-01
Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite-dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration, and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
NASA Astrophysics Data System (ADS)
Zhou, Di; Lu, Zhiliang; Guo, Tongqing; Shen, Ennan
2016-06-01
In this paper, two types of unsteady flow problems in turbomachinery, blade flutter and rotor-stator interaction, are studied by means of numerical simulation. For the former, the energy method is often used to predict aeroelastic stability by calculating the aerodynamic work per vibration cycle. The inter-blade phase angle (IBPA) is an important parameter in the computation and may have significant effects on aeroelastic behavior. For the latter, the numbers of blades in each row are usually not equal and the unsteady rotor-stator interactions can be strong. An effective way to perform multi-row calculations is the domain scaling method (DSM). These two cases share a common point: considering their respective features, the computational domain has to be extended to multiple passages (MP). The present work is aimed at modeling these two issues with the developed MP model. Computational fluid dynamics (CFD) techniques are applied to resolve the unsteady Reynolds-averaged Navier-Stokes (RANS) equations and simulate the flow fields. With the parallel technique, the additional time cost due to modeling more passages can be largely decreased. Results are presented for two test cases: a vibrating rotor blade and a turbine stage.
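The energy method mentioned above judges flutter stability by the sign of the aerodynamic work done on the blade over one vibration cycle, W = ∮ F(t)·ẋ(t) dt: positive work feeds the vibration (unstable), negative work damps it. The harmonic force and displacement below are an assumed test case with a known closed form, not the paper's blade loading.

```python
import math

def work_per_cycle(amp_f, amp_x, omega, phase, n=2000):
    """Numerically integrate F(t) * xdot(t) over one vibration period.

    Assumed harmonic motion: x(t) = amp_x*sin(omega*t),
    force: F(t) = amp_f*sin(omega*t + phase).
    """
    period = 2 * math.pi / omega
    dt = period / n
    w = 0.0
    for k in range(n):
        t = k * dt
        f = amp_f * math.sin(omega * t + phase)
        v = amp_x * omega * math.cos(omega * t)
        w += f * v * dt
    return w

# For this case the closed form is W = pi * amp_f * amp_x * sin(phase),
# so the sign of the force-displacement phase decides stability.
w = work_per_cycle(amp_f=2.0, amp_x=0.01, omega=100.0, phase=0.3)
```

In a cascade the phase (and hence the sign of W) depends on the inter-blade phase angle, which is why the IBPA is swept in flutter computations.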
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doss, E.D.; Sikes, W.C.
1992-09-01
This report describes the work performed during Phase 1 and Phase 2 of the collaborative research program established between Argonne National Laboratory (ANL) and Newport News Shipbuilding and Dry Dock Company (NNS). Phase 1 of the program focused on the development of computer models for magnetohydrodynamic (MHD) propulsion. Phase 2 focused on the experimental validation of the thruster performance models and the identification, through testing, of any phenomena which may impact the attractiveness of this propulsion system for shipboard applications. The report discusses in detail the work performed in Phase 2 of the program. In Phase 2, a two-Tesla test facility was designed, built, and operated. The facility test loop, its components, and their design are presented. The test matrix and its rationale are discussed. Representative experimental results of the test program are presented and compared to computer model predictions. In general, the results of the tests and their comparison with the predictions indicate that the phenomena affecting the performance of MHD seawater thrusters are well understood and can be accurately predicted with the developed thruster computer models.
Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.
Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng
2014-10-01
A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
Computer Technology-Integrated Projects Should Not Supplant Craft Projects in Science Education
ERIC Educational Resources Information Center
Klopp, Tabatha J.; Rule, Audrey C.; Schneider, Jean Suchsland; Boody, Robert M.
2014-01-01
The current emphasis on computer technology integration and narrowing of the curriculum has displaced arts and crafts. However, the hands-on, concrete nature of craft work in science modeling enables students to understand difficult concepts and to be engaged and motivated while learning spatial, logical, and sequential thinking skills. Analogy…
Developing Instructional Applications at the Secondary Level. The Computer as a Tool.
ERIC Educational Resources Information Center
McManus, Jack; And Others
Case studies are presented for seven Los Angeles area (California) high schools that worked with Pepperdine University in the IBM/ETS (International Business Machines/Educational Testing Service) Model Schools program, a project which provided training for selected secondary school teachers in the use of personal computers and selected software as…
Computer-Aided Training for Transport Planners: Experience with the Pluto Package.
ERIC Educational Resources Information Center
Bonsall, P. W.
1995-01-01
Describes the PLUTO model, an interactive computer program designed for use in education and training of city planners and engineers. Emphasizes four issues: (1) the balance between realism and simplification; (2) the design of the user interface; (3) comparative advantages of group and solo working; and (4) factors affecting the decision to…
ERIC Educational Resources Information Center
Morrison, Robert G.; Doumas, Leonidas A. A.; Richland, Lindsey E.
2011-01-01
Theories accounting for the development of analogical reasoning tend to emphasize either the centrality of relational knowledge accretion or changes in information processing capability. Simulations in LISA (Hummel & Holyoak, 1997, 2003), a neurally inspired computer model of analogical reasoning, allow us to explore how these factors may…
The Roles of Mental Animations and External Animations in Understanding Mechanical Systems
ERIC Educational Resources Information Center
Hegarty, Mary; Kriz, Sarah; Cate, Christina
2003-01-01
The effects of computer animations and mental animation on people's mental models of a mechanical system are examined. In 3 experiments, students learned how a mechanical system works from various instructional treatments including viewing a static diagram of the machine, predicting motion from static diagrams, viewing computer animations, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boman, Erik G.; Catalyurek, Umit V.; Chevalier, Cedric
2015-01-16
This final progress report summarizes the work accomplished at the Combinatorial Scientific Computing and Petascale Simulations Institute. We developed Zoltan, a parallel mesh partitioning library that made use of accurate hypergraph models to provide load balancing in mesh-based computations. We developed several graph coloring algorithms for computing Jacobian and Hessian matrices and organized them into a software package called ColPack. We developed parallel algorithms for graph coloring and graph matching problems, and also designed multi-scale graph algorithms. Three PhD students graduated, six more are continuing their PhD studies, and four postdoctoral scholars were advised. Six of these students and Fellows have joined DOE Labs (Sandia, Berkeley), as staff scientists or as postdoctoral scientists. We also organized the SIAM Workshop on Combinatorial Scientific Computing (CSC) in 2007, 2009, and 2011 to continue to foster the CSC community.
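Graph coloring compresses sparse Jacobian computation by grouping structurally independent columns. As an illustration only (a minimal greedy distance-1 coloring on a hypothetical conflict graph, not ColPack's actual implementation):

```python
def greedy_coloring(adj):
    """Greedy distance-1 coloring: give each vertex the smallest color
    not already used by one of its colored neighbors."""
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Hypothetical column-conflict graph: columns that share a nonzero row
# conflict and must receive different colors.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
coloring = greedy_coloring(adj)
```

Columns with the same color can then be probed with a single directional-derivative evaluation; here two colors suffice for four columns.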
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Wei, Zhenglun Alan; Sonntag, Simon Johannes; Toma, Milan; Singh-Gryzbon, Shelly; Sun, Wei
2018-04-19
The governing international standard for the development of prosthetic heart valves is International Organization for Standardization (ISO) 5840. This standard requires the assessment of the thrombus potential of transcatheter heart valve substitutes using an integrated thrombus evaluation. Besides experimental flow field assessment and ex vivo flow testing, computational fluid dynamics is a critical component of this integrated approach. This position paper is intended to provide and discuss best practices for the setup of a computational model, numerical solving, post-processing, data evaluation and reporting, as it relates to transcatheter heart valve substitutes. This paper is not intended to be a review of current computational technology; instead, it represents the position of the ISO working group consisting of experts from academia and industry with regards to considerations for computational fluid dynamic assessment of transcatheter heart valve substitutes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyle, Peter A.; Garron, Nicolas; Hudspith, Renwick J.
We compute the renormalisation factors (Z-matrices) of the ΔF = 2 four-quark operators needed for Beyond the Standard Model (BSM) kaon mixing. We work with nf = 2+1 flavours of Domain-Wall fermions whose chiral-flavour properties are essential to maintain a continuum-like mixing pattern. We introduce new RI-SMOM renormalisation schemes, which we argue are better behaved compared to the commonly-used corresponding RI-MOM one. We find that, once converted to MS¯, the Z-factors computed through these RI-SMOM schemes are in good agreement but differ significantly from the ones computed through the RI-MOM scheme. The RI-SMOM Z-factors presented here have been used to compute the BSM neutral kaon mixing matrix elements in the companion paper. In conclusion, we argue that the renormalisation procedure is responsible for the discrepancies observed by different collaborations; we investigate and elucidate the origin of these differences throughout this work.
Folding Proteins at 500 ns/hour with Work Queue.
Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2012-10-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods has been proposed that requires only a large number of short calculations and minimal communication between computer nodes. We considered one of the more accurate variants, Accelerated Weighted Ensemble Dynamics (AWE), for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all-atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, and grids, on multiple architectures (CPU/GPU, 32/64-bit), and in a dynamic environment in which processes are regularly added to or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
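Work Queue itself is a master/worker framework from the Cooperative Computing Lab; purely as an illustration of the pattern it enables, many independent short tasks replacing one long trajectory can be sketched with the standard library (the random-walk workload is hypothetical, not AWE):

```python
from concurrent.futures import ThreadPoolExecutor
import random

def short_walker(seed, steps=1000):
    """Stand-in for one short simulation segment (hypothetical workload):
    a seeded random walk returning its final coordinate."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += rng.uniform(-1.0, 1.0)
    return x

def run_ensemble(n_walkers=8):
    """Farm out many independent short tasks; workers need no
    communication with each other, so the pool scales trivially."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(short_walker, range(n_walkers)))
```

Because each task is seeded independently, the ensemble is reproducible regardless of how many workers execute it or in what order tasks complete.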
Grimby-Ekman, Anna; Andersson, Eva M; Hagberg, Mats
2009-06-19
In the literature there are discussions on the choice of outcome and the need for more longitudinal studies of musculoskeletal disorders. The general aim of this longitudinal study was to analyze musculoskeletal neck pain in a group of young adults. Specific aims were to determine whether psychosocial factors, computer use, high work/study demands, and lifestyle are long-term or short-term factors for musculoskeletal neck pain, and whether these factors are important for developing or ongoing musculoskeletal neck pain. Three regression models were used to analyze the different outcomes. Pain at present was analyzed with a marginal logistic model, for number of years with pain a Poisson regression model was used, and for developing and ongoing pain a logistic model was used. Presented results are odds ratios and proportion ratios (logistic models) and rate ratios (Poisson model). The material consisted of web-based questionnaires answered by 1204 Swedish university students from a prospective cohort recruited in 2002. Perceived stress was a risk factor for pain at present (PR = 1.6), for developing pain (PR = 1.7), and for number of years with pain (RR = 1.3). High work/study demands were associated with pain at present (PR = 1.6), and with number of years with pain when the demands negatively affect home life (RR = 1.3). Computer use pattern (number of times/week with a computer session ≥ 4 h without a break) was a risk factor for developing pain (PR = 1.7), but was also associated with pain at present (PR = 1.4) and number of years with pain (RR = 1.2). Among lifestyle factors, smoking (PR = 1.8) was found to be associated with pain at present. The difference between men and women in prevalence of musculoskeletal pain was confirmed in this study. It was smallest for the outcome ongoing pain (PR = 1.4) compared to pain at present (PR = 2.4) and developing pain (PR = 2.5).
By using different regression models, different aspects of the neck pain pattern could be addressed and the risk factors' impact on the pain pattern identified. Short-term risk factors were perceived stress, high work/study demands, and computer use pattern (break pattern). These were also long-term risk factors. For developing pain, perceived stress and computer use pattern were risk factors.
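The effect measures reported above reduce, in the simplest single-exposure case, to ratios of prevalences and of event rates. A minimal sketch with hypothetical counts (illustrative numbers only, not the study's data):

```python
def proportion_ratio(exposed_cases, exposed_n, unexposed_cases, unexposed_n):
    """Proportion (prevalence) ratio: P(pain | exposed) / P(pain | unexposed)."""
    return (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)

def rate_ratio(exposed_events, exposed_time, unexposed_events, unexposed_time):
    """Rate ratio: events per unit person-time, exposed vs unexposed --
    the quantity a Poisson regression with one binary covariate estimates."""
    return (exposed_events / exposed_time) / (unexposed_events / unexposed_time)

# Hypothetical data: 32 of 100 stressed students report pain at present
# versus 20 of 100 non-stressed students; 13 vs 10 pain-years per 100.
pr = proportion_ratio(32, 100, 20, 100)   # 1.6
rr = rate_ratio(13, 100, 10, 100)         # 1.3
```

In the study itself these ratios are adjusted via marginal logistic and Poisson regression models; the sketch only shows what the unadjusted quantities mean.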
Computational Modeling of Tissue Self-Assembly
NASA Astrophysics Data System (ADS)
Neagu, Adrian; Kosztin, Ioan; Jakab, Karoly; Barz, Bogdan; Neagu, Monica; Jamison, Richard; Forgacs, Gabor
As a theoretical framework for understanding the self-assembly of living cells into tissues, Steinberg proposed the differential adhesion hypothesis (DAH) according to which a specific cell type possesses a specific adhesion apparatus that combined with cell motility leads to cell assemblies of various cell types in the lowest adhesive energy state. Experimental and theoretical efforts of four decades turned the DAH into a fundamental principle of developmental biology that has been validated both in vitro and in vivo. Based on computational models of cell sorting, we have developed a DAH-based lattice model for tissues in interaction with their environment and simulated biological self-assembly using the Monte Carlo method. The present brief review highlights results on specific morphogenetic processes with relevance to tissue engineering applications. Our own work is presented on the background of several decades of theoretical efforts aimed to model morphogenesis in living tissues. Simulations of systems involving about 10⁵ cells have been performed on high-end personal computers with CPU times of the order of days. Studied processes include cell sorting, cell sheet formation, and the development of endothelialized tubes from rings made of spheroids of two randomly intermixed cell types, when the medium in the interior of the tube was different from the external one. We conclude by noting that computer simulations based on mathematical models of living tissues yield useful guidelines for laboratory work and can catalyze the emergence of innovative technologies in tissue engineering.
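The core of such DAH-based lattice simulations is a Metropolis Monte Carlo loop over cell swaps, with type-dependent adhesion energies. A minimal sketch under assumed, illustrative energy parameters (not the authors' model or parameter values):

```python
import math
import random

# Hypothetical adhesion energies per neighbor pair (illustration only):
# like-type contacts are more cohesive (lower energy) than unlike contacts.
E = {("A", "A"): -1.0, ("B", "B"): -1.0, ("A", "B"): -0.4, ("B", "A"): -0.4}

def energy(grid):
    """Total adhesion energy over horizontal and vertical neighbor pairs."""
    n = len(grid)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                total += E[(grid[i][j], grid[i + 1][j])]
            if j + 1 < n:
                total += E[(grid[i][j], grid[i][j + 1])]
    return total

def metropolis_step(grid, beta=2.0, rng=random):
    """Swap a random pair of neighboring cells; keep the swap if it lowers
    the energy, otherwise keep it with Boltzmann probability exp(-beta*dE)."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    di, dj = rng.choice([(0, 1), (1, 0), (0, -1), (-1, 0)])
    k, l = (i + di) % n, (j + dj) % n
    before = energy(grid)
    grid[i][j], grid[k][l] = grid[k][l], grid[i][j]
    delta = energy(grid) - before
    if delta > 0 and rng.random() >= math.exp(-beta * delta):
        grid[i][j], grid[k][l] = grid[k][l], grid[i][j]  # reject: undo swap
```

Iterating this step from a randomly intermixed grid drives the configuration toward the lowest adhesive-energy state, which is the sorted arrangement the DAH predicts.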
Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A
2017-04-01
Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure is still a challenge within a clinical setting for each case anew. In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and able to predict global airflow quantities, as well as local tissue aeration and strains, for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing us to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model. Thus, they provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module allowing real-time validation of the computed results against patient measurements.
First promising results obtained in an acute respiratory distress syndrome patient show the potential of this approach for personalized, computationally guided optimization of mechanical ventilation in the future. Copyright © 2017 the American Physiological Society.
Thill, Serge; Padó, Sebastian; Ziemke, Tom
2014-07-01
The recent trend in cognitive robotics experiments on language learning, symbol grounding, and related issues necessarily entails a reduction of sensorimotor aspects from those provided by a human body to those that can be realized in machines, limiting robotic models of symbol grounding in this respect. Here, we argue that there is a need for modeling work in this domain to explicitly take into account the richer human embodiment even for concrete concepts that prima facie relate merely to simple actions, and illustrate this using distributional methods from computational linguistics which allow us to investigate grounding of concepts based on their actual usage. We also argue that these techniques have applications in theories and models of grounding, particularly in machine implementations thereof. Similarly, considering the grounding of concepts in human terms may be of benefit to future work in computational linguistics, in particular in going beyond "grounding" concepts in the textual modality alone. Overall, we highlight the potential for a mutually beneficial relationship between the two fields. Copyright © 2014 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Sun, Deyu; Rettmann, Maryam E.; Holmes, David R.; Linte, Cristian A.; Packer, Douglas; Robb, Richard A.
2014-03-01
In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked, 2D freehand intra-cardiac echocardiography device, which is registered and merged with a preoperative, high resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vector of the point cloud from the preoperative left atrial model, which is required for the Poisson Equation Reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.
Transport and Dynamics in Toroidal Fusion Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sovinec, Carl
The study entitled, "Transport and Dynamics in Toroidal Fusion Systems," (TDTFS) applied analytical theory and numerical computation to investigate topics of importance to confining plasma, the fourth state of matter, with magnetic fields. A central focus of the work is how non-thermal components of the ion particle distribution affect the "sawtooth" collective oscillation in the core of the tokamak magnetic configuration. Previous experimental and analytical research had shown and described how the oscillation frequency decreases and amplitude increases, leading to "monster" or "giant" sawteeth, when the non-thermal component is increased by injecting particle beams or by exciting ions with imposed electromagnetic waves. The TDTFS study applied numerical computation to self-consistently simulate the interaction between macroscopic collective plasma dynamics and the non-thermal particles. The modeling used the NIMROD code [Sovinec, Glasser, Gianakon, et al., J. Comput. Phys. 195, 355 (2004)] with the energetic component represented by simulation particles [Kim, Parker, Sovinec, and the NIMROD Team, Comput. Phys. Commun. 164, 448 (2004)]. The computations found decreasing growth rates for the instability that drives the oscillations, but they were ultimately limited from achieving experimentally relevant parameters due to computational practicalities. Nonetheless, this effort provided valuable lessons for integrated simulation of macroscopic plasma dynamics. It also motivated an investigation of the applicability of fluid-based modeling to the ion temperature gradient instability, leading to the journal publication [Schnack, Cheng, Barnes, and Parker, Phys. Plasmas 20, 062106 (2013)]. Apart from the tokamak-specific topics, the TDTFS study also addressed topics in the basic physics of magnetized plasma and in the dynamics of the reversed-field pinch (RFP) configuration.
The basic physics work contributed to a study of two-fluid effects on interchange dynamics, where "two-fluid" refers to modeling independent dynamics of electron and ion species without full kinetic effects. In collaboration with scientist Ping Zhu, who received separate support, it was found that the rule-of-thumb criteria on stabilizing interchange has caveats that depend on the plasma density and temperature profiles. This work was published in [Zhu, Schnack, Ebrahimi, et al., Phys. Rev. Lett. 101, 085005 (2008)]. An investigation of general nonlinear relaxation with fluid models was partially supported by the TDTFS study and led to the publication [Khalzov, Ebrahimi, Schnack, and Mirnov, Phys. Plasmas 19, 012111 (2012)]. Work specific to the RFP included an investigation of interchange at large plasma pressure and support for applications [for example, Scheffel, Schnack, and Mirza, Nucl. Fusion 53, 113007 (2013)] of the DEBS code [Schnack, Barnes, Mikic, Harned, and Caramana, J. Comput. Phys. 70, 330 (1987)]. Finally, the principal investigator over most of the award period, Dalton Schnack, supervised a numerical study of modeling magnetic island suppression [Jenkins, Kruger, Hegna, Schnack, and Sovinec, Phys. Plasmas 17, 12502 (2010)].
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller efforts supported by this grant, are summarized more specifically in this report.
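The merging of machine and program characterizations reduces, in its simplest form, to a dot product: measured per-operation times for the abstract machine times the program's operation counts. A minimal sketch with hypothetical operation names and timings (illustrative numbers only):

```python
def predict_runtime(op_counts, machine_times):
    """Estimated execution time: dot product of a program's abstract-
    operation counts and a machine's per-operation times (seconds)."""
    return sum(n * machine_times[op] for op, n in op_counts.items())

# Hypothetical machine characterization and program profile:
machine = {"fadd": 2e-9, "fmul": 3e-9, "load": 5e-9}              # sec/op
program = {"fadd": 1_000_000, "fmul": 500_000, "load": 2_000_000}  # op counts
t = predict_runtime(program, machine)   # 0.0135 s
```

The same program profile can be combined with any machine's characterization, which is what makes the estimate work for arbitrary machine/program pairings.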
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd Arbogast; Steve Bryant; Clint N. Dawson
1998-08-31
This report describes briefly the work of the Center for Subsurface Modeling (CSM) of the University of Texas at Austin (and Rice University prior to September 1995) on the Partnership in Computational Sciences Consortium (PICS) project entitled Grand Challenge Problems in Environmental Modeling and Remediation: Groundwater Contaminant Transport.
Theoretical studies of solar lasers and converters
NASA Technical Reports Server (NTRS)
Heinbockel, John H.
1990-01-01
The research described consisted of developing and refining the continuous flow laser model program including the creation of a working model. The mathematical development of a two pass amplifier for an iodine laser is summarized. A computer program for the amplifier's simulation is included with output from the simulation model.
Neuromorphic Computing for Temporal Scientific Data Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuman, Catherine D.; Potok, Thomas E.; Young, Steven
In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network model achieves comparable results to a previously reported convolutional neural network model, with significantly fewer neurons and synapses required.
Viejo, Guillaume; Khamassi, Mehdi; Brovelli, Andrea; Girard, Benoît
2015-01-01
Current learning theory provides a comprehensive description of how humans and other animals learn, and places behavioral flexibility and automaticity at the heart of adaptive behaviors. However, the computations supporting the interactions between goal-directed and habitual decision-making systems are still poorly understood. Previous functional magnetic resonance imaging (fMRI) results suggest that the brain hosts complementary computations that may differentially support goal-directed and habitual processes in the form of a dynamical interplay rather than a serial recruitment of strategies. To better elucidate the computations underlying flexible behavior, we develop a dual-system computational model that can predict both performance (i.e., participants' choices) and modulations in reaction times during learning of a stimulus–response association task. The habitual system is modeled with a simple Q-Learning algorithm (QL). For the goal-directed system, we propose a new Bayesian Working Memory (BWM) model that searches for information in the history of previous trials in order to minimize Shannon entropy. We propose a model for QL and BWM coordination such that the expensive memory manipulation is under control of, among others, the level of convergence of the habitual learning. We test the ability of QL or BWM alone to explain human behavior, and compare them with the performance of model combinations, to highlight the need for such combinations to explain behavior. Two of the tested combination models are derived from the literature, while the last is our new coordination proposal. In conclusion, all subjects were better explained by model combinations, and the majority of them are explained by our new coordination proposal. PMID:26379518
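The habitual side of the model uses standard tabular Q-Learning; the BWM component and the coordination scheme are specific to the paper and not reproduced here. A minimal sketch of one Q-Learning update, with hypothetical state and action names:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-Learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# One rewarded stimulus-response trial (hypothetical states/actions):
Q = {"stim": {"left": 0.0, "right": 0.0}, "done": {}}
q_update(Q, "stim", "left", 1.0, "done")   # Q["stim"]["left"] becomes 0.1
```

Repeated rewarded trials gradually strengthen the chosen stimulus-response value, which is the slow, incremental learning that characterizes the habitual system.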
Accuracy evaluation of dental models manufactured by CAD/CAM milling method and 3D printing method.
Jeong, Yoo-Geum; Lee, Wan-Sun; Lee, Kyu-Bok
2018-06-01
To evaluate the accuracy of a model made using the computer-aided design/computer-aided manufacture (CAD/CAM) milling method and 3D printing method and to confirm its applicability as a work model for dental prosthesis production. First, a natural tooth model (ANA-4, Frasaco, Germany) was scanned using an oral scanner. The obtained scan data were then used as a CAD reference model (CRM), to produce a total of 10 models each, either using the milling method or the 3D printing method. The 20 models were then scanned using a desktop scanner and the CAD test model was formed. The accuracy of the two groups was compared using dedicated software to calculate the root mean square (RMS) value after superimposing CRM and CAD test model (CTM). The RMS value (152±52 µm) of the model manufactured by the milling method was significantly higher than the RMS value (52±9 µm) of the model produced by the 3D printing method. The accuracy of the 3D printing method is superior to that of the milling method, but at present, both methods are limited in their application as a work model for prosthesis manufacture.
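The accuracy metric used above can be illustrated with a minimal sketch: given corresponding surface points of the superimposed reference (CRM) and test (CTM) models, the RMS value is the root of the mean squared point-wise deviation. This is a simplified illustration; the dedicated inspection software also performs the mesh alignment and point matching.

```python
import math

def rms_deviation(reference_pts, test_pts):
    """Root-mean-square (RMS) deviation between corresponding 3-D surface
    points of superimposed reference (CRM) and test (CTM) models."""
    if len(reference_pts) != len(test_pts):
        raise ValueError("point sets must correspond one-to-one")
    squared = [
        (rx - tx) ** 2 + (ry - ty) ** 2 + (rz - tz) ** 2
        for (rx, ry, rz), (tx, ty, tz) in zip(reference_pts, test_pts)
    ]
    return math.sqrt(sum(squared) / len(squared))
```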
Wind Farm Flow Modeling using an Input-Output Reduced-Order Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter
Wind turbines in a wind farm operate individually to maximize their own power, regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
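Proper orthogonal decomposition, one of the techniques named above, can be sketched with the method of snapshots via the SVD. This is a generic illustration, not the authors' code; the array shapes and the retained-mode count `r` are assumptions:

```python
import numpy as np

def pod_modes(snapshots, r):
    """Proper orthogonal decomposition by the method of snapshots.

    snapshots : (n_points, n_times) array, one flow snapshot per column
    r         : number of POD modes to retain
    Returns (mean, modes, coeffs) with snapshots ~= mean + modes @ coeffs.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                      # POD acts on fluctuations
    u, s, vt = np.linalg.svd(fluct, full_matrices=False)
    modes = u[:, :r]                              # dominant spatial structures
    coeffs = np.diag(s[:r]) @ vt[:r, :]           # temporal coefficients
    return mean, modes, coeffs
```

Projecting the governing equations (or an identified input-output map) onto the retained modes yields the low-order dynamics used for control design.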
COMPILATION OF GROUND-WATER MODELS
Ground-water modeling is a computer-based methodology for mathematical analysis of the mechanisms and controls of ground-water systems for the evaluation of policies, actions, and designs that may affect such systems. In addition to satisfying scientific interest in the workings of...
Microstructure-Based Computational Modeling of Mechanical Behavior of Polymer Micro/Nano Composites
2013-12-01
[Abstract unavailable; extraction yielded only figure-caption fragments: comparison between experimental data and calibrated numerical models for displacement-control tests; displacement-control simulations for all mesh densities, for both work-conjugate and non-work-conjugate formulations; damage modeling; large-deformation experimental tests with a non-uniform strain field.]
Modeling the effect of shroud contact and friction dampers on the mistuned response of turbopumps
NASA Technical Reports Server (NTRS)
Griffin, Jerry H.; Yang, M.-T.
1994-01-01
The contract has been revised. Under the revised scope of work a reduced order model has been developed that can be used to predict the steady-state response of mistuned bladed disks. The approach has been implemented in a computer code, LMCC. It is concluded that: the reduced order model displays structural fidelity comparable to that of a finite element model of an entire bladed disk system with significantly improved computational efficiency; and, when the disk is stiff, both the finite element model and LMCC predict significantly more amplitude variation than was predicted by earlier models. This second result may have important practical ramifications, especially in the case of integrally bladed disks.
Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.
Janiga, Gábor
2014-04-01
This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA using computational fluid dynamics (CFD). Earlier studies showed that proper modeling of the transitional flow features is particularly challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion. The resulting flow cannot be well predicted with simple modeling approaches. Due to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional to weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by particle image velocimetry (PIV), and excellent agreement is observed for the temporally averaged flow velocities. Different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.
2010-12-01
Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. 
Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.
FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015), and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5, 11.1, and 12.0 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 12.0 and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12.0. Conversions of the rest of the TSPA models were also attempted, but program and operational difficulties precluded this. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
NASA Technical Reports Server (NTRS)
Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Myers, Jerry
2015-01-01
Human missions beyond low Earth orbit to destinations such as Mars and asteroids will expose astronauts to novel operational conditions that may pose health risks that are currently not well understood and perhaps unanticipated. In addition, there are limited clinical and research data to inform the development and implementation of health risk countermeasures for these missions. Consequently, NASA's Digital Astronaut Project (DAP) is working to develop and implement computational models and simulations (M&S) to help predict and assess spaceflight health and performance risks, and to enhance countermeasure development. In order to effectively accomplish these goals, the DAP evaluates its models and simulations via a rigorous verification, validation, and credibility assessment process to ensure that the computational tools are sufficiently reliable both to inform research intended to mitigate potential risk and to guide countermeasure development. In doing so, DAP works closely with end-users, such as space life science researchers, to establish appropriate M&S credibility thresholds. We will present and demonstrate the process the DAP uses to vet computational M&S for space biomedical analysis using real M&S examples. We will also provide recommendations on how the larger space biomedical community can employ these concepts to enhance the credibility of their M&S codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annapureddy, Harsha Vardhan Reddy; Nune, Satish K.; Motkuri, Radha K.
2015-01-08
Computational studies on nanofluids composed of metal organic frameworks (MOFs) were performed using molecular modeling techniques. Grand Canonical Monte Carlo (GCMC) simulations were used to study the adsorption behavior of 1,1,1,3,3-pentafluoropropane (R-245fa) in a MIL-101 MOF at various temperatures. To understand the stability of the nanofluid composed of MIL-101 particles, we performed molecular dynamics simulations to compute potentials of mean force between hypothetical MIL-101 fragments terminated with two different kinds of modulators in R-245fa and water. Our computed potentials of mean force indicate that the MOF particles tend to disperse better in water than in R-245fa. The reasons for this observation were analyzed and discussed. Our results agree with experimental results, indicating that the employed potential models and modeling approaches provide a good description of the molecular interactions and are reliable. Work performed by LXD was supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. Work performed by HVRA, SKN, RKM, and PBM was supported by the Office of Energy Efficiency and Renewable Energy, Geothermal Technologies Program. Pacific Northwest National Laboratory is a multiprogram national laboratory operated for DOE by Battelle.
A fast collocation method for a variable-coefficient nonlocal diffusion model
NASA Astrophysics Data System (ADS)
Wang, Che; Wang, Hong
2017-02-01
We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N^3) required by a commonly used direct solver to O(N log N) per iteration and the memory requirement from O(N^2) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N^2) to O(N). Numerical results are presented to show the utility of the fast method.
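The O(N log N) matrix-vector product that such fast methods rely on can be illustrated for the simplest structured case, a Toeplitz matrix, via circulant embedding and the FFT. This is a generic sketch of the standard trick, not the paper's scheme, whose stiffness matrix carries additional variable-coefficient structure:

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """O(N log N) product of an N x N Toeplitz matrix with a vector.

    The Toeplitz matrix (first_col = its first column, first_row = its
    first row, with first_col[0] == first_row[0]) is embedded in a
    2N x 2N circulant, whose action is diagonalized by the FFT.
    """
    n = len(x)
    # First column of the embedding circulant: first_col, a padding zero,
    # then the reversed tail of first_row.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    fx = np.fft.fft(np.concatenate([x, np.zeros(n)]))
    y = np.fft.ifft(np.fft.fft(c) * fx)
    return y[:n].real
```

A dense product costs O(N^2) per application; inside a Krylov iteration the FFT-based product is what brings the per-iteration cost down to O(N log N).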
Virtual Control Systems Environment (VCSE)
Atkins, Will
2018-02-14
Will Atkins, a Sandia National Laboratories computer engineer, discusses cybersecurity research work for process control systems. Will explains his work on the Virtual Control Systems Environment project to develop a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.
NASA Technical Reports Server (NTRS)
1990-01-01
FluiDyne Engineering Corporation, Minneapolis, MN is one of the world's leading companies in the design and construction of wind tunnels. In its design work, FluiDyne uses a computer program called GTRAN. With GTRAN, engineers create a design and test its performance on the computer before actually building a model; should the design fail to meet criteria, the system or any component part can be redesigned and retested on the computer, saving a great deal of time and money.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albright, Brian James; Yin, Lin; Stark, David James
This proposal sought of order 1M core-hours of Institutional Computing time intended to enable computing by a new LANL Postdoc (David Stark) working under LDRD ER project 20160472ER (PI: Lin Yin) on laser-ion acceleration. The project was “off-cycle,” initiating in June of 2016 with a postdoc hire.
Collective Properties of Neural Systems and Their Relation to Other Physical Models
1988-08-05
[Abstract garbled in extraction; recoverable fragments state that quantities were computed explicitly by algorithmic methods introduced earlier, and list affiliations including the Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606, Japan, and E. Barouch, Department of Mathematics and Computer Science, Clarkson University, where this work was carried out in collaboration.]
Validation of a Node-Centered Wall Function Model for the Unstructured Flow Code FUN3D
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee; Vatsa, Veer N.; White, Jeffery
2015-01-01
In this paper, the implementation of two wall function models in the Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) code FUN3D is described. FUN3D is a node-centered method for solving the three-dimensional Navier-Stokes equations on unstructured computational grids. The first wall function model, based on the work of Knopp et al., is used in conjunction with the one-equation turbulence model of Spalart-Allmaras. The second wall function model, also based on the work of Knopp, is used in conjunction with the two-equation k-ω turbulence model of Menter. The wall function models compute the wall momentum and energy flux, which are used to weakly enforce the wall velocity and pressure flux boundary conditions in the mean flow momentum and energy equations. These wall conditions are implemented in an implicit form where the contribution of the wall function model to the Jacobian is also included. The boundary conditions of the turbulence transport equations are enforced explicitly (strongly) on all solid boundaries. The use of the wall function models is demonstrated on four test cases: a flat plate boundary layer, a subsonic diffuser, a 2D airfoil, and a 3D semi-span wing. Where possible, different near-wall viscous spacing tactics are examined. Iterative residual convergence was obtained in most cases. Solution results are compared with theoretical and experimental data for several variations of grid spacing. In general, very good comparisons with data were achieved.
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
HEP Software Foundation Community White Paper Working Group - Detector Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apostolakis, J.
A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.
Computational Intelligence‐Assisted Understanding of Nature‐Inspired Superhydrophobic Behavior
Zhang, Xia; Ding, Bei; Dixon, Sebastian C.
2017-01-01
In recent years, state-of-the-art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature-inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials. PMID:29375975
Breast tumor malignancy modelling using evolutionary neural logic networks.
Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia
2006-01-01
The present work proposes a computer-assisted methodology for the effective modelling of the diagnostic decision for breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms properly applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were properly encoded in a computer database and, accordingly, various alternative modelling techniques were applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, and a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation, and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capability, and in terms of comprehensibility and practical importance for the related medical staff.
Computational Intelligence-Assisted Understanding of Nature-Inspired Superhydrophobic Behavior.
Zhang, Xia; Ding, Bei; Cheng, Ran; Dixon, Sebastian C; Lu, Yao
2018-01-01
In recent years, state-of-the-art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature-inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials.
Computational modeling of cardiac hemodynamics: Current status and future outlook
NASA Astrophysics Data System (ADS)
Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.
2016-01-01
The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms is enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics emerges as a powerful non-invasive modality that can fill this information gap and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling, and simulation that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described using examples from our own work as well as the research of other groups.
Bottoni, Paolo; Cinque, Luigi; De Marsico, Maria; Levialdi, Stefano; Panizzi, Emanuele
2006-06-01
This paper reports on the research activities performed by the Pictorial Computing Laboratory at the University of Rome, La Sapienza, during the last 5 years. Such work is essentially based on the study of human-computer interaction, and spans from metamodels of interaction down to prototypes of interactive systems for both synchronous multimedia communication and groupwork, as well as annotation systems for web pages, also encompassing theoretical and practical issues of visual languages and environments, including pattern recognition algorithms. Some applications are also considered, such as e-learning and collaborative work.
Multidimensional computer simulation of Stirling cycle engines
NASA Technical Reports Server (NTRS)
Hall, C. A.; Porsching, T. A.; Medley, J.; Tew, R. C.
1990-01-01
The computer code ALGAE (algorithms for the gas equations) treats incompressible, thermally expandable, or locally compressible flows in complicated two-dimensional flow regions. The solution method, finite differencing schemes, and basic modeling of the field equations in ALGAE are applicable to engineering design settings of the type found in Stirling cycle engines. The use of ALGAE to model multiple components of the space power research engine (SPRE) is reported. Videotape computer simulations of the transient behavior of the working gas (helium) in the heater-regenerator-cooler complex of the SPRE demonstrate the usefulness of such a program in providing information on thermal and hydraulic phenomena in multiple component sections of the SPRE.
Reduced-Order Models for the Aeroelastic Analysis of Ares Launch Vehicles
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.
2010-01-01
This document presents the development and application of unsteady aerodynamic, structural dynamic, and aeroelastic reduced-order models (ROMs) for the ascent aeroelastic analysis of the Ares I-X flight test and Ares I crew launch vehicles using the unstructured-grid, aeroelastic FUN3D computational fluid dynamics (CFD) code. The purpose of this work is to perform computationally-efficient aeroelastic response calculations that would be prohibitively expensive via computation of multiple full-order aeroelastic FUN3D solutions. These efficient aeroelastic ROM solutions provide valuable insight regarding the aeroelastic sensitivity of the vehicles to various parameters over a range of dynamic pressures.
A Model-Driven Approach for Telecommunications Network Services Definition
NASA Astrophysics Data System (ADS)
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.
Development of X-TOOLSS: Preliminary Design of Space Systems Using Evolutionary Computation
NASA Technical Reports Server (NTRS)
Schnell, Andrew R.; Hull, Patrick V.; Turner, Mike L.; Dozier, Gerry; Alverson, Lauren; Garrett, Aaron; Reneau, Jarred
2008-01-01
Evolutionary computational (EC) techniques such as genetic algorithms (GA) have been identified as promising methods to explore the design space of mechanical and electrical systems at the earliest stages of design. In this paper the authors summarize their research in the use of evolutionary computation to develop preliminary designs for various space systems. An evolutionary computational solver developed over the course of the research, X-TOOLSS (Exploration Toolset for the Optimization of Launch and Space Systems) is discussed. With the success of early, low-fidelity example problems, an outline of work involving more computationally complex models is discussed.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
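The bootstrap confidence intervals mentioned above can be sketched in a few lines: resample the data with replacement, recompute the statistic, and read the confidence limits off the empirical quantiles (the percentile method). This is a minimal illustration only; the function name, the replicate count, and the fixed seed are assumptions:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic."""
    rng = random.Random(seed)
    # Recompute the statistic on n_boot resamples drawn with replacement.
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Because the method only requires recomputing the statistic on resampled data, it is largely free of distributional assumptions, at the price of the highly repetitive computation the abstract describes.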
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
2017-03-24
Complex physics and long computation times hinder the adoption of computer-aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation-of-time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to the particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
Hierarchic models for laminated plates
NASA Technical Reports Server (NTRS)
Szabo, Barna A.; Actis, Ricardo L.
1991-01-01
The research conducted on the formulation of hierarchic models for laminated plates is described. The work is an extension of earlier work on laminated strips. The use of a single parameter, beta, which represents the degree to which the equilibrium equations of three-dimensional elasticity are satisfied, is investigated. The powers of beta identify members of the hierarchic sequence. Numerical examples analyzed with the proposed sequence of models are included. The results obtained for square plates with uniform loading and homogeneous boundary conditions are very encouraging. Several cross-ply and angle-ply laminates were evaluated, and the results were compared with those of the fully three-dimensional model, computed using MSC/PROBE, and with previously reported work on laminated strips.
NASA Astrophysics Data System (ADS)
Seo, Hyeon; Kim, Donghyeon; Jun, Sung Chan
2016-06-01
Electrical brain stimulation (EBS) is an emerging therapy for the treatment of neurological disorders, and computational modeling studies of EBS have been used to determine the optimal parameters for highly cost-effective electrotherapy. Recent notable growth in computing capability has enabled researchers to consider an anatomically realistic head model that represents the full head and complex geometry of the brain rather than the previous simplified partial head model (extruded slab) that represents only the precentral gyrus. In this work, subdural cortical stimulation (SuCS) was found to offer a better understanding of the differential activation of cortical neurons in the anatomically realistic full-head model than in the simplified partial-head models. We observed that layer 3 pyramidal neurons had comparable stimulation thresholds in both head models, while layer 5 pyramidal neurons showed a notable discrepancy between the models; in particular, layer 5 pyramidal neurons demonstrated asymmetry in the thresholds and action potential initiation sites in the anatomically realistic full-head model. Overall, the anatomically realistic full-head model may offer a better understanding of layer 5 pyramidal neuronal responses. Accordingly, the effects of using the realistic full-head model in SuCS are compelling in computational modeling studies, even though this modeling requires substantially more effort.
Element Masses in the Crab Nebula
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.
Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni ii] λ 7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.
Computer modelling of technogenic thermal pollution zones in large water bodies
NASA Astrophysics Data System (ADS)
Parshakova, Ya N.; Lyubimova, T. P.
2018-01-01
In the present work, the thermal pollution zones created by the discharge of heated water from thermal power plants are investigated using the example of the Permskaya Thermal Power Plant (Permskaya TPP or Permskaya GRES), which is one of the largest thermal power plants in Europe. The study is performed for different technological and hydrometeorological conditions. Since the vertical temperature distribution in such wastewater reservoirs is highly inhomogeneous, the computations are performed within the framework of a 3D model.
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Whole earth modeling: developing and disseminating scientific software for computational geophysics.
NASA Astrophysics Data System (ADS)
Kellogg, L. H.
2016-12-01
Historically, a great deal of specialized scientific software for modeling and data analysis has been developed by individual researchers or small groups of scientists working on their own specific research problems. As the magnitude of available data and computer power has increased, so has the complexity of scientific problems addressed by computational methods, creating both a need to sustain existing scientific software, and to expand its development to take advantage of new algorithms, new software approaches, and new computational hardware. To that end, communities like the Computational Infrastructure for Geodynamics (CIG) have been established to support the use of best practices in scientific computing for solid earth geophysics research and teaching. Working as a scientific community enables computational geophysicists to take advantage of technological developments, improve the accuracy and performance of software, build on prior software development, and collaborate more readily. The CIG community, and others, have adopted an open-source development model, in which code is developed and disseminated by the community in an open fashion, using version control and software repositories like Git. One emerging issue is how to adequately identify and credit the intellectual contributions involved in creating open source scientific software. The traditional method of disseminating scientific ideas, peer reviewed publication, was not designed for reviewing or crediting scientific software, although emerging publication strategies, such as software journals, are attempting to address the need. We are piloting an integrated approach in which authors are identified and credited as scientific software is developed and run. Successful software citation requires integration with the scholarly publication and indexing mechanisms as well, to assign credit, ensure discoverability, and provide provenance for software.
High level cognitive information processing in neural networks
NASA Technical Reports Server (NTRS)
Barnden, John A.; Fields, Christopher A.
1992-01-01
Two related research efforts were addressed: (1) high-level connectionist cognitive modeling; and (2) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically realistic models of local neural circuits, and to understand the computational behavior of such models. In keeping with the nature of NASA's Innovative Research Program, all the work conducted under the grant was highly innovative. For instance, the following ideas, all summarized here, are contributions to the study of connectionist/neural networks: (1) the temporal-winner-take-all, relative-position encoding, and pattern-similarity association techniques; (2) the importation of logical combinators into connectionism; (3) the use of analogy-based reasoning as a bridge across the gap between the traditional symbolic paradigm and the connectionist paradigm; and (4) the application of connectionism to the domain of belief representation/reasoning. The work on local neural circuit modeling also departs significantly from the work of related researchers. In particular, its concentration on low-level neural phenomena that could support high-level cognitive processing is unusual within the area of biological local circuit modeling, and it also serves to expand the horizons of the artificial neural net field.
Linking Working Memory and Long-Term Memory: A Computational Model of the Learning of New Words
ERIC Educational Resources Information Center
Jones, Gary; Gobet, Fernand; Pine, Julian M.
2007-01-01
The nonword repetition (NWR) test has been shown to be a good predictor of children's vocabulary size. NWR performance has been explained using phonological working memory, which is seen as a critical component in the learning of new words. However, no detailed specification of the link between phonological working memory and long-term memory…
Unsteady flow model for circulation-control airfoils
NASA Technical Reports Server (NTRS)
Rao, B. M.
1979-01-01
An analysis and a numerical lifting surface method are developed for predicting the unsteady airloads on two-dimensional circulation-control airfoils in incompressible flow. The analysis and the computer program are validated by correlating the computed unsteady airloads with test data and also with other theoretical solutions. Additionally, a mathematical model for predicting the bending-torsion flutter of a two-dimensional airfoil (a reference section of a wing or rotor blade) and a computer program using an iterative scheme are developed. The flutter program has a provision for using the CC airfoil airloads program or the Theodorsen hard flap solution to compute the unsteady lift and moment used in the flutter equations. The adopted mathematical model and the iterative scheme are used to perform a flutter analysis of a typical CC rotor blade reference section. The program seems to work well within the basic assumption of incompressible flow.
Approximate Bayesian computation for spatial SEIR(S) epidemic models.
Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A
2018-02-01
Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied in many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; however, MCMC techniques nevertheless become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
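The core of rejection ABC is simple enough to sketch: draw a parameter from the prior, simulate the epidemic, and keep the draw only if a summary of the simulation lands close to the observed summary. The sketch below uses a deliberately small chain-binomial SIR model rather than the spatial SEIR machinery of the ABSEIR package; all names, priors, and parameter values are illustrative assumptions:

```python
import random

def simulate_final_size(beta, n=100, i0=1, gamma=0.3, steps=60, seed=None):
    """Chain-binomial SIR simulator: each step, every susceptible is
    infected with prob. 1 - (1 - beta/n)^I, and each infective recovers
    with prob. gamma. Returns the final epidemic size."""
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    for _ in range(steps):
        if i == 0:
            break
        p_inf = 1.0 - (1.0 - beta / n) ** i
        new_i = sum(rng.random() < p_inf for _ in range(s))
        new_r = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_i, i + new_i - new_r, r + new_r
    return r + i

def abc_rejection(observed, n_draws=500, tol=8, seed=1):
    """Rejection ABC: keep prior draws of beta whose simulated final
    size is within `tol` of the observed final size."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        beta = rng.uniform(0.0, 2.0)          # uniform prior on beta
        sim = simulate_final_size(beta, seed=rng.randrange(10**9))
        if abs(sim - observed) <= tol:
            accepted.append(beta)
    return accepted

posterior_sample = abc_rejection(observed=60)
```

Because each draw is independent, the loop is trivially parallelizable, which is the property the abstract highlights as ABC's practical advantage over MCMC.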
Xiao, Bo; Imel, Zac E.; Georgiou, Panayiotis; Atkins, David C.; Narayanan, Shrikanth S.
2017-01-01
Empathy is an important psychological process that facilitates human communication and interaction. Enhancement of empathy has profound significance in a range of applications. In this paper, we review emerging directions of research on computational analysis of empathy expression and perception as well as empathic interactions, including their simulation. We summarize the work on empathic expression analysis by the targeted signal modalities (e.g., text, audio, facial expressions). We categorize empathy simulation studies into theory-based emotion space modeling or application-driven user and context modeling. We summarize challenges in computational study of empathy including conceptual framing and understanding of empathy, data availability, appropriate use and validation of machine learning techniques, and behavior signal processing. Finally, we propose a unified view of empathy computation, and offer a series of open problems for future research. PMID:27017830
Development and application of theoretical models for Rotating Detonation Engine flowfields
NASA Astrophysics Data System (ADS)
Fievisohn, Robert
As turbine and rocket engine technology matures, performance increases between successive generations of engine development are becoming smaller. One means of accomplishing significant gains in thermodynamic performance and power density is to use detonation-based heat release instead of deflagration. This work is focused on developing and applying theoretical models to aid in the design and understanding of Rotating Detonation Engines (RDEs). In an RDE, a detonation wave travels circumferentially along the bottom of an annular chamber where continuous injection of fresh reactants sustains the detonation wave. RDEs are currently being designed, tested, and studied as a viable option for developing a new generation of turbine and rocket engines that make use of detonation heat release. One of the main challenges in the development of RDEs is to understand the complex flowfield inside the annular chamber. While simplified models are desirable for obtaining timely performance estimates for design analysis, one-dimensional models may not be adequate as they do not provide flow structure information. In this work, a two-dimensional physics-based model is developed, which is capable of modeling the curved oblique shock wave, exit swirl, counter-flow, detonation inclination, and varying pressure along the inflow boundary. This is accomplished by using a combination of shock-expansion theory, Chapman-Jouguet detonation theory, the Method of Characteristics (MOC), and other compressible flow equations to create a shock-fitted numerical algorithm and generate an RDE flowfield. This novel approach provides a numerically efficient model that can provide performance estimates as well as details of the large-scale flow structures in seconds on a personal computer. Results from this model are validated against high-fidelity numerical simulations that may require a high-performance computing framework to provide similar performance estimates. 
This work provides a designer a new tool to conduct large-scale parametric studies to optimize a design space before conducting computationally-intensive, high-fidelity simulations that may be used to examine additional effects. The work presented in this thesis not only bridges the gap between simple one-dimensional models and high-fidelity full numerical simulations, but it also provides an effective tool for understanding and exploring RDE flow processes.
The Role of Working Memory in Metaphor Production and Comprehension
ERIC Educational Resources Information Center
Chiappe, Dan L.; Chiappe, Penny
2007-01-01
The following tested Kintsch's [Kintsch, W. (2000). "Metaphor comprehension: a computational theory." "Psychonomic Bulletin & Review," 7, 257-266 and Kintsch, W. (2001). "Predication." "Cognitive Science," 25, 173-202] Predication Model, which predicts that working memory capacity is an important factor in metaphor processing. In support of his…
ERIC Educational Resources Information Center
Mayer, Richard E.; Moreno, Roxana
1998-01-01
Multimedia learners (n=146 college students) were able to integrate words and computer-presented pictures more easily when the words were presented aurally rather than visually. This split-attention effect is consistent with a dual-processing model of working memory. (SLD)
Technology Acceptance in Social Work Education: Implications for the Field Practicum
ERIC Educational Resources Information Center
Colvin, Alex Don; Bullock, Angela N.
2014-01-01
The exponential growth and sophistication of new information and computer technology (ICT) have greatly influenced human interactions and provided new metaphors for understanding the world. The acceptance and integration of ICT into social work field education are examined here using the technological acceptance model. This article also explores…
Combining computational models, semantic annotations and simulation experiments in a graph database
Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar
2015-01-01
Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries, and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions, and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
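The central idea, querying models by biological facts rather than by repository-specific identifiers, can be sketched without a real graph database: store models, their components, and semantic annotations as labeled edges, then answer a question like "which models contain a part annotated with this ontology term?" with a two-hop traversal. The node names and labels below are hypothetical stand-ins, not the actual database schema:

```python
# One labeled graph holds SBML models, CellML models, and their annotations.
edges = []  # (source, relation, target) triples

def link(src, rel, dst):
    edges.append((src, rel, dst))

link("model:glycolysis_sbml",   "HAS_SPECIES",    "species:ATP")
link("model:glycolysis_cellml", "HAS_VARIABLE",   "var:ATP_conc")
link("species:ATP",             "ANNOTATED_WITH", "chebi:15422")
link("var:ATP_conc",            "ANNOTATED_WITH", "chebi:15422")

def models_annotated_with(term):
    """Two-hop query: find parts carrying the annotation, then the models
    (regardless of encoding format) that contain those parts."""
    parts = {s for (s, r, t) in edges if r == "ANNOTATED_WITH" and t == term}
    return sorted(s for (s, r, t) in edges
                  if t in parts and s.startswith("model:"))

hits = models_annotated_with("chebi:15422")
```

Note that the query finds both the SBML- and the CellML-encoded model through the shared ontology term, which is the cross-format linking the abstract emphasizes.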
NASA Astrophysics Data System (ADS)
Rasky, Daniel J.; Milstein, Frederick
1986-02-01
Milstein and Hill previously derived formulas for computing the bulk and shear moduli, κ, μ, and μ', at arbitrary pressures, for cubic crystals in which interatomic interaction energies are modeled by pairwise functions, and they carried out the moduli computations using the complete family of Morse functions. The present study extends their work to a pseudopotential description of atomic binding. Specifically: (1) General formulas are derived for determining these moduli under hydrostatic loading within the framework of a pseudopotential model. (2) A two-parameter pseudopotential model is used to describe atomic binding of the alkali metals, and the two parameters are determined from experimental data (the model employs the Heine-Abarenkov potential with the Taylor dielectric function). (3) For each alkali metal (Li, Na, K, Rb, and Cs), the model is used to compute the pressure-versus-volume behavior and, at zero pressure, the binding energy, the density, and the elastic moduli and their pressure derivatives; the theoretical behavior is found to be in excellent agreement with experiment. (4) Calculations are made of κ, μ, and μ' of the bcc alkali metals over wide ranges of hydrostatic compression and expansion. (5) The pseudopotential results are compared with those of arbitrary-central-force models (wherein κ-(2/3)μ=μ'+2P) and with the specific Morse-function results. The pressures, bulk moduli, and zero-pressure shear moduli (as determined for the Morse and pseudopotential models) are in excellent agreement, but important differences appear in the shear moduli under high compressions. The computations in the present paper are for the bcc metals; a subsequent paper will extend this work to include both the bcc and fcc structures, at compressions and expansions where elastic stability or lattice cohesion is, in practice, lost.
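The pipeline the abstract describes, from a binding-energy model to pressure and bulk modulus, can be illustrated generically: for any energy-versus-volume model E(V), the pressure is P = -dE/dV and the bulk modulus is kappa = -V dP/dV. The sketch below evaluates both by central differences, using a Morse-like E(V) as an illustrative stand-in for the paper's pseudopotential; all function names and constants are assumptions:

```python
import math

def energy(v, d=1.0, a=2.0, v0=1.0):
    """Morse-like binding energy as a function of volume, via the linear
    scale x = (V/V0)^(1/3); minimum at v = v0 with depth d."""
    x = (v / v0) ** (1.0 / 3.0)
    return d * (math.exp(-2 * a * (x - 1)) - 2 * math.exp(-a * (x - 1)))

def pressure(v, h=1e-6):
    """P = -dE/dV by central difference."""
    return -(energy(v + h) - energy(v - h)) / (2 * h)

def bulk_modulus(v, h=1e-4):
    """kappa = -V dP/dV by central difference of the pressure."""
    return -v * (pressure(v + h) - pressure(v - h)) / (2 * h)

p0 = pressure(1.0)       # ~0 at the equilibrium volume
k0 = bulk_modulus(1.0)   # positive: the minimum is mechanically stable
```

Sampling these functions over a range of volumes gives exactly the pressure-versus-volume and modulus-versus-compression curves the study tabulates; only the underlying E(V) (Morse pair sums versus pseudopotential) differs.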
Computational Model for Ethnographically Informed Systems Design
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; James, Anne; Shah, Nazaraf; Terken, Jacques
This paper presents a computational model for ethnographically informed systems design that can support complex and distributed cooperative activities. The model is based on an ethnographic framework consisting of three important dimensions (distributed coordination; awareness of work; and plans and procedures) and the BDI (Belief, Desire and Intention) model of intelligent agents. The ethnographic framework is used to conduct ethnographic analysis and to organise ethnographically driven information into the three dimensions, whereas the BDI model allows such information to be mapped onto the underlying concepts of multi-agent systems. The advantage of this model is that it is built upon an adaptation of existing, mature, and well-understood techniques. Through the use of this model, we also address the cognitive aspects of systems design.
Helfer, Peter; Shultz, Thomas R
2014-12-01
The widespread availability of calorie-dense food is believed to be a contributing cause of an epidemic of obesity and associated diseases throughout the world. One possible countermeasure is to empower consumers to make healthier food choices with useful nutrition labeling. An important part of this endeavor is to determine the usability of existing and proposed labeling schemes. Here, we report an experiment on how four different labeling schemes affect the speed and nutritional value of food choices. We then apply decision field theory, a leading computational model of human decision making, to simulate the experimental results. The psychology experiment shows that quantitative, single-attribute labeling schemes have greater usability than multiattribute and binary ones, and that they remain effective under moderate time pressure. The computational model simulates these psychological results and provides explanatory insights into them. This work shows how experimental psychology and computational modeling can contribute to the evaluation and improvement of nutrition-labeling schemes. © 2014 New York Academy of Sciences.
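Decision field theory models a choice as a stochastic accumulation: attention shifts among attributes, and each option's preference state integrates attention-weighted valences until one option crosses a decision threshold (the crossing time is the predicted decision time). The sketch below is a toy version of that dynamic, not the fitted model from the study; the items, attribute scores, attention weights, and threshold are all illustrative assumptions:

```python
import random

def dft_choice(options, weights, threshold=1.5, decay=0.95,
               seed=0, max_steps=1000):
    """Toy decision-field-theory run. options maps an item to its
    attribute values; weights gives the attention probability of each
    attribute. Returns (chosen item, decision time in steps)."""
    rng = random.Random(seed)
    names = list(options)
    n_attr = len(next(iter(options.values())))
    pref = dict.fromkeys(names, 0.0)
    for step in range(1, max_steps + 1):
        attr = rng.choices(range(n_attr), weights=weights)[0]  # attention shift
        mean_val = sum(options[o][attr] for o in names) / len(names)
        for o in names:
            # preference decays, then accumulates the relative valence
            pref[o] = decay * pref[o] + (options[o][attr] - mean_val)
        best = max(names, key=pref.get)
        if pref[best] >= threshold:
            return best, step
    return best, max_steps            # deadline reached without a crossing

# Hypothetical food items scored on (taste, healthiness); the weights
# describe a moderately health-focused consumer.
items = {"snack": (0.9, 0.2), "fruit": (0.5, 0.9)}
choice, decision_time = dft_choice(items, weights=(0.3, 0.7))
```

In a simulation study like the one reported, a labeling scheme would be encoded as a change to the attribute values or attention weights, and choice quality and decision time would be read off many such runs.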
High performance hybrid functional Petri net simulations of biological pathway models on CUDA.
Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential of providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) enabled the scientific community to port a variety of compute intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
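The discrete/continuous mix that makes these nets "hybrid" can be shown in miniature: a discrete place holds tokens (e.g., a gene switched on), while a continuous place holds a concentration that a continuous transition updates each time step at a token-dependent rate. The sketch below is a sequential pure-Python toy, with hypothetical place names and rate constants; the paper's GPU speedup comes from running many such cell updates in parallel, which is not shown here:

```python
def simulate(steps=100, dt=0.01):
    """One continuous place ("protein") fed by a synthesis rate gated by
    a discrete place ("gene_on"), advanced with fixed Euler steps."""
    marking = {"gene_on": 1}           # discrete place: token count
    conc = {"protein": 0.0}            # continuous place: concentration
    k_syn, k_deg = 5.0, 1.0            # illustrative rate constants
    for _ in range(steps):
        rate = k_syn * marking["gene_on"] - k_deg * conc["protein"]
        conc["protein"] += rate * dt   # continuous transition firing
    return conc["protein"]

protein_level = simulate()             # relaxes toward k_syn / k_deg = 5
```

In a model such as Delta-Notch boundary formation, each of the 1,600 cells carries its own copy of places like these, and coupling terms between neighboring cells enter the rate functions.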
Studies of Human Memory and Language Processing.
ERIC Educational Resources Information Center
Collins, Allan M.
The purposes of this study were to determine the nature of human semantic memory and to obtain knowledge usable in the future development of computer systems that can converse with people. The work was based on a computer model which is designed to comprehend English text, relating the text to information stored in a semantic data base that is…
Computational Toxicology at the US EPA
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in America’s air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available.
Computer-aided drug design at Boehringer Ingelheim
NASA Astrophysics Data System (ADS)
Muegge, Ingo; Bergner, Andreas; Kriegl, Jan M.
2017-03-01
Computer-Aided Drug Design (CADD) is an integral part of the drug discovery endeavor at Boehringer Ingelheim (BI). CADD contributes to the evaluation of new therapeutic concepts, identifies small molecule starting points for drug discovery, and develops strategies for optimizing hit and lead compounds. The CADD scientists at BI benefit from the global use and development of both software platforms and computational services. A number of computational techniques developed in-house have significantly changed the way early drug discovery is carried out at BI. In particular, virtual screening in vast chemical spaces, which can be accessed by combinatorial chemistry, has added a new option for the identification of hits in many projects. Recently, a new framework has been implemented allowing fast, interactive predictions of relevant on- and off-target endpoints and other optimization parameters. In addition to the introduction of this new framework at BI, CADD has been focusing on enabling medicinal chemists to independently perform an increasing amount of molecular modeling and design work. This is made possible through the deployment of MOE as a global modeling platform, allowing computational and medicinal chemists to freely share ideas and modeling results. Furthermore, a central communication layer called the computational chemistry framework provides broad access to predictive models and other computational services.
Computational neuroanatomy: ontology-based representation of neural components and connectivity
Rubin, Daniel L; Talos, Ion-Florin; Halle, Michael; Musen, Mark A; Kikinis, Ron
2009-01-01
Background A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. Results We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Conclusion Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future. PMID:19208191
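The "computationally evaluable" property the abstract describes amounts to this: once neural connectivity is stored as explicit relations, a reasoner can infer whether activity can propagate between structures, including under a simulated lesion. The sketch below uses an informal fragment of the motor system as a toy stand-in for the paper's ontology; the structure names and relation set are illustrative:

```python
# Explicit connectivity: structure -> structures it projects to.
projects_to = {
    "primary_motor_cortex": ["internal_capsule"],
    "internal_capsule": ["corticospinal_tract"],
    "corticospinal_tract": ["spinal_cord_ventral_horn"],
    "spinal_cord_ventral_horn": ["skeletal_muscle"],
}

def reaches(src, dst, lesioned=frozenset()):
    """Can activity propagate from src to dst along projects_to edges,
    skipping any lesioned (non-conducting) structures?"""
    frontier, seen = [src], set()
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        if node in seen or node in lesioned:
            continue
        seen.add(node)
        frontier.extend(projects_to.get(node, []))
    return False

intact = reaches("primary_motor_cortex", "skeletal_muscle")
after_lesion = reaches("primary_motor_cortex", "skeletal_muscle",
                       lesioned=frozenset({"internal_capsule"}))
```

Representing abnormal connectivity in disease states is then just a different edge set, and surgical-planning queries become graph queries of this kind.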
Models@Home: distributed computing in bioinformatics using a screensaver based approach.
Krieger, Elmar; Vriend, Gert
2002-02-01
Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
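The scheduling property the abstract singles out, stringent control over jobs so that a crashed screensaver host cannot silently lose work, reduces to a master that tracks a deadline per assigned job and re-queues any job whose worker goes silent. The sketch below is a single-process toy of that idea with hypothetical class and job names, not the actual Models@Home protocol:

```python
import queue
import time

class Master:
    """Toy master: hands out coarse-grained jobs, tracks a deadline per
    running job, and re-queues any job whose worker is presumed lost."""

    def __init__(self, jobs, timeout=0.05):
        self.pending = queue.Queue()
        for job in jobs:
            self.pending.put(job)
        self.running = {}                  # job -> deadline
        self.done = []
        self.timeout = timeout

    def assign(self):
        self._requeue_lost()
        if self.pending.empty():
            return None
        job = self.pending.get()
        self.running[job] = time.monotonic() + self.timeout
        return job

    def report(self, job):                 # a worker returns its result
        self.running.pop(job, None)
        self.done.append(job)

    def _requeue_lost(self):
        now = time.monotonic()
        for job, deadline in list(self.running.items()):
            if deadline < now:             # worker went silent: recover job
                del self.running[job]
                self.pending.put(job)

master = Master(["job1", "job2", "job3"])
a = master.assign(); master.report(a)      # a reliable worker finishes
b = master.assign()                        # this worker silently crashes
time.sleep(0.06)                           # b's deadline passes
c = master.assign()                        # next fresh job goes out first
d = master.assign()                        # ... then the recovered lost job
master.report(c); master.report(d)
```

Because jobs are coarse-grained and independent, this recovery logic is all the coordination the cluster needs; the worker-side programs (WHAT IF, YASARA) run unmodified.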
NASA Astrophysics Data System (ADS)
Semenov, Sergey; Carle, Florian; Medale, Marc; Brutin, David
2017-12-01
The work is focused on obtaining boundary conditions for a one-sided numerical model of thermoconvective instabilities in evaporating pinned sessile droplets of ethanol on heated substrates. In the one-sided model, appropriate boundary conditions for heat and mass transfer equations are required at the droplet surface. Such boundary conditions are obtained in the present work based on a derived semiempirical theoretical formula for the total droplet's evaporation rate, and on a two-parametric nonisothermal approximation of the local evaporation flux. The main purpose of these boundary conditions is to be applied in future three-dimensional (3D) one-sided numerical models in order to save a lot of computational time and resources by solving equations only in the droplet domain. The two parameters needed for the nonisothermal approximation of the local evaporation flux are obtained by fitting computational results of a 2D two-sided numerical model. Such a model is validated here against parabolic flight experiments and the theoretical value of the total evaporation rate. This study combines theoretical, experimental, and computational approaches in convective evaporation of sessile droplets. The influence of the gravity level on evaporation rate and contributions of different mechanisms of vapor transport (diffusion, Stefan flow, natural convection) are shown. The qualitative difference (in terms of developing thermoconvective instabilities) between steady-state and unsteady numerical approaches is demonstrated.
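The two-parameter fitting step described above can be sketched as a brute-force least-squares fit. The functional form j(r) = j0 * (1 - (r/R)^2)^(-lam) and all numbers below are illustrative assumptions standing in for the paper's actual approximation and 2D model output:

```python
# Toy fit of a two-parameter local evaporation flux profile to synthetic
# "2D model" data, in the spirit of calibrating a one-sided model.
R = 1.0
r_pts = [0.0, 0.3, 0.5, 0.7, 0.9]

def flux(r, j0, lam):
    # Hypothetical flux shape; diverges toward the pinned contact line.
    return j0 * (1.0 - (r / R) ** 2) ** (-lam)

# Synthetic data generated with known parameters.
true_j0, true_lam = 2.0, 0.35
data = [flux(r, true_j0, true_lam) for r in r_pts]

# Brute-force least-squares fit over a parameter grid.
best = None
for i in range(1, 401):
    j0 = i * 0.01                  # 0.01 .. 4.00
    for k in range(0, 101):
        lam = k * 0.01             # 0.00 .. 1.00
        sse = sum((flux(r, j0, lam) - d) ** 2 for r, d in zip(r_pts, data))
        if best is None or sse < best[0]:
            best = (sse, j0, lam)

fit_sse, fit_j0, fit_lam = best
```

The grid search recovers the generating parameters; a real calibration would use a proper nonlinear least-squares routine, but the objective being minimized is the same.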
The Nucleon Axial Form Factor and Staggered Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Aaron Scott
The study of neutrino oscillation physics is a major research goal of the worldwide particle physics program over the upcoming decade. Many new experiments are being built to study the properties of neutrinos and to answer questions about the phenomenon of neutrino oscillation. These experiments need precise theoretical cross sections in order to access fundamental neutrino properties. Neutrino oscillation experiments often use large atomic nuclei as scattering targets, which are challenging for theorists to model. Nuclear models rely on free-nucleon amplitudes as inputs. These amplitudes are constrained by scattering experiments with large nuclear targets that rely on the very same nuclear models. The work in this dissertation is the first step of a new initiative to isolate and compute elementary amplitudes with theoretical calculations to support the neutrino oscillation experimental program. Here, the effort focuses on computing the axial form factor, which is the largest contributor of systematic error in the primary signal measurement process for neutrino oscillation studies, quasielastic scattering. Two approaches are taken. First, neutrino scattering data on a deuterium target are reanalyzed with a model-independent parametrization of the axial form factor to quantify the present uncertainty in the free-nucleon amplitudes. The uncertainties on the free-nucleon cross section are found to be underestimated by about an order of magnitude compared to the ubiquitous dipole model parametrization. The second approach uses lattice QCD to perform a first-principles computation of the nucleon axial form factor. The Highly Improved Staggered Quark (HISQ) action is employed for both valence and sea quarks. The results presented in this dissertation are computed at physical pion mass for one lattice spacing. This work presents a computation of the axial form factor at zero momentum transfer, and forms the basis for a computation of the axial form factor momentum dependence with an extrapolation to the continuum limit and a full systematic error budget.
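The dipole parametrization referred to above has a simple one-parameter shape. The sketch below uses commonly quoted illustrative values (g_A ~ 1.27, M_A ~ 1.0 GeV), not results from this dissertation:

```python
# Dipole parametrization of the nucleon axial form factor:
#   F_A(Q^2) = g_A / (1 + Q^2 / M_A^2)^2
G_A = 1.27   # axial coupling, F_A(0); illustrative value
M_A = 1.0    # axial mass in GeV; illustrative value

def dipole_fa(q2, g_a=G_A, m_a=M_A):
    """Axial form factor at squared momentum transfer q2 (GeV^2)."""
    return g_a / (1.0 + q2 / m_a**2) ** 2

# F_A falls monotonically with Q^2; at Q^2 = M_A^2 it equals g_A / 4.
fa_zero = dipole_fa(0.0)
fa_ma2 = dipole_fa(1.0)
```

The point of the model-independent reanalysis is precisely that this rigid shape understates the uncertainty: a single parameter M_A forces the low- and high-Q^2 behavior to move together.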
The Center for Computational Biology: resources, achievements, and challenges
Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2011-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221
The Center for Computational Biology: resources, achievements, and challenges.
Toga, Arthur W; Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2012-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators include the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains.
Game Theory for Proactive Dynamic Defense and Attack Mitigation in Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letchford, Joshua
While there has been a great deal of security research focused on preventing attacks, there has been less work on how one should balance security and resilience investments. In this work we developed and evaluated models that captured both explicit defenses and other mitigations that reduce the impact of attacks. We examined these issues both in more broadly applicable general Stackelberg models and in more specific network and power grid settings. Finally, we compared these solutions to existing work in terms of both solution quality and computational overhead.
Perlow, Richard; Jattuso, Mia
2018-06-01
Researchers have operationalized working memory in different ways, and although working memory-performance relationships are well documented, relatively little attention has been devoted to determining whether seemingly similar measures yield comparable relations with performance outcomes. Our objective is to assess whether two working memory measures that engage the same processes but use different item content yield different relations with two problem-solving criteria. Participants completed a computation-based working memory measure and a reading-based measure prior to performing a computerized simulation. Results reveal differential relations with one of the two criteria and support the notion that the two working memory measures tap working memory capacity as well as other cognitive abilities. One implication for theory development is that researchers should consider incorporating other cognitive abilities in their working memory models, and that the selection of those abilities should correspond to the criterion of interest. One practical implication is that researchers and practitioners should not automatically assume that different phonological loop-based working memory scales are interchangeable.
Understanding the Hows and Whys of Decision-Making: From Expected Utility to Divisive Normalization.
Glimcher, Paul
2014-01-01
Over the course of the last century, economists and ethologists have built detailed models from first principles of how humans and animals should make decisions. Over the course of the last few decades, psychologists and behavioral economists have gathered a wealth of data at variance with the predictions of these economic models. This has led to the development of highly descriptive models that can often predict what choices people or animals will make but without offering any insight into why people make the choices that they do--especially when those choices reduce a decision-maker's well-being. Over the course of the last two decades, neurobiologists working with economists and psychologists have begun to use our growing understanding of how the nervous system works to develop new models of how the nervous system makes decisions. The result, a growing revolution at the interdisciplinary border of neuroscience, psychology, and economics, is a new field called Neuroeconomics. Emerging neuroeconomic models stand to revolutionize our understanding of human and animal choice behavior by combining fundamental properties of neurobiological representation with decision-theoretic analyses. In this overview, one class of these models, based on the widely observed neural computation known as divisive normalization, is presented in detail. The work demonstrates not only that a discrete class of computation widely observed in the nervous system is fundamentally ubiquitous, but how that computation shapes behaviors ranging from visual perception to financial decision-making. It also offers the hope of reconciling economic analysis of what choices we should make with psychological observations of the choices we actually do make. Copyright © 2014 Cold Spring Harbor Laboratory Press; all rights reserved.
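The divisive normalization computation discussed above has a compact textbook form: each unit's response is its input divided by a constant plus the pooled activity of the whole set. The sketch below is a generic illustration with invented parameter names, not a specific model from this overview:

```python
# Minimal sketch of divisive normalization.
def divisive_normalization(inputs, r_max=1.0, sigma=0.1):
    """Each response is r_max * input / (sigma + pooled input)."""
    pool = sum(inputs)
    return [r_max * v / (sigma + pool) for v in inputs]

# Context dependence: the response to a fixed value shrinks when a
# second high-value option is added to the choice set.
alone = divisive_normalization([1.0])
paired = divisive_normalization([1.0, 1.0])
```

This context dependence is exactly why normalization-based models can explain choice behavior that fixed-utility models cannot: the represented value of an option depends on what it is presented alongside.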
Development of seismic tomography software for hybrid supercomputers
NASA Astrophysics Data System (ADS)
Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton
2015-04-01
Seismic tomography is a technique used for computing a velocity model of geologic structure from first-arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications, with improved performance, accuracy, and resolution. To achieve this goal, it is necessary to use modern high-performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel-time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered.
During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
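The linearized update step in the tomography scheme above can be sketched as a Tikhonov-regularized normal-equations solve. The tiny matrix, residual values, and damping parameter below are illustrative only:

```python
# Given a tomographic matrix G linking model adjustments to travel-time
# residuals r, solve the regularized normal equations
#   (G^T G + lam * I) dm = G^T r
# with a small dense solver (a toy stand-in for the production code).

def matmul_t(G, H):
    """Return G^T @ H, with matrices stored as lists of rows."""
    rows, cols, inner = len(G[0]), len(H[0]), len(G)
    return [[sum(G[k][i] * H[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def solve(A, b):
    """Gaussian elimination with partial pivoting (A square, b vector)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

G = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]   # toy ray-path matrix
r = [0.1, 0.3, 0.2]                        # travel-time residuals
lam = 1e-6                                 # damping parameter

GtG = matmul_t(G, G)
Gtr = [row[0] for row in matmul_t(G, [[v] for v in r])]
A = [[GtG[i][j] + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
dm = solve(A, Gtr)
```

With negligible damping the update reduces to the ordinary least-squares adjustment; increasing lam trades data fit for model stability, which is the regularization knob the abstract refers to.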
Nanotoxicity prediction using computational modelling - review and future directions
NASA Astrophysics Data System (ADS)
Saini, Bhavna; Srivastava, Sumit
2018-04-01
Nanomaterials have opened promising avenues for the future in a number of industries and scientific ventures. Applications such as cosmetics, medicines, and electronics employ nanomaterials because of their many compelling properties. The ever-growing use of nanomaterials in daily life has escalated health and environmental risks, and early recognition of nanotoxicity is a major challenge. Research in the field of nanotoxicity faces several problems, such as the inadequacy of proper datasets, the lack of appropriate rules, and the difficulty of characterizing nanomaterials. Computational modelling is a valuable asset for nanomaterials researchers because it can predict toxicity based on previous experimental data. In this study, we review work demonstrating a proper pathway for QSAR analysis of nanomaterials for toxicity modelling. The paper aims to provide a comprehensive view of nano-QSAR, the various theories, tools, and approaches used, along with an outline of future research directions.
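At its core, a QSAR model regresses a toxicity endpoint on material descriptors. The one-descriptor sketch below is entirely invented (descriptor, values, and endpoint alike); real nano-QSAR uses many descriptors and curated experimental datasets:

```python
# Toy nano-QSAR: fit a linear relation between one hypothetical
# descriptor (particle size, nm) and a toxicity score, then predict
# an untested material. All numbers are made up for illustration.
sizes = [10.0, 20.0, 30.0, 40.0]   # descriptor values
tox = [0.9, 0.7, 0.5, 0.3]         # "measured" toxicity scores

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(tox) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, tox))
         / sum((x - mean_x) ** 2 for x in sizes))
intercept = mean_y - slope * mean_x

def predict(size):
    """Predicted toxicity for an untested particle size."""
    return intercept + slope * size
```

The modelling challenges the review emphasizes (dataset inadequacy, characterization) show up here as the hard parts: choosing trustworthy descriptors and endpoints, not fitting the line.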
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Moore, James B.; Long, David L.
2017-01-01
Airframe noise is a growing concern in the vicinity of airports because of population growth and gains in engine noise reduction that have rendered the airframe an equal contributor during the approach and landing phases of flight for many transport aircraft. The leading-edge-slat device of a typical high-lift system for transport aircraft is a prominent source of airframe noise. Two technologies have significant potential for slat noise reduction; the slat-cove filler (SCF) and the slat-gap filler (SGF). Previous work was done on a 2D section of a transport-aircraft wing to demonstrate the implementation feasibility of these concepts. Benchtop hardware was developed in that work for qualitative parametric study. The benchtop models were mechanized for quantitative measurements of performance. Computational models of the mechanized benchtop apparatus for the SCF were developed and the performance of the system for five different SCF assemblies is demonstrated.
Sinking bubbles in stout beers
NASA Astrophysics Data System (ADS)
Lee, W. T.; Kaar, S.; O'Brien, S. B. G.
2018-04-01
A surprising phenomenon witnessed by many is the sinking bubbles seen in a settling pint of stout beer. Bubbles are less dense than the surrounding fluid so how does this happen? Previous work has shown that the explanation lies in a circulation of fluid promoted by the tilted sides of the glass. However, this work has relied heavily on computational fluid dynamics (CFD) simulations. Here, we show that the phenomenon of sinking bubbles can be predicted using a simple analytic model. To make the model analytically tractable, we work in the limit of small bubbles and consider a simplified geometry. The model confirms both the existence of sinking bubbles and the previously proposed mechanism.
Evaluation of the durability of 3D printed keys produced by computational processing of image data
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Kerlin, Scott
2016-05-01
Possession of a working 3D printed key can, for most practical purposes, convince observers that an illicit attempt to gain premises access is authorized. This paper seeks to assess three things. First, work has been performed to determine how easily the data for making models of keys can be obtained through manual measurement. It then presents work done to create a model of the key and determine how easy key modeling could be (particularly after a first key of a given key 'blank' has been made). Finally, it seeks to assess the durability of the keys produced using 3D printing.
Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2006-01-01
The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainties quantification are presented.
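The update idea described above (adjust model parameters until predictions match test data) can be sketched with a one-parameter, gradient-free search. The linear-spring surrogate model and all numbers are invented for illustration; the paper updates many parameters with statistical optimization:

```python
# Pick the stiffness value that minimizes the squared mismatch between
# measured and predicted static deflections.
loads = [100.0, 200.0, 300.0]    # applied static loads
measured = [1.25, 2.50, 3.75]    # "measured" deflections

def predicted(load, stiffness):
    return load / stiffness      # linear-spring surrogate model

def mismatch(stiffness):
    return sum((predicted(p, stiffness) - d) ** 2
               for p, d in zip(loads, measured))

# Grid search over candidate stiffness values (a non-gradient update).
candidates = [60.0 + 0.5 * i for i in range(81)]   # 60 .. 100
best_k = min(candidates, key=mismatch)
```

Replacing the grid search with a gradient-based optimizer gives the other algorithm family the abstract mentions; both minimize the same test-versus-model mismatch.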
2013-01-01
A Computational Model of Public Support for Insurgency (RAND). ...a prototype model illustrating concretely a new approach. The prototype model itself should be seen not as a definitive end point, but rather as a...
Program to Produce Tabulated Data Set Describing NSWC Burn Model for Hydrodynamic Computations
1990-09-11
The author acknowledges the helpful insights of Dr. Raafat Guirguis of the Naval Surface Warfare Center on how the NSWC Burn Model works, and of Drs. Schittke and Feisler of... By Lewis C. Hudson III (NAVSWC TR 90-364, AD-A238 710).
NASA Technical Reports Server (NTRS)
Liang, Shoudan
2000-01-01
Our research effort has produced nine publications in peer-reviewed journals listed at the end of this report. The work reported here is in the following areas: (1) genetic network modeling; (2) autocatalytic model of pre-biotic evolution; (3) theoretical and computational studies of strongly correlated electron systems; (4) reducing thermal oscillations in the atomic force microscope; (5) transcription termination mechanism in prokaryotic cells; and (6) the low glutamine usage in thermophiles obtained by studying completely sequenced genomes. We discuss the main accomplishments of these publications.
Multi-GPU Jacobian accelerated computing for soft-field tomography.
Borsic, A; Attardo, E A; Halter, R J
2012-10-01
Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. 
We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on four GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 min to 14 s. We regard this as an important step toward gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for EIT, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the adjoint method.
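The adjoint-method Jacobian assembly that dominates the computing time above can be illustrated on a toy system. For a measurement m(p) = c^T u with A(p) u = b, each sensitivity is dm/dp_j = -w^T (dA/dp_j) u, where A^T w = c is a single extra "adjoint" solve per measurement. The 2x2 system below is an invented stand-in for a FEM model:

```python
# Adjoint-method sensitivities for a small parametrized linear system,
# verified against finite differences.
def solve2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def assemble(p):
    # A(p) = A0 + p[0]*E[0] + p[1]*E[1]; symmetric, so A^T = A.
    return [[2.0 + p[0], -1.0],
            [-1.0, 2.0 + p[1]]]

E = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]
b = [1.0, 0.0]   # "drive" pattern
c = [0.0, 1.0]   # "measurement" pattern
p = [0.5, 0.25]

A = assemble(p)
u = solve2(A, b)   # one forward solve
w = solve2(A, c)   # one adjoint solve

def quad(w, M, u):
    return sum(w[i] * M[i][j] * u[j] for i in range(2) for j in range(2))

jac_adjoint = [-quad(w, E[j], u) for j in range(2)]

# Finite-difference check of both sensitivities.
eps = 1e-6
def measure(p):
    return solve2(assemble(p), b)[1]   # c^T u picks component 1

jac_fd = [(measure([p[0] + eps, p[1]]) - measure(p)) / eps,
          (measure([p[0], p[1] + eps]) - measure(p)) / eps]
```

The payoff is the solve count: two solves give a full Jacobian row regardless of the number of parameters, whereas finite differences need one solve per parameter, which is why assembling these rows (not solving the forward problem) dominates the 3D reconstruction time.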
Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography
Borsic, A.; Attardo, E. A.; Halter, R. J.
2012-01-01
Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism.
We are able to obtain acceleration factors of 20 times on a single NVIDIA S1070 GPU, and of 50 times on 4 GPUs, bringing the Jacobian computing time for a fine 3D mesh from 12 minutes to 14 seconds. We regard this as an important step towards gaining interactive reconstruction times in 3D imaging, particularly when coupled in the future with acceleration of the forward problem. While we demonstrate results for Electrical Impedance Tomography, these results apply to any soft-field imaging modality where the Jacobian matrix is computed with the Adjoint Method. PMID:23010857
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
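The generating-function idea above can be shown on the simplest case: for a single-type, discrete-time Galton-Watson process with offspring distribution p_k, the transition probability P(Z_1 = k | Z_0 = n) is the coefficient of s^k in f(s)^n, where f is the offspring probability generating function. (The paper treats continuous-time multi-type processes; this toy only illustrates the coefficient-extraction step.)

```python
# Transition probabilities of a Galton-Watson process via powers of the
# offspring pgf, computed as polynomial convolutions.
def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def transition_probs(offspring, n):
    """Coefficients of f(s)^n: index k gives P(Z_1 = k | Z_0 = n)."""
    result = [1.0]
    for _ in range(n):
        result = poly_mul(result, offspring)
    return result

offspring = [0.25, 0.5, 0.25]   # each individual has 0, 1, or 2 children
probs = transition_probs(offspring, 2)
```

The cost of the naive convolution grows with the population size n, which is exactly the regime where the paper's compressed sensing acceleration pays off: when the resulting coefficient vector is sparse, far fewer computations suffice.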
Xu, Jason; Minin, Vladimir N.
2016-01-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377
Great Lakes modeling: Are the mathematics outpacing the data and our understanding of the system?
Mathematical modeling in the Great Lakes has come a long way from the pioneering work done by Manhattan College in the 1970s, when the models operated on coarse computational grids (often lake-wide) and used simple eutrophication formulations. Moving forward 40 years, we are now...
The objective of current work is to develop a new cancer dose-response assessment for chloroform using a physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) model. The PBPK/PD model is based on a mode of action in which the cytolethality of chloroform occurs when the ...
A survey of numerical models for wind prediction
NASA Technical Reports Server (NTRS)
Schonfeld, D.
1980-01-01
A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Usseglio Viretta, Francois L; Graf, Peter A
This presentation describes research work led by NREL with team members from Argonne National Laboratory and Texas A&M University in microstructure analysis, modeling and validation under DOE's Computer-Aided Engineering of Batteries (CAEBAT) program. The goal of the project is to close the gaps between CAEBAT models and materials research by creating predictive models that can be used for electrode design.
NASA Astrophysics Data System (ADS)
An, Soyoung; Choi, Woochul; Paik, Se-Bum
2015-11-01
Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.
PyBoolNet: a python package for the generation, analysis and visualization of boolean networks.
Klarner, Hannes; Streck, Adam; Siebert, Heike
2017-03-01
The goal of this project is to provide a simple interface to working with Boolean networks. Emphasis is put on easy access to a large number of common tasks including the generation and manipulation of networks, attractor and basin computation, model checking and trap space computation, execution of established graph algorithms as well as graph drawing and layouts. PyBoolNet is a Python package for working with Boolean networks that supports simple access to model checking via NuSMV, standard graph algorithms via NetworkX and visualization via dot. In addition, state-of-the-art attractor computation exploiting Potassco ASP is implemented. The package is function-based and uses only native Python and NetworkX data types. https://github.com/hklarner/PyBoolNet. hannes.klarner@fu-berlin.de. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
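One task the package automates, exhaustive attractor computation, can be written from scratch for a tiny synchronous Boolean network. The two-gene mutual-repression circuit below is an invented example, and this is not the PyBoolNet API:

```python
# Exhaustive attractor detection for a 2-variable synchronous Boolean
# network: follow every start state until its trajectory closes a cycle.
def step(state):
    x, y = state
    return (int(not y), int(not x))   # each gene represses the other

def attractors(n_vars=2):
    found = set()
    for start in range(2 ** n_vars):
        state = ((start >> 1) & 1, start & 1)
        seen = []
        while state not in seen:      # iterate until a cycle closes
            seen.append(state)
            state = step(state)
        cycle = seen[seen.index(state):]
        # Canonical form: rotate so the smallest state comes first.
        k = cycle.index(min(cycle))
        found.add(tuple(cycle[k:] + cycle[:k]))
    return found

atts = attractors()
```

This network has two steady states (the two mutually exclusive expression patterns) plus one period-2 oscillation; enumeration like this is only feasible for small networks, which is why PyBoolNet delegates larger instances to model checking and ASP solvers.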
Use of Parallel Micro-Platform for the Simulation the Space Exploration
NASA Astrophysics Data System (ADS)
Velasco Herrera, Victor Manuel; Velasco Herrera, Graciela; Rosano, Felipe Lara; Rodriguez Lozano, Salvador; Lucero Roldan Serrato, Karen
The purpose of this work is to create a parallel micro-platform that simulates the virtual movements of space exploration in 3D. One of the innovations presented in this design is the application of a lever mechanism for the transmission of movement. The development of such a robot is a challenging task, very different from that of industrial manipulators due to a totally different target system of requirements. This work presents the computer-aided study and simulation of the movement of this parallel manipulator. The model was developed using the computer-aided design platform Unigraphics, in which the geometric modeling of each of the components and of the final assembly (CAD) was performed, files for the computer-aided manufacture (CAM) of each of the pieces were generated, and the kinematics of the system was simulated under different driving schemes. We used the MATLAB aerospace toolbox and created an adaptive control module to simulate the system.
Editorial: Computational Creativity, Concept Invention, and General Intelligence
NASA Astrophysics Data System (ADS)
Besold, Tarek R.; Kühnberger, Kai-Uwe; Veale, Tony
2015-12-01
Over the last decade, computational creativity as a field of scientific investigation and computational systems engineering has seen growing popularity. Still, the levels of development of projects aiming at systems for artistic production or performance and of endeavours addressing creative problem-solving or models of creative cognitive capacities are diverging. While the former have already seen several great successes, the latter still remain in their infancy. This volume collects reports on work trying to close the accrued gap.
Analyzing Robotic Kinematics Via Computed Simulations
NASA Technical Reports Server (NTRS)
Carnahan, Timothy M.
1992-01-01
Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It provides an overview of CFD activities at NASA Lewis Research Center, where the main thrust of computational work is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are discussed, along with examples of results obtained with the most recent algorithm developments.
NASA Astrophysics Data System (ADS)
Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid
2017-12-01
Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, and the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous-wave and frequency-domain systems, with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ~600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ~0.25 s per excitation source.
Quantum computation in the analysis of hyperspectral data
NASA Astrophysics Data System (ADS)
Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil
2004-08-01
Recent research on the topic of quantum computation provides us with some quantum algorithms offering higher efficiency and speedup compared to their classical counterparts. In this paper, it is our intent to provide the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis where classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, several time-consuming methods and steps are necessary to analyze these data, including animation, the minimum noise fraction transform, the pixel purity index algorithm, N-dimensional scatter plots, and the identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
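As a point of reference, the kind of classical spectral-matching step that a Grover-style search would accelerate can be sketched as follows. The spectral angle mapper score is a standard classical technique; the four-band spectra and library entries are invented for illustration:

```python
import math

# Spectral Angle Mapper (SAM): a standard classical spectral-matching score.
# A Grover-style search would use a threshold on this score as its oracle.
def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra; smaller = better match."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# Invented four-band library spectra and pixel, purely for illustration.
library = {
    "vegetation": [0.05, 0.08, 0.45, 0.50],
    "soil":       [0.20, 0.25, 0.30, 0.35],
    "water":      [0.10, 0.06, 0.03, 0.02],
}
pixel = [0.06, 0.09, 0.44, 0.48]

# Classical exhaustive matching queries all N library spectra (O(N));
# Grover's algorithm would locate a match in O(sqrt(N)) oracle queries.
best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print(best)   # vegetation
```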
Using Computational and Mechanical Models to Study Animal Locomotion
Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas
2012-01-01
Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026
A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)
Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong
2014-01-01
Background Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. Principal Findings In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that were previously used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear models in form; based on this finding, we propose a new algorithm, the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
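The role of the Jaccard coefficient in profile-based drug-ADR scoring can be sketched in miniature. This is a simplified, unweighted illustration: the drugs, features, and ADR labels below are invented, and the general weighted profile method described in the abstract additionally fits linear weights to the similarity terms:

```python
# Jaccard-weighted profile scoring for drug-ADR association, in miniature.
def jaccard(a, b):
    """Jaccard coefficient of two feature sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Invented training data: each known drug has a feature set and known ADRs.
drug_features = {
    "drugA": {"f1", "f2", "f3"},
    "drugB": {"f2", "f3", "f4"},
    "drugC": {"f5", "f6"},
}
known_adrs = {"drugA": {"nausea"}, "drugB": {"nausea", "rash"}, "drugC": {"rash"}}

def score(query_features, adr):
    """Similarity-weighted vote of the drugs known to cause this ADR."""
    total = sum(jaccard(query_features, f) for f in drug_features.values())
    hit = sum(jaccard(query_features, drug_features[d])
              for d, adrs in known_adrs.items() if adr in adrs)
    return hit / total if total else 0.0

query = {"f1", "f2", "f4"}   # feature profile of a new drug
print(score(query, "nausea"), score(query, "rash"))   # 1.0 0.5
```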
Image analysis and modeling in medical image computing. Recent developments and advances.
Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T
2012-01-01
Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.
Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in future.
ThinkerTools. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
"ThinkerTools" is a computer-based program that aims to develop students' understanding of physics and scientific modeling. The program is composed of two curricula for middle school students, "ThinkerTools Inquiry" and "Model-Enhanced ThinkerTools". "ThinkerTools Inquiry" allows students to explore the…
NASA Astrophysics Data System (ADS)
Jamjareegulgarn, Punyawi; Supnithi, Pornchai; Watthanasangmechai, Kornyanat; Yokoyama, Tatsuhiro; Tsugawa, Takuya; Ishii, Mamoru
2017-07-01
This paper proposes a new expression for computing the bottomside thickness parameter at the equatorial-latitude station Chumphon (10.72°N, 99.37°E), Thailand. Its diurnal variations at this location from 2004 to 2006 are then studied. The proposed expression is derived from two experimental data sources, an FMCW ionosonde and a dual-frequency GPS system, together with some expressions of the NeQuick 2 model. Hence, after both the bottomside thickness parameter computed by the proposed equation, B2bot_Pro, and the bottomside shape parameter (namely, B1_Pro in this work) are computed, the bottomside electron density and the height at which the bottomside electron density drops to 24% of NmF2 (namely, h0.24) can be computed, as shown in this work, using the analytical functions of the IRI model. Moreover, the diurnal variations of B2bot_Pro are compared with those computed from the NeQuick model, B2bot_NeQ, and with the predicted B0 of the IRI-2012 model with the ABT-2009 and Bil-2000 options (namely, "B0_ABT" and "B0_Bil", respectively). The averaged, minimum, and maximum percentage deviations among these bottomside thickness parameters are also computed and shown in this work. Our results show that the diurnal variations of B2bot_Pro at the Chumphon station have the following pattern: they start to increase during nighttime to a first peak during pre-sunrise hours, and then decrease abruptly to their minimum values during sunrise hours. Afterward, they increase again to reach a second peak around local noon and fall gradually back to their starting values during 20-04 LT. The diurnal variations of B2bot_Pro generally follow the same trends as those of B2bot_NeQ and B0_ABT, except during pre-sunrise hours. The pre-sunrise peaks and sunrise collapses in both B2bot_NeQ and B0_ABT can be found occasionally.
On the other hand, the diurnal variations in B2bot_Pro differ from those in B0_Bil, because the variation in B0_Bil is flattened and its pre-sunrise peaks and sunrise collapses disappear. The pre-sunrise peaks of B2bot_Pro at the Chumphon station are higher than those of B2bot_NeQ, B0_ABT, and the observed B0 at other regions. Furthermore, the percentage deviations between B2bot_Pro and B0_ABT (PD_B2B0ABT) are mostly lower than 30% for all seasons of the studied years, in contrast to the other percentage deviations studied in this work. The proposed B2bot_Pro parameters follow a similar trend to B2bot_NeQ and B0_ABT, but it is not conclusive that the proposed values are equivalent to them.
Modeling Advance Life Support Systems
NASA Technical Reports Server (NTRS)
Pitts, Marvin; Sager, John; Loader, Coleen; Drysdale, Alan
1996-01-01
Activities this summer consisted of two projects that involved computer simulation of bioregenerative life support systems for space habitats. Students in the Space Life Science Training Program (SLSTP) used the simulation, Space Station, to learn about relationships between humans, fish, plants, and microorganisms in a closed environment. One student completed a six-week project to modify the simulation by converting the microbes from anaerobic to aerobic and then balancing the simulation's life support system. A detailed computer simulation of a closed lunar station using bioregenerative life support was attempted, but not enough was known about system restraints and constants in plant growth, bioreactor design for space habitats, and food preparation to develop an integrated model with any confidence. Instead of a complete detailed model with broad assumptions concerning the unknown system parameters, a framework for an integrated model was outlined and work was begun on plant and bioreactor simulations. The NASA sponsors and the summer Fellow were satisfied with the progress made during the 10 weeks, and we have planned future cooperative work.
Computational Modeling of Hydrodynamics and Scour around Underwater Munitions
NASA Astrophysics Data System (ADS)
Liu, X.; Xu, Y.
2017-12-01
Munitions deposited in water bodies are a big threat to human health, safety, and the environment. It is thus imperative to predict the motion and the resting status of underwater munitions. A multitude of physical processes are involved, including turbulent flows, sediment transport, granular material mechanics, 6 degree-of-freedom motion of the munition, and potential liquefaction. A clear understanding of this unique physical setting is currently lacking. Consequently, it is extremely hard to make reliable predictions. In this work, we present the computational modeling of two important processes, i.e., hydrodynamics and scour, around munition objects. Other physical processes are also considered in our comprehensive model; however, they are not shown in this talk. To properly model the dynamics of the deforming bed and the motion of the object, an immersed boundary method is implemented in the open-source CFD package OpenFOAM. Fixed-bed and scour cases are simulated and compared with laboratory experiments. Future work on this project will implement the coupling between all the physical processes.
Automated Performance Prediction of Message-Passing Parallel Programs
NASA Technical Reports Server (NTRS)
Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an ongoing effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
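The flavor of the analytic expressions such a tool derives can be sketched with a toy cost model. The model form and all machine parameters below are illustrative assumptions, not measurements from the iPSC/860 or Paragon:

```python
# A toy analytic model of message-passing execution time:
#   T = flops / flop_rate + n_msgs * latency + total_bytes / bandwidth
def predicted_time(flops, flop_rate, n_msgs, msg_bytes, latency, bandwidth):
    compute = flops / flop_rate
    comm = n_msgs * latency + n_msgs * msg_bytes / bandwidth
    return compute + comm

# Strong scaling of a fixed 1e9-flop job, assuming a fixed per-process
# communication pattern once the work is distributed:
for p in (1, 4, 16, 64):
    t = predicted_time(flops=1e9 / p, flop_rate=1e8,
                       n_msgs=0 if p == 1 else 100, msg_bytes=8192,
                       latency=1e-4, bandwidth=1e8)
    print(p, round(t, 4))
```

Even this crude model reproduces a familiar scalability trend: compute time shrinks with the process count while the communication term does not, so speedup flattens.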
Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang
2013-01-01
The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941
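The soft-WTA readout at the heart of this analysis can be sketched in a few lines: each neuron's normalized firing probability implements a posterior over hidden causes. The weights, biases, and input below are invented for illustration; in the work described above, the weights emerge from STDP rather than being set by hand:

```python
import math

# Soft winner-take-all readout: neuron k's normalized activation implements
# a posterior over hidden causes, p(k | x) proportional to exp(w_k . x + b_k).
def soft_wta(weights, biases, x):
    u = [sum(wi * xi for wi, xi in zip(w, x)) + b
         for w, b in zip(weights, biases)]
    m = max(u)                            # subtract max for numerical stability
    e = [math.exp(ui - m) for ui in u]
    z = sum(e)
    return [ei / z for ei in e]           # lateral inhibition = normalization

weights = [[1.0, 0.0], [0.0, 1.0]]        # two "cause" neurons (illustrative)
biases = [0.0, 0.0]
posterior = soft_wta(weights, biases, [2.0, 0.5])
print([round(p, 3) for p in posterior])   # [0.818, 0.182]
```

The output is a probability distribution over causes rather than a single winner, matching the abstract's point that such circuits represent distributions, not static codes.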
Mayne, Richard; Adamatzky, Andrew; Jones, Jeff
2015-01-01
The plasmodium of slime mold Physarum polycephalum behaves as an amorphous reaction-diffusion computing substrate and is capable of apparently ‘intelligent’ behavior. But how does intelligence emerge in an acellular organism? Through a range of laboratory experiments, we visualize the plasmodial cytoskeleton—a ubiquitous cellular protein scaffold whose functions are manifold and essential to life—and discuss its putative role as a network for transducing, transmitting and structuring data streams within the plasmodium. Through a range of computer modeling techniques, we demonstrate how emergent behavior, and hence computational intelligence, may occur in cytoskeletal communications networks. Specifically, we model the topology of both the actin and tubulin cytoskeletal networks and discuss how computation may occur therein. Furthermore, we present bespoke cellular automata and particle swarm models for the computational process within the cytoskeleton and observe the incidence of emergent patterns in both. Our work grants unique insight into the origins of natural intelligence; the results presented here are therefore readily transferable to the fields of natural computation, cell biology and biomedical science. We conclude by discussing how our results may alter our biological, computational and philosophical understanding of intelligence and consciousness. PMID:26478782
Mechanical properties of hydroxyapatite single crystals from nanoindentation data
Zamiri, A.; De, S.
2011-01-01
In this paper we compute elasto-plastic properties of hydroxyapatite single crystals from nanoindentation data using a two-step algorithm. In the first step the yield stress is obtained using hardness and Young’s modulus data, followed by the computation of the flow parameters. The computational approach is first validated with data from the existing literature. It is observed that hydroxyapatite single crystals exhibit anisotropic mechanical response with a lower yield stress along the [1010] crystallographic direction compared to the [0001] direction. Both the work hardening rate and work hardening exponent are found to be higher for indentation along the [0001] crystallographic direction. The stress-strain curves extracted here could be used for developing constitutive models for hydroxyapatite single crystals. PMID:21262492
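A simplified illustration of the first step, with the common Tabor approximation (yield stress roughly hardness divided by a constraint factor near 3) standing in for the paper's exact algorithm; the hardness values are representative numbers for illustration, not the paper's measurements:

```python
# Step one in miniature: estimate yield stress from indentation hardness
# via the Tabor approximation (an assumption, not the paper's method).
def yield_stress_from_hardness(hardness_gpa, constraint_factor=3.0):
    return hardness_gpa / constraint_factor

hardness = {"[0001]": 7.1, "[1010]": 6.4}   # GPa, illustrative values
sigma_y = {d: yield_stress_from_hardness(h) for d, h in hardness.items()}

# Anisotropy consistent with the abstract: lower yield stress along [1010].
print({d: round(s, 2) for d, s in sigma_y.items()})
```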
Fungible weights in logistic regression.
Jones, Jeff A; Waller, Niels G
2016-06-01
In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Scalco, Andrea; Ceschi, Andrea; Sartori, Riccardo
2018-01-01
It is likely that computer simulations will assume a greater role in the near future in investigating and understanding reality (Rand & Rust, 2011). In particular, agent-based models (ABMs) represent a method of investigating social phenomena that blends the knowledge of the social sciences with the advantages of virtual simulations. Within this context, the development of algorithms able to recreate the reasoning engine of autonomous virtual agents represents one of the most fragile aspects, and it is therefore crucial to ground such models in well-supported psychological theoretical frameworks. For this reason, the present work discusses the application of the theory of planned behavior (TPB; Ajzen, 1991) in the context of agent-based modeling: it is argued that this framework might be more helpful than others in developing a valid representation of human behavior in computer simulations. Accordingly, the current contribution considers issues related to the application of the model proposed by the TPB inside computer simulations and suggests potential solutions, with the hope of helping to shorten the distance between the fields of psychology and computer science.
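A minimal sketch of how a TPB reasoning engine might look inside an agent: the weighted-sum form of intention follows Ajzen's framework, but the specific weights, threshold, and control gate below are illustrative assumptions, not a model proposed in the paper:

```python
from dataclasses import dataclass

# A TPB agent in miniature: intention is a weighted sum of attitude,
# subjective norm, and perceived behavioral control (after Ajzen, 1991).
@dataclass
class TPBAgent:
    attitude: float             # evaluation of the behavior, in [0, 1]
    subjective_norm: float      # perceived social pressure, in [0, 1]
    perceived_control: float    # perceived ease of performing it, in [0, 1]
    weights: tuple = (0.4, 0.3, 0.3)   # relative weights (assumed)

    def intention(self):
        wa, wn, wc = self.weights
        return (wa * self.attitude
                + wn * self.subjective_norm
                + wc * self.perceived_control)

    def acts(self, threshold=0.5):
        # Behavior follows intention, gated by (perceived) control.
        return self.intention() > threshold and self.perceived_control > 0.2

agent = TPBAgent(attitude=0.8, subjective_norm=0.6, perceived_control=0.7)
print(round(agent.intention(), 2), agent.acts())   # 0.71 True
```

In an ABM, a population of such agents with heterogeneous attitudes, norms, and control would then interact, with the aggregate behavior emerging from these individual rules.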
Predicting neutron damage using TEM with in situ ion irradiation and computer modeling
NASA Astrophysics Data System (ADS)
Kirk, Marquis A.; Li, Meimei; Xu, Donghua; Wirth, Brian D.
2018-01-01
We have constructed a computer model of irradiation defect production closely coordinated with TEM and in situ ion irradiation of Molybdenum at 80 °C over a range of dose, dose rate and foil thickness. We have reexamined our previous ion irradiation data to assign appropriate error and uncertainty based on more recent work. The spatially dependent cascade cluster dynamics model is updated with recent Molecular Dynamics results for cascades in Mo. After a careful assignment of both ion and neutron irradiation dose values in dpa, TEM data are compared for both ion and neutron irradiated Mo from the same source material. Using the computer model of defect formation and evolution based on the in situ ion irradiation of thin foils, the defect microstructure, consisting of densities and sizes of dislocation loops, is predicted for neutron irradiation of bulk material at 80 °C and compared with experiment. Reasonable agreement between model prediction and experimental data demonstrates a promising direction in understanding and predicting neutron damage using a closely coordinated program of in situ ion irradiation experiment and computer simulation.
3-D modeling of ductile tearing using finite elements: Computational aspects and techniques
NASA Astrophysics Data System (ADS)
Gullerud, Arne Stewart
This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing, the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA), are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues (computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion) enables construction of computational algorithms that remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant-front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant-front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process.
The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar elements (same type, constitutive model, etc.) provides fine-grain parallel computation on each processor. A major focus of the LPCG solver is a new implementation of the Hughes-Winget element-by-element (HW) preconditioner. The implementation employs a weighted dependency graph combined with a new coloring algorithm to provide load-balanced scheduling for the preconditioner and overlapped communication/computation. This approach enables efficient parallel application of the HW preconditioner for arbitrary unstructured meshes.
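The coloring idea behind the preconditioner's scheduling can be sketched with a plain greedy algorithm. The four-element adjacency below is a toy example; the work described above uses a weighted dependency graph and a custom load-balancing coloring variant:

```python
# Greedy graph coloring over an element-adjacency graph: elements that share
# no mesh nodes receive the same color and can be processed concurrently.
def greedy_coloring(adjacency):
    colors = {}
    for elem in sorted(adjacency):        # deterministic visiting order
        used = {colors[n] for n in adjacency[elem] if n in colors}
        colors[elem] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

# Elements 0-3 in a strip; each shares nodes only with immediate neighbors.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors = greedy_coloring(adjacency)
print(colors)   # {0: 0, 1: 1, 2: 0, 3: 1}
```

Each color class then forms one conflict-free parallel pass of the element-by-element sweep, which is what enables load-balanced scheduling and overlapped communication/computation.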