Science.gov

Sample records for de-fg03-01er54617 computer modeling

  1. Final Report DOE Grant No. DE-FG03-01ER54617 Computer Modeling of Microturbulence and Macrostability Properties of Magnetically Confined Plasmas

    SciTech Connect

    Jean-Noel Leboeuf

    2004-03-04

    OAK-B135 We have made significant progress during the past grant period in several key areas of the UCLA and national Fusion Theory Program. This body of work includes both fundamental and applied contributions to MHD and turbulence in DIII-D and Electric Tokamak plasmas, and also to Z-pinches, particularly with respect to the effect of flows on these phenomena. We have successfully carried out interpretive and predictive global gyrokinetic particle-in-cell calculations of DIII-D discharges. We have cemented our participation in the gyrokinetic PIC effort of the SciDAC Plasma Microturbulence Project through working membership in the Summit Gyrokinetic PIC Team. We have continued to teach advanced courses at UCLA pertaining to computational plasma physics and to foster interaction with students and junior researchers. We have in fact graduated two Ph.D. students during the past grant period. The research carried out during that time has resulted in many publications in the premier plasma physics and fusion energy sciences journals and in several invited oral communications at major conferences such as Sherwood, Transport Task Force (TTF), the annual meetings of the Division of Plasma Physics of the American Physical Society and of the European Physical Society, and the 2002 IAEA Fusion Energy Conference (FEC 2002). Many of these have been co-authored with experimentalists at DIII-D.

  2. Computational Modeling of Tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)

    1995-01-01

    This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.

  3. Computer Models of Proteins

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.

  4. Computer Modeling and Simulation

    SciTech Connect

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. Computer Model Documentation Guide.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    These guidelines for communicating effectively the details of computer model design and operation to persons with varying interests in a model recommend the development of four different types of manuals to meet the needs of managers, users, analysts and programmers. The guidelines for preparing a management summary manual suggest a broad spectrum…

  6. Computationally modeling interpersonal trust.

    PubMed

    Lee, Jin Joo; Knox, W Bradley; Wormwood, Jolie B; Breazeal, Cynthia; Desteno, David

    2013-01-01

    We present a computational model capable of predicting, above human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
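
The forward-algorithm scoring that underlies this kind of HMM comparison can be sketched in a few lines. Everything below, the cue vocabulary, the two-state structure, and all probabilities, is a hypothetical stand-in for illustration, not values from the paper:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    states = range(len(start))
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    loglik = 0.0
    for o in obs[1:]:
        norm = sum(alpha)            # scale to avoid underflow
        loglik += math.log(norm)
        alpha = [a / norm for a in alpha]
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return loglik + math.log(sum(alpha))

# Hypothetical cue vocabulary: 0=face touch, 1=arms crossed, 2=lean back, 3=neutral.
# Two hand-set models share emissions but differ in dynamics:
# state 0 = "relaxed" (mostly neutral), state 1 = "fidgety" (mostly cues).
EMIT = [[0.05, 0.05, 0.05, 0.85],
        [0.30, 0.30, 0.30, 0.10]]
HIGH_TRUST = ([0.9, 0.1], [[0.9, 0.1], [0.5, 0.5]], EMIT)
LOW_TRUST = ([0.3, 0.7], [[0.6, 0.4], [0.2, 0.8]], EMIT)

def classify(obs):
    """Label a cue sequence by whichever model explains it better."""
    hi = forward_loglik(obs, *HIGH_TRUST)
    lo = forward_loglik(obs, *LOW_TRUST)
    return "high" if hi > lo else "low"
```

A cue-heavy sequence such as `[0, 1, 2, 1, 0, 2]` scores higher under the low-trust model, while a mostly neutral sequence favors the high-trust one; in the actual work the structure is learned from data and the sequence-based temporal features are derived from it rather than hand-set as here.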

  7. Computationally modeling interpersonal trust

    PubMed Central

    Lee, Jin Joo; Knox, W. Bradley; Wormwood, Jolie B.; Breazeal, Cynthia; DeSteno, David

    2013-01-01

    We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust. PMID:24363649

  8. Computational models of planning.

    PubMed

    Geffner, Hector

    2013-07-01

    The selection of the action to do next is one of the central problems faced by autonomous agents. Natural and artificial systems address this problem in various ways: action responses can be hardwired, they can be learned, or they can be computed from a model of the situation, the actions, and the goals. Planning is the model-based approach to action selection and a fundamental ingredient of intelligent behavior in both humans and machines. Planning, however, is computationally hard as the consideration of all possible courses of action is not computationally feasible. The problem has been addressed by research in Artificial Intelligence that in recent years has uncovered simple but powerful computational principles that make planning feasible. The principles take the form of domain-independent methods for computing heuristics or appraisals that enable the effective generation of goal-directed behavior even over huge spaces. In this paper, we look at several planning models, at methods that have been shown to scale up to large problems, and at what these methods may suggest about the human mind. WIREs Cogn Sci 2013, 4:341-356. doi: 10.1002/wcs.1233 The authors have declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. Copyright © 2013 John Wiley & Sons, Ltd.
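
One family of domain-independent heuristics the abstract alludes to, the additive delete-relaxation estimate often called h_add, can be sketched on a toy STRIPS-style problem. The three-action key/door domain below is purely illustrative, not taken from the article:

```python
import heapq
import itertools

def h_add(state, goal, actions):
    """Additive delete-relaxation heuristic: ignore delete lists,
    propagate a cheapest-cost estimate to every reachable atom,
    and sum the estimates of the goal atoms."""
    cost = {atom: 0 for atom in state}
    changed = True
    while changed:
        changed = False
        for _, pre, add, _ in actions:
            if all(p in cost for p in pre):
                c = 1 + sum(cost[p] for p in pre)
                for atom in add:
                    if cost.get(atom, float("inf")) > c:
                        cost[atom] = c
                        changed = True
    return sum(cost.get(g, float("inf")) for g in goal)

def greedy_plan(init, goal, actions):
    """Greedy best-first search over the real problem, guided by h_add."""
    tie = itertools.count()              # tiebreaker so the heap never compares states
    frontier = [(h_add(init, goal, actions), next(tie), init, ())]
    seen = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if goal <= state:
            return list(path)
        if state in seen:
            continue
        seen.add(state)
        for name, pre, add, dele in actions:
            if pre <= state:
                nxt = (state - dele) | add
                heapq.heappush(frontier, (h_add(nxt, goal, actions),
                                          next(tie), nxt, path + (name,)))
    return None

# Illustrative domain: fetch a key, open a door, walk through it.
ACTIONS = [
    ("pick_key", frozenset(), frozenset({"have_key"}), frozenset()),
    ("open_door", frozenset({"have_key"}), frozenset({"door_open"}), frozenset()),
    ("enter", frozenset({"door_open"}), frozenset({"at_goal"}), frozenset()),
]
```

`greedy_plan(frozenset(), frozenset({"at_goal"}), ACTIONS)` returns the three-step plan; on this tiny domain the heuristic simply counts the remaining causal chain, which is exactly the "appraisal" role the abstract describes.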

  9. Computer Modeling Of Atomization

    NASA Technical Reports Server (NTRS)

    Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.

    1994-01-01

    Improved mathematical models based on fundamental principles of conservation of mass, energy, and momentum were developed for use in computer simulation of atomization of jets of liquid fuel in rocket engines. The models are also used to study atomization in terrestrial applications and prove especially useful in designing improved industrial sprays - humidifier water sprays, chemical process sprays, and sprays of molten metal. Because the present improved mathematical models are based on first principles, they are minimally dependent on empirical correlations and better able to represent the hot-flow conditions that prevail in rocket engines and are too severe to be accessible for detailed experimentation.

  10. Understanding student computational thinking with computational modeling

    NASA Astrophysics Data System (ADS)

    Aiken, John M.; Caballero, Marcos D.; Douglas, Scott S.; Burk, John B.; Scanlon, Erin M.; Thoms, Brian D.; Schatz, Michael F.

    2013-01-01

    Recently, the National Research Council's framework for next generation science standards highlighted "computational thinking" as one of its "fundamental practices". Ninth-grade students taking a physics course that employed Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, a written essay, and a series of think-aloud interviews, in which the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than observational) terms tended to have more success in the programming exercise.
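
Stripped of VPython's 3D visualization, the kind of model the students wrote is a short time-stepping loop; the initial conditions below are illustrative, not the assignment's:

```python
# Euler-Cromer time-stepping for a thrown ball (no air resistance).
dt = 0.01            # time step, s
g = -9.8             # gravitational acceleration, m/s^2
x, y = 0.0, 1.0      # initial position, m
vx, vy = 30.0, 10.0  # initial velocity, m/s
t = 0.0
while y > 0.0:
    vy += g * dt     # update velocity from the net force...
    x += vx * dt     # ...then update position from the new velocity
    y += vy * dt
    t += dt
```

The causal ordering inside the loop (force changes velocity, velocity changes position) mirrors the force-and-motion reasoning that the essay and interview assessments probed.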

  11. Computer modeling of polymers

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1988-01-01

    A Polymer Molecular Analysis Display System (p-MADS) was developed for computer modeling of polymers. This method of modeling allows for the theoretical calculation of molecular properties such as equilibrium geometries, conformational energies, heats of formation, crystal packing arrangements, and other properties. Furthermore, p-MADS has the following capabilities: constructing molecules from internal coordinates (bond lengths, angles, and dihedral angles), Cartesian coordinates (such as X-ray structures), or stick drawings; manipulating molecules using graphics and making hard-copy representations of the molecules on a graphics printer; and performing geometry optimization calculations on molecules using the methods of molecular mechanics or molecular orbital theory.

  12. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma with applications for the Microwave Instrument for Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width) are calculated.
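
The random angular sampling at the heart of such a Monte Carlo radiative-transfer code can be illustrated with a toy calculation: estimating, by direction sampling alone, the fraction of the sky that a sphere of radius R covers as seen from radius r, a quantity with a closed form to check against. The geometry is illustrative and is not MIRO's actual sampling scheme:

```python
import math
import random

def dilution_mc(r, R, n=200_000, seed=1):
    """Monte Carlo estimate of the geometric dilution factor W: sample
    isotropic directions (uniform in cos(theta)) and count those that
    intersect a sphere of radius R centered at the origin, as seen
    from a point at radius r."""
    rng = random.Random(seed)
    cos_max = math.sqrt(1.0 - (R / r) ** 2)   # cone half-angle of the sphere
    hits = sum(1 for _ in range(n) if rng.uniform(-1.0, 1.0) > cos_max)
    return hits / n

def dilution_exact(r, R):
    """Closed-form dilution factor for the same geometry."""
    return 0.5 * (1.0 - math.sqrt(1.0 - (R / r) ** 2))
```

For r = 10 and R = 1 the exact value is about 0.0025, and the sampled estimate converges to it as 1/sqrt(n); the real code applies the same idea jointly over angle, position, and frequency to build up the local average intensity.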

  13. Computer modeling of photodegradation

    NASA Technical Reports Server (NTRS)

    Guillet, J.

    1986-01-01

    A computer program to simulate the photodegradation of materials exposed to terrestrial weathering environments is being developed. Input parameters include the solar spectrum, the daily levels and variations of temperature and relative humidity, and materials such as EVA. A brief description of the program, its operating principles, and how it works is given first. The presentation then focuses on recent work simulating aging in a normal, terrestrial day-night cycle. This is significant, as almost all accelerated aging schemes maintain constant light illumination without a dark cycle, and this may be a critical factor missing from such schemes. For outdoor aging, the computer model indicates that the nightly dark cycle has a dramatic influence on the chemistry of photothermal degradation, and hints that a dark cycle may be needed in an accelerated aging scheme.
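
The qualitative effect can be caricatured with a toy rate integrator: reactive species are generated while the sample is lit and decay in the dark, so a 12-hour dark cycle accumulates less damage than continuous illumination at the same peak intensity. All rate constants below are invented for illustration and bear no relation to the program's actual chemistry:

```python
def degrade(hours, light, k_photo=0.01, k_dark=0.05, dt=0.1):
    """Toy photothermal degradation integrator (hypothetical rates).
    light(t_hours) -> relative irradiance in [0, 1]. Radicals are
    produced under illumination and decay at all times; accumulated
    damage is taken proportional to radical concentration."""
    radicals, damage, t = 0.0, 0.0, 0.0
    while t < hours:
        production = k_photo * light(t)
        radicals += (production - k_dark * radicals) * dt
        damage += radicals * dt
        t += dt
    return damage

# Continuous illumination vs a 12 h on / 12 h off diurnal cycle, 10 days.
continuous = degrade(240.0, lambda t: 1.0)
diurnal = degrade(240.0, lambda t: 1.0 if (t % 24.0) < 12.0 else 0.0)
```

Whether the dark-period chemistry merely pauses degradation or actively changes its pathway, as the model above in the abstract suggests, is exactly the question that makes the day-night cycle worth simulating.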

  14. Computer cast blast modelling

    SciTech Connect

    Chung, S.; McGill, M.; Preece, D.S.

    1994-07-01

    Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. The more overburden removed by explosives, the less blasted material there is left to be transported with mechanical equipment such as draglines and trucks. In order to optimize the percentage of rock that is cast, a higher powder factor than normal is required, plus an initiation technique designed to produce a much greater degree of horizontal muck movement. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements that interact with the flow of explosive gases and uses explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties, loading geometry, and explosives equation-of-state parameters. It enables the user to have a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, in-flight collisions of blocks are not considered and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, DMC can perform a blast simulation in 0.5 hours on the SUN SPARCstation 10-41, while the new SABREX 3.5 produces results for a cast blast in ten seconds on a 486 PC. Predicted percentages of cast and face velocities from both computer codes compare well with measured results from a full-scale cast blast.

  15. Computer modeling of Epilepsy

    PubMed Central

    Lytton, William W.

    2009-01-01

    Epilepsy is a complex set of disorders that can involve many areas of cortex as well as underlying deep brain systems. The myriad manifestations of seizures, as varied as déjà vu and olfactory hallucination, can thereby give researchers insights into regional functions and relations. Epilepsy is also complex genetically and pathophysiologically, involving microscopic (ion channels, synaptic proteins), macroscopic (brain trauma and rewiring) and intermediate changes in a complex interplay of causality. It has long been recognized that computer modeling will be required to disentangle causality, to better understand seizure spread and to understand and eventually predict treatment efficacy. Over the past few years, substantial progress has been made modeling epilepsy at levels ranging from the molecular to the socioeconomic. We review these efforts and connect them to the medical goals of understanding and treating this disorder. PMID:18594562

  16. Computational Modeling Program

    NASA Technical Reports Server (NTRS)

    Govindan, T. R.; Davis, Robert J.

    1998-01-01

    An Integrated Product Team (IPT) has been formed at NASA Ames Research Center with objectives to investigate devices and processes suitable for meeting NASA requirements for ultrahigh performance computers, fast and low-power devices, and high-temperature wide-bandgap materials. These devices may ultimately be of sub-100 nm feature size. Processes and equipment must meet the stringent demands posed by the fabrication of such small devices. Until now, the reactors for Chemical Vapor Deposition (CVD) and plasma processes have been designed by trial-and-error procedures. Further, once a reactor is in place, optimum processing parameters are found through expensive and time-consuming experimentation. If reliable models were available that describe the processes and the operation of the reactors, that chore would be reduced to a routine, cost-effective task. The goal is to develop such a design tool, validate it using available data from current-generation processes and reactors, and then use it to explore avenues for meeting NASA needs for ultrasmall device fabrication. Under the present grant, ARL/Penn State, along with other IPT members, has been developing models and computer code to meet IPT goals. Some of the accomplishments achieved during the first year of the grant are described in this report.

  17. Computational modelling of polymers

    NASA Technical Reports Server (NTRS)

    Celarier, Edward A.

    1991-01-01

    Polymeric materials and polymer/graphite composites show a very diverse range of material properties, many of which make them attractive candidates for a variety of high-performance engineering applications. Their properties are ultimately determined largely by their chemical structure and the conditions under which they are processed. It is the aim of computational chemistry to be able to simulate candidate polymers on a computer and determine what their likely material properties will be. A number of commercially available software packages purport to predict the material properties of samples, given the chemical structures of their constituent molecules. One such system, Cerius, has been in use at LaRC. It comprises a number of modules, each of which performs a different kind of calculation on a molecule in the program's workspace. Of particular interest is evaluating the suitability of this program to aid in the study of microcrystalline polymeric materials. One of the first model systems examined was benzophenone. The results of this investigation are discussed.

  18. Models of optical quantum computing

    NASA Astrophysics Data System (ADS)

    Krovi, Hari

    2017-03-01

    I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.

  19. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This document contains presentations given at Workshop on Computational Turbulence Modeling held 15-16 Sep. 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.

  20. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...
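
The classic hand-computation method those early programs automated, Hardy Cross loop balancing, still fits in a few lines. It is shown here for the simplest possible network, two parallel pipes between the same pair of nodes, with an assumed quadratic head-loss law h = rQ² rather than EPANET's actual head-loss formulations:

```python
def hardy_cross(r1, r2, q_total, iters=20):
    """Hardy Cross loop correction for two parallel pipes carrying
    q_total between the same two nodes. Head loss model: h = r * Q**2.
    The symmetric correction +dq / -dq preserves flow conservation
    at the nodes on every iteration."""
    q1 = q2 = q_total / 2.0
    for _ in range(iters):
        imbalance = r1 * q1 ** 2 - r2 * q2 ** 2          # net head loss around the loop
        dq = -imbalance / (2.0 * (r1 * q1 + r2 * q2))    # Newton-style correction
        q1 += dq
        q2 -= dq
    return q1, q2
```

With r1 = 1, r2 = 4 and a total flow of 10, the flows converge to Q1 = 20/3 and Q2 = 10/3, the split that equalizes head loss across the two pipes.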

  2. Computational model for interfaces

    NASA Astrophysics Data System (ADS)

    Glimm, J.; McBryan, O. A.

    1985-06-01

    Decompositions of the plane into disjoint components separated by curves occur frequently. We describe a package of subroutines which provides facilities for defining, building and modifying such decompositions and for efficiently solving various point and area location problems. Beyond the point that the specification of this package may be useful to others, we reach the broader conclusion that well designed data structures and support routines allow the use of more conceptual or nonnumerical portions of mathematics in the computational process, thereby extending greatly the potential scope of the use of computers in scientific problem solving. Ideas from conceptual mathematics, symbolic computation and computer science can be utilized within the framework of scientific computing and have an important role to play in that area.
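
A representative point-location primitive from such a package is ray-casting point-in-polygon, which decides which component of a planar decomposition a query point lies in. A minimal version, assuming a simple polygon with vertices listed in order (not the package's actual interface):

```python
def point_in_polygon(pt, poly):
    """Ray-casting point location: cast a horizontal ray from pt and
    count crossings with the polygon's edges; an odd count means the
    point is inside. poly is a list of (x, y) vertices in order."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                          # crossing lies to the right
                inside = not inside
    return inside
```

For the unit-style square `[(0, 0), (2, 0), (2, 2), (0, 2)]`, the point (1, 1) is inside and (3, 1) is outside; a full interface package would layer curve bookkeeping and efficient area queries on top of primitives like this.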

  3. Mathematical Modeling and Computational Thinking

    ERIC Educational Resources Information Center

    Sanford, John F.; Naidu, Jaideep T.

    2017-01-01

    The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable assistance in learning logical thinking but of less assistance when learning problem-solving skills. The paper is third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…

  4. Computation models of discourse

    SciTech Connect

    Brady, M.; Berwick, R.C.

    1983-01-01

    This book presents papers on artificial intelligence and natural language. Topics considered include recognizing intentions from natural language utterances, cooperative responses from a portable natural language database query system, natural language generation as a computational problem, focusing in the comprehension of definite anaphora, and factors in forming discourse-dependent descriptions.

  5. Computational Models for Neuromuscular Function

    PubMed Central

    Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.

    2011-01-01

    Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779

  6. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  7. Computational models of syntactic acquisition.

    PubMed

    Yang, Charles

    2012-03-01

    The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154 For further resources related to this article, please visit the WIREs website.

  8. Computational Modeling of Space Physiology

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Griffin, Devon W.

    2016-01-01

    The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long-duration spaceflight. Over the past decade, DAP developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long-duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short- and long-duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the models.

  9. Toolkit computational mesh conceptual model.

    SciTech Connect

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  10. Fast Computation of CMH Model

    NASA Technical Reports Server (NTRS)

    Patel, Umesh D.; DellaTorre, Edward; Day, John H. (Technical Monitor)

    2001-01-01

    A fast differential equation approach for the DOK model has been extended to the CMH model. A cobweb technique for calculating the CMH model is also presented. The two techniques are contrasted from the point of view of flexibility and computation time.

  11. FNAS computational modeling

    NASA Technical Reports Server (NTRS)

    Franz, J. R.

    1993-01-01

    Numerical calculations of the electronic properties of liquid II-VI semiconductors, particularly CdTe and ZnTe, were performed. The measured conductivity of these liquid alloys was modeled by assuming that the dominant temperature effect is the increase in the number of dangling bonds with increasing temperature. For low to moderate values of electron correlation, the calculated conductivity as a function of dangling bond concentration closely follows the measured conductivity as a function of temperature. Both the temperature dependence of the chemical potential and the thermal smearing in the region of the Fermi surface have a large effect on calculated values of conductivity.

  12. Computer Security Models

    DTIC Science & Technology

    1984-09-01

    ... database data model. SUBJECTS: people, devices, processes, etc. OBJECTS: data stored in the database and ... uniquely identify data granules. It is not necessarily stored in the database with the data granule, but is either a tuple identifier (tid) in relational ... the database management system. The user who first stores the data is allowed to retrieve or modify that data in case there are inputting errors.

  13. Computational modeling of properties

    NASA Technical Reports Server (NTRS)

    Franz, Judy R.

    1994-01-01

A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strong-scattering regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as functions of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity of liquid II-VI semiconductors in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.
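The "thermal smearing" effect mentioned in this abstract is conventionally expressed as a Kubo-Greenwood-style integral: the dc conductivity is the energy-dependent conductivity weighted by the derivative of the Fermi-Dirac distribution, sigma_dc(T) = integral of sigma(E) * (-df/dE) dE. The following is a generic numerical sketch of that integral, not the paper's actual code; the function names and parameter values are illustrative.

```python
import math

def fermi_weight(E, mu, kT):
    """-df/dE for the Fermi-Dirac distribution: the thermal-smearing kernel,
    peaked at the chemical potential mu with width of order kT."""
    x = (E - mu) / kT
    if abs(x) > 50:               # kernel is negligible far from mu; avoid overflow
        return 0.0
    return math.exp(x) / (kT * (1.0 + math.exp(x)) ** 2)

def dc_conductivity(sigma_E, mu, kT, E_min, E_max, n=2000):
    """sigma_dc(T) = integral sigma(E) * (-df/dE) dE, by the trapezoidal rule.
    sigma_E is any callable giving the energy-dependent conductivity."""
    h = (E_max - E_min) / n
    total = 0.0
    for i in range(n + 1):
        E = E_min + i * h
        w = 1.0 if 0 < i < n else 0.5   # trapezoidal end-point weights
        total += w * sigma_E(E) * fermi_weight(E, mu, kT)
    return total * h
```

As a sanity check, a constant sigma(E) = 1 integrates to approximately 1, since the smearing kernel is normalized; a sigma(E) that vanishes in a gap around mu produces the activated, strongly temperature-dependent conductivity the abstract describes.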

  14. Component Breakout Computer Model

    DTIC Science & Technology

    1987-04-29

Weapon Systems: A Policy Analysis." The Rand Graduate Institute, November 1983. Boger, D. "Statistical Models for Estimating Overhead Costs." M.S... [remainder of the indexed excerpt is an unrecoverable fragment of a BASIC program listing]

  15. Ch. 33 Modeling: Computational Thermodynamics

    SciTech Connect

    Besmann, Theodore M

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  16. Efficient Computational Model of Hysteresis

    NASA Technical Reports Server (NTRS)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
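The structure described above, a parallel sum of weighted backlash (play) operators with a zero-width element for the linear part, is a Prandtl-Ishlinskii-type construction and can be sketched in a few lines. This is a hypothetical illustration of that structure, not the article's implementation; the clamp formulation and function names are assumptions.

```python
def backlash(u_seq, width, y0=0.0):
    """Play (backlash) operator: the output follows the input only once the
    input has moved past the deadband half-width; otherwise it holds."""
    y, out = y0, []
    for u in u_seq:
        y = max(u - width, min(u + width, y))
        out.append(y)
    return out

def hysteresis(u_seq, widths, gains):
    """Electrically parallel combination of weighted backlash elements.
    widths[0] == 0 gives the linear component of the input/output relation."""
    branches = [backlash(u_seq, w) for w in widths]
    return [sum(g * b[k] for g, b in zip(gains, branches))
            for k in range(len(u_seq))]
```

In practice the widths and gains would be fitted to measured displacement-versus-voltage data, as the abstract notes; the resulting curve is piecewise linear beyond the deadband limits.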

  17. Computational Modeling of Multiphase Reactors.

    PubMed

    Joshi, J B; Nandakumar, K

    2015-01-01

Multiphase reactors are very common in the chemical industry, and numerous review articles exist that focus on types of reactors such as bubble columns, trickle beds, fluid catalytic beds, etc. Currently, there is a high degree of empiricism in the design process of such reactors owing to the complexity of coupled flow and reaction mechanisms. Hence, we focus on synthesizing recent advances in computational and experimental techniques that will enable future designs of such reactors in a more rational manner, by exploring a large design space with high-fidelity models (computational fluid dynamics and computational chemistry models) that are validated with high-fidelity measurements (tomography and other detailed spatial measurements) to provide a high degree of rigor. Understanding the spatial distributions of dispersed phases and their interactions during scale-up is a key challenge that was traditionally addressed through pilot-scale experiments but can now be addressed through advanced modeling.

  18. Computational models of adult neurogenesis

    NASA Astrophysics Data System (ADS)

    Cecchi, Guillermo A.; Magnasco, Marcelo O.

    2005-10-01

Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models where new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas such as the olfactory bulb and the dentate gyrus.

  19. Computational Modeling Method for Superalloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

    1997-01-01

Computer modeling based on theoretical quantum techniques has been largely inefficient because of limitations of the methods or the computational demands of such calculations, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys. Together with experimental results from previous and current research that validate its use for large-scale simulations, the method provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing experimental work at great savings of cost and time.

  20. Climate Modeling Computing Needs Assessment

    NASA Astrophysics Data System (ADS)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  1. The Egg Series: Using Simple Computer Models

    ERIC Educational Resources Information Center

    Statz, Joyce; Miller, Leland

    1978-01-01

    A model computer which uses a common egg carton to represent computer memory is suggested for giving students a basic understanding of a computer's organization. Several programs using the model are included. (MN)

  2. Computational Modeling of Mitochondrial Function

    PubMed Central

    Cortassa, Sonia; Aon, Miguel A.

    2012-01-01

The advent of techniques with the ability to scan massive changes in cellular makeup (genomics, proteomics, etc.) has revealed the compelling need for analytical methods to interpret and make sense of those changes. Computational models built on a sound physico-chemical, mechanistic basis are indispensable for integrating, interpreting, and simulating high-throughput experimental data. Another powerful role of computational models is predicting new behavior, provided they are adequately validated. Mitochondrial energy transduction has traditionally been studied with thermodynamic models. More recently, kinetic or thermo-kinetic models have been proposed, leading the path toward an understanding of the control and regulation of mitochondrial energy metabolism and its interaction with the cytoplasm and other compartments. In this work, we outline, step by step, the methods that should be followed to build a computational model of mitochondrial energetics, either in isolation or integrated into a network of cellular processes. Depending on the question addressed by the modeler, the methodology explained herein can be applied with different levels of detail, from the mitochondrial energy-producing machinery in a network of cellular processes to the dynamics of a single enzyme during its catalytic cycle. PMID:22057575

  3. Computational modelling approaches to vaccinology.

    PubMed

    Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo

    2015-02-01

Excepting the peripheral and central nervous systems, the immune system is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, from the molecular level to whole organisms and even populations.

  4. Neurometric Modeling: Computational Modeling of Individual Brains

    DTIC Science & Technology

    2011-05-16

Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. 15. SUBJECT TERMS: Neural networks, computational neuroscience, fMRI ... obtained using functional MRI. Algorithmic processing of these measurements can exploit a variety of statistical machine learning methods to synthesize a new kind of neuro-cognitive model, which we call neurometric models. These executable models could be

  5. Computational Modeling for Bedside Application

    PubMed Central

    Kerckhoffs, Roy C.P.; Narayan, Sanjiv M.; Omens, Jeffrey H.; Mulligan, Lawrence J.; McCulloch, Andrew D.

    2008-01-01

    With growing computer power, novel diagnostic and therapeutic medical technologies, coupled with an increasing knowledge of pathophysiology from gene to organ systems, it is increasingly feasible to apply multi-scale patient-specific modeling based on proven disease mechanisms to guide and predict the response to therapy in many aspects of medicine. This is an exciting and relatively new approach, for which efficient methods and computational tools are of the utmost importance. Already, investigators have designed patient-specific models in almost all areas of human physiology. Not only will these models be useful on a large scale in the clinic to predict and optimize the outcome from surgery and non-interventional therapy, but they will also provide pathophysiologic insights from cell to tissue to organ system, and therefore help to understand why specific interventions succeed or fail. PMID:18598988

  6. Computational Model for Cell Morphodynamics

    NASA Astrophysics Data System (ADS)

    Shao, Danying; Rappel, Wouter-Jan; Levine, Herbert

    2010-09-01

    We develop a computational model, based on the phase-field method, for cell morphodynamics and apply it to fish keratocytes. Our model incorporates the membrane bending force and the surface tension and enforces a constant area. Furthermore, it implements a cross-linked actin filament field and an actin bundle field that are responsible for the protrusion and retraction forces, respectively. We show that our model predicts steady state cell shapes with a wide range of aspect ratios, depending on system parameters. Furthermore, we find that the dependence of the cell speed on this aspect ratio matches experimentally observed data.
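In the phase-field method, the cell is represented by a smooth field phi that is 1 inside the cell and 0 outside, so the boundary never has to be tracked explicitly. The full keratocyte model couples this field to bending, tension, area conservation, and actin fields; the sketch below shows only the core phase-field ingredient, a 1D Allen-Cahn relaxation with a double-well potential, with illustrative parameters not taken from the paper.

```python
def step(phi, dx, dt, eps):
    """One explicit Euler step of the 1D Allen-Cahn equation
         d(phi)/dt = eps^2 * phi'' - W'(phi),
    with double-well potential W(phi) = phi^2 (1 - phi)^2, whose minima at
    phi = 0 and phi = 1 represent 'outside' and 'inside' the cell."""
    p = [phi[0]] + list(phi) + [phi[-1]]          # Neumann (no-flux) boundaries
    new = []
    for i in range(1, len(p) - 1):
        lap = (p[i - 1] - 2 * p[i] + p[i + 1]) / dx ** 2
        dW = 2 * p[i] * (1 - p[i]) * (1 - 2 * p[i])   # W'(phi)
        new.append(p[i] + dt * (eps ** 2 * lap - dW))
    return new
```

Starting from a sharp step, repeated application relaxes the profile to a smooth interface of width set by eps, while the bulk values stay pinned at the two wells; in the full model the protrusion and retraction forces then move this interface.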

  7. Modelling global computations with KLAIM.

    PubMed

    De Nicola, Rocco; Loreti, Michele

    2008-10-28

A new area of research, known as Global Computing, is by now well established. It aims at defining new models of computation based on code and data mobility over wide-area networks with highly dynamic topologies, and at providing infrastructures to support coordination and control of components originating from different, possibly untrusted, fault-prone, malicious or selfish sources. In this paper, we present our contribution to the field of Global Computing, which is centred on the Kernel Language for Agents Interaction and Mobility (KLAIM). KLAIM is an experimental language specifically designed to program distributed systems consisting of several mobile components that interact through multiple distributed tuple spaces. We present some of the key notions of the language and discuss how its formal semantics can be exploited to reason about qualitative and quantitative aspects of the specified systems.

  8. Visualizing ultrasound through computational modeling

    NASA Technical Reports Server (NTRS)

    Guo, Theresa W.

    2004-01-01

The Doppler Ultrasound Hematocrit Project (DHP) hopes to find non-invasive methods of determining a person's blood characteristics. Because of the limits of microgravity and the space travel environment, it is important to find non-invasive methods of evaluating the health of persons in space. Presently, there is no well-developed method of determining blood composition non-invasively. This project hopes to use ultrasound and Doppler signals to evaluate hematocrit, the percentage by volume of red blood cells within whole blood. These non-invasive techniques may also be developed for use on Earth with trauma patients, for whom invasive measures might be detrimental. Computational modeling is a useful tool for collecting preliminary information and predictions for the laboratory research. We hope to find and develop a computer program that will be able to simulate the ultrasound signals the project will work with. Simulated models of test conditions will more easily show what might be expected from laboratory results and thus help the research group make informed decisions before and during experimentation. Several existing Matlab-based computer programs, designed to interpret and simulate ultrasound signals, are available. These programs will be evaluated to find which is best suited to the project's needs. The criteria of evaluation are: 1) the program must be able to specify transducer properties and transmitting and receiving signals; 2) the program must be able to simulate ultrasound signals through different attenuating media; 3) the program must be able to process moving targets in order to simulate the Doppler effects associated with blood flow; and 4) the program should be user friendly and adaptable to various models. After a computer program is chosen, two simulation models will be constructed. These models will simulate and interpret an RF data signal and a Doppler signal.

  9. Parallel computing in enterprise modeling.

    SciTech Connect

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation that includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, where things or people are organized, or self-organize, to produce a solution. Entity-based problems never have an a priori ergodic principle that would greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. That said, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale required for realistic results. With the recent upheavals in the financial markets and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  10. Cosmic logic: a computational model

    SciTech Connect

    Vanchurin, Vitaly

    2016-02-01

We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for the analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  11. Minimal Models of Multidimensional Computations

    PubMed Central

    Fitzgerald, Jeffrey D.; Sincich, Lawrence C.; Sharpee, Tatyana O.

    2011-01-01

    The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs. PMID:21455284
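For the binary-output case the abstract describes, the maximum noise entropy model is a logistic function whose argument collects the constrained input/output moments; with second-order constraints the argument is quadratic in the stimulus. A minimal sketch of that functional form follows; the function name, parameters, and example values are illustrative, not from the paper.

```python
import math

def second_order_logistic(x, a, b, J):
    """Maximum-noise-entropy response P(spike | x) for a binary output with
    constraints on first- and second-order stimulus/response moments:
    a logistic function of a quadratic form in the stimulus x.
    a: bias; b: linear coefficients; J: symmetric quadratic coefficients."""
    n = len(x)
    lin = sum(b[i] * x[i] for i in range(n))
    quad = sum(J[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return 1.0 / (1.0 + math.exp(-(a + lin + quad)))
```

With all coefficients zero the response is 0.5 (maximally noisy, as expected of a maximum-entropy model with no constraints); a nonzero J makes the response sensitive to stimulus energy along particular dimensions, which is how such a model can capture the two-dimensional sensitivity reported for retina and thalamus.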

  12. Computational Models of Rock Failure

    NASA Astrophysics Data System (ADS)

    May, Dave A.; Spiegelman, Marc

    2017-04-01

    Practitioners in computational geodynamics, as per many other branches of applied science, typically do not analyse the underlying PDE's being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDE's, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, highlight some of the dangers / risks of interpreting numerical solutions without understanding the properties of the PDE we solved, and lastly to stimulate discussions to develop an improved computational model of

  13. Computational Models of Cognitive Control

    PubMed Central

    O’Reilly, Randall C.; Herd, Seth A.; Pauli, Wolfgang M.

    2010-01-01

    Cognitive control refers to the ability to perform task-relevant processing in the face of other distractions or other forms of interference, in the absence of strong environmental support. It depends on the integrity of the prefrontal cortex and associated biological structures (e.g., the basal ganglia). Computational models have played an influential role in developing our understanding of this system, and we review current developments in three major areas: dynamic gating of prefrontal representations, hierarchies in the prefrontal cortex, and reward, motivation, and goal-related processing in prefrontal cortex. Models in these and other areas are advancing the field further forward. PMID:20185294

  14. Transparency of Computational Intelligence Models

    NASA Astrophysics Data System (ADS)

    Owotoki, Peter; Mayer-Lindenberg, Friedrich

This paper introduces the notion of transparency for computational intelligence (CI) models. Transparency reveals to end users the underlying reasoning process of the agent embodying CI models. This is of great benefit in applications (e.g. data mining, entertainment, and personal robotics) with humans as end users, because it increases their trust in the decisions of the agent and their acceptance of its results. Our integrated approach, wherein rules are just one among several transparency factors (TF), differs from previous related efforts, which have focused mostly on the generation of comprehensible rules as explanations. Other TF include a degree-of-confidence measure and visualization of principal features. The transparency quotient is introduced as a measure of the transparency of models based on these factors. The transparency-enabled generalized exemplar model has been developed to demonstrate the TF and transparency concepts introduced in this paper.

  15. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Shabbir, A. (Compiler); Shih, T.-H. (Compiler); Povinelli, L. A. (Compiler)

    1994-01-01

    The purpose of this meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Various turbulence models have been developed and applied to different turbulent flows over the past several decades and it is becoming more and more urgent to assess their performance in various complex situations. In order to help users in selecting and implementing appropriate models in their engineering calculations, it is important to identify the capabilities as well as the deficiencies of these models. This also benefits turbulence modelers by permitting them to further improve upon the existing models. This workshop was designed for exchanging ideas and enhancing collaboration between different groups in the Lewis community who are using turbulence models in propulsion related CFD. In this respect this workshop will help the Lewis goal of excelling in propulsion related research. This meeting had seven sessions for presentations and one panel discussion over a period of two days. Each presentation session was assigned to one or two branches (or groups) to present their turbulence related research work. Each group was asked to address at least the following points: current status of turbulence model applications and developments in the research; progress and existing problems; and requests about turbulence modeling. The panel discussion session was designed for organizing committee members to answer management and technical questions from the audience and to make concluding remarks.

  16. MODEL IDENTIFICATION AND COMPUTER ALGEBRA.

    PubMed

    Bollen, Kenneth A; Bauldry, Shawn

    2010-10-07

    Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods.
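As a concrete illustration of the CAS approach, consider a one-factor model with three indicators and the first loading fixed to 1 for scaling: solving the implied covariance (moment) equations symbolically yields a unique explicit solution for each parameter, demonstrating global identification. This sketch uses the SymPy CAS; the model and symbol names are illustrative and not taken from the paper.

```python
import sympy as sp

# Free parameters: two loadings and the factor variance (first loading fixed to 1)
l2, l3, psi = sp.symbols('lambda2 lambda3 psi', positive=True)
# Observable covariances among the three indicators
s12, s13, s23 = sp.symbols('sigma12 sigma13 sigma23', positive=True)

# Covariance equations implied by the one-factor measurement model:
#   cov(x1, x2) = lambda2 * psi,  cov(x1, x3) = lambda3 * psi,
#   cov(x2, x3) = lambda2 * lambda3 * psi
eqs = [sp.Eq(s12, l2 * psi),
       sp.Eq(s13, l3 * psi),
       sp.Eq(s23, l2 * l3 * psi)]

# A single explicit solution in terms of observable covariances
# is a symbolic proof of global identification of these parameters.
sol = sp.solve(eqs, [l2, l3, psi], dict=True)
```

The same workflow (write the model-implied moments, solve symbolically, inspect whether the solution is unique) extends to local identification via the rank of the symbolic Jacobian of the moment equations, as the abstract outlines.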

  17. Computational models and resource allocation for supercomputers

    NASA Technical Reports Server (NTRS)

    Mauney, Jon; Agrawal, Dharma P.; Harcourt, Edwin A.; Choe, Young K.; Kim, Sukil

    1989-01-01

    There are several different architectures used in supercomputers, with differing computational models. These different models present a variety of resource allocation problems that must be solved. The computational needs of a program must be cast in terms of the computational model supported by the supercomputer, and this must be done in a way that makes effective use of the machine's resources. This is the resource allocation problem. The computational models of available supercomputers and the associated resource allocation techniques are surveyed. It is shown that many problems and solutions appear repeatedly in very different computing environments. Some case studies are presented, showing concrete computational models and the allocation strategies used.

  18. Computer modeling of piezoresistive gauges

    SciTech Connect

    Nutt, G. L.; Hallquist, J. O.

    1981-08-07

A computer model of a piezoresistive gauge subject to shock loading is developed. The time-dependent two-dimensional response of the gauge is calculated. The stress and strain components of the gauge are determined assuming elastic-plastic material properties. The model is compared with experiment for four cases: an ytterbium foil gauge in a PMMA medium subjected to a 0.5 GPa plane shock wave, where the gauge is presented to the shock with its flat surface both parallel and perpendicular to the front, and a similar comparison for a manganin foil subjected to a 2.7 GPa shock. The signals are also compared with a calibration equation derived with the gauge and medium properties accounted for, but with the assumption that the gauge is in stress equilibrium with the shocked medium.

  19. Computer modeling of piezoresistive gauges

    SciTech Connect

    Nutt, G.L.; Hallquist, J.O.

    1981-08-07

A computer model of a piezoresistive gauge subject to shock loading is developed. The time-dependent two-dimensional response of the gauge is calculated. The stress and strain components of the gauge are determined assuming elastic-plastic material properties. The model is compared with experiment for four cases: an ytterbium foil gauge in a PMMA medium subjected to a 5.0 kbar plane shock wave, where the gauge is presented to the shock with its flat surface both parallel and perpendicular to the front. A similar comparison is made for a manganin foil subjected to a 27.5 kbar shock. The signals are also compared with a calibration equation derived with the gauge and medium properties accounted for, but with the assumption that the gauge is in stress equilibrium with the shocked medium.

  20. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  1. Computer models for economic and silvicultural decisions

    Treesearch

    Rosalie J. Ingram

    1989-01-01

    Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.

  2. Computational modeling of epithelial tissues.

    PubMed

    Smallwood, Rod

    2009-01-01

There is an extensive literature on the computational modeling of epithelial tissues at all levels from subcellular to whole tissue. This review concentrates on behavior at the individual cell to whole tissue level, and particularly on organizational aspects, and provides an indication of where information from other areas, such as the modeling of angiogenesis, is relevant. The skin, and the lining of all of the body cavities (lung, gut, cervix, bladder, etc.) are epithelial tissues, which in a topological sense are the boundary between inside and outside the body. They are thin sheets of cells (usually of the order of 0.5 mm thick) without extracellular matrix, have a relatively simple structure, and contain few types of cells. They have important barrier, secretory and transport functions, which are essential for the maintenance of life, so homeostasis and wound healing are important aspects of the behavior of epithelial tissues. Carcinomas originate in epithelial tissues. There are essentially two approaches to modeling tissues--to start at the level of the tissue (i.e., a length scale of the order of 1 mm) and develop generalized equations for behavior (a continuum approach); or to start at the level of the cell (i.e., a length scale of the order of 10 µm) and develop tissue behavior as an emergent property of cellular behavior (an individual-based approach). As will be seen, these are not mutually exclusive approaches, and they come in a variety of flavors.
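    The individual-based approach can be sketched in a few lines (assumptions mine, not a model from the literature reviewed): cells on a one-dimensional strip divide into adjacent empty sites, so tissue-level wound closure emerges from a purely local cell rule.

```python
import random

# Minimal individual-based sketch: 1 = cell, 0 = wounded (empty) site.
# Each step, a randomly chosen cell may divide into an empty neighbor;
# the loop runs until the wound has closed.
random.seed(1)
strip = [1] * 10 + [0] * 5 + [1] * 10

while 0 in strip:
    i = random.randrange(len(strip))
    if strip[i] == 1:                    # a cell may divide into an empty neighbor
        for j in (i - 1, i + 1):
            if 0 <= j < len(strip) and strip[j] == 0:
                strip[j] = 1             # daughter cell fills the empty site
                break

print(sum(strip))  # 25: the wound has closed and the strip is confluent
```

    Nothing in the rule refers to the wound as a whole; closure is an emergent, tissue-level property, which is the point of the individual-based approach.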

  3. Computational modeling of soot nucleation

    NASA Astrophysics Data System (ADS)

    Chung, Seung-Hyun

    Recent studies indicate that soot is the second most significant driver of climate change---behind CO2, but ahead of methane---and increased levels of soot particles in the air are linked to health hazards such as heart disease and lung cancer. Within the soot formation process, soot nucleation is the least understood step, and current experimental findings are still limited. This thesis presents computational modeling studies of the major pathways of the soot nucleation process. In this study, two regimes of soot nucleation---chemical growth and physical agglomeration---were evaluated and the results demonstrated that combustion conditions determine the relative importance of these two routes. Also, the dimerization process of polycyclic aromatic hydrocarbons, which has been regarded as one of the most important physical agglomeration processes in soot formation, was carefully examined with a new method for obtaining the nucleation rate using molecular dynamics simulation. The results indicate that the role of pyrene dimerization, which is the commonly accepted model, is expected to be highly dependent on various flame temperature conditions and may not be a key step in the soot nucleation process. An additional pathway, coronene dimerization in this case, needed to be included to improve the match with experimental data. The results of this thesis provide insight on the soot nucleation process and can be utilized to improve current soot formation models.

  4. Model dynamics for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank

    2017-08-01

    A model master equation suitable for quantum computing dynamics is presented. In an ideal quantum computer (QC), a system of qubits evolves in time unitarily and, by virtue of their entanglement, interfere quantum mechanically to solve otherwise intractable problems. In the real situation, a QC is subject to decoherence and attenuation effects due to interaction with an environment and with possible short-term random disturbances and gate deficiencies. The stability of a QC under such attacks is a key issue for the development of realistic devices. We assume that the influence of the environment can be incorporated by a master equation that includes unitary evolution with gates, supplemented by a Lindblad term. Lindblad operators of various types are explored; namely, steady, pulsed, gate friction, and measurement operators. In the master equation, we use the Lindblad term to describe short time intrusions by random Lindblad pulses. The phenomenological master equation is then extended to include a nonlinear Beretta term that describes the evolution of a closed system with increasing entropy. An external Bath environment is stipulated by a fixed temperature in two different ways. Here we explore the case of a simple one-qubit system in preparation for generalization to multi-qubit, qutrit and hybrid qubit-qutrit systems. This model master equation can be used to test the stability of memory and the efficacy of quantum gates. The properties of such hybrid master equations are explored, with emphasis on the role of thermal equilibrium and entropy constraints. Several significant properties of time-dependent qubit evolution are revealed by this simple study.
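    The Lindblad ingredient described above can be sketched numerically for one qubit (an assumed amplitude-damping operator and a simple Euler integrator; not the paper's full master equation with gate, Beretta, and bath terms).

```python
import numpy as np

# One-qubit sketch: unitary evolution under H = (omega/2)*sigma_z plus a
# single amplitude-damping Lindblad operator L = sqrt(gamma)*sigma_minus,
# Euler-integrated in time.
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus: |0><1|

omega, gamma, dt, steps = 1.0, 0.2, 1e-3, 20000  # evolve to t = 20
H = 0.5 * omega * sz
L = np.sqrt(gamma) * sm
Ld = L.conj().T

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state

for _ in range(steps):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    rho = rho + dt * (comm + diss)

print(np.trace(rho).real)  # ~1.0: the Lindblad form preserves the trace
print(rho[1, 1].real)      # ~exp(-gamma*t): excited population decays
```

    The dissipator is traceless by construction, so the density matrix keeps unit trace while the excited-state population relaxes toward the ground state, the basic decoherence behavior such model master equations are built to capture.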

  5. Microcephaly: computational and organotypic modeling of a ...

    EPA Pesticide Factsheets

This lecture discusses computational and organotypic models of microcephaly in an AOP Framework and ToxCast assays. Lecture slide presentation at UNC Chapel Hill for an Advanced Toxicology course on Computational Approaches to Developmental and Reproductive Toxicology, presenting computational and organotypic modeling of microcephaly, a complex human birth defect which is associated with the recent Zika virus outbreak.

  6. Computational modeling of membrane proteins

    PubMed Central

    Leman, Julia Koehler; Ulmschneider, Martin B.; Gray, Jeffrey J.

    2014-01-01

    The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the protein databank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, only highlighting how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefitted from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade. PMID:25355688

  7. Cupola Furnace Computer Process Model

    SciTech Connect

    Seymour Katz

    2004-12-31

The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to enable very easy scanning of a wide range of furnace operations. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  8. Disciplines, models, and computers: the path to computational quantum chemistry.

    PubMed

    Lenhard, Johannes

    2014-12-01

Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market extends far beyond the community of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  9. High performance computing for materials process modeling

    SciTech Connect

    Zacharia, T.; Bjerke, M.A.; Simunovic, S.

    1993-12-31

    Advanced mathematical techniques and computer simulation play a major role in providing enhanced understanding of conventional materials processing operations such as welding and joining. Many of these numerical models are highly compute-intensive. It is not unusual for an analysis to require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. This paper briefly describes massively parallel computational research at the ORNL with the objective of providing fundamental insight into the welding process.

  10. The Fermilab central computing facility architectural model

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1989-12-01

The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM computing engine, ACP farms, and (primary) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for code management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab.

  11. Quantum vertex model for reversible classical computing

    NASA Astrophysics Data System (ADS)

    Chamon, C.; Mucciolo, E. R.; Ruckenstein, A. E.; Yang, Z.-C.

    2017-05-01

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without `learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.
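    The thermal-annealing step can be illustrated with a generic simulated-annealing sketch (a toy ferromagnetic Ising chain, not the paper's vertex model or the Chimera mapping):

```python
import math
import random

# Generic simulated annealing on a 1-D ferromagnetic Ising chain with
# H = -sum_i s_i * s_{i+1}; the ground state has all spins aligned.
def energy(s):
    return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))

random.seed(0)
n = 20
s = [random.choice([-1, 1]) for _ in range(n)]

T = 2.0
while T > 0.01:
    for _ in range(200):
        i = random.randrange(n)
        old = energy(s)
        s[i] = -s[i]                     # propose a single-spin flip
        dE = energy(s) - old
        if dE > 0 and random.random() >= math.exp(-dE / T):
            s[i] = -s[i]                 # reject the uphill move (Metropolis)
    T *= 0.95                            # cool the temperature

print(energy(s))  # should be at or near the ground-state energy -(n-1) = -19
```

    The chain has no thermodynamic phase transition, so annealing relaxes to the ground state easily; the paper's point is that its vertex-model encoding of computation retains this benign property for arbitrary circuits.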

  12. Quantum vertex model for reversible classical computing.

    PubMed

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  13. Quantum vertex model for reversible classical computing

    PubMed Central

    Chamon, C.; Mucciolo, E. R.; Ruckenstein, A. E.; Yang, Z.-C.

    2017-01-01

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without ‘learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing. PMID:28497790

  14. The simulation model of the computer cluster

    NASA Astrophysics Data System (ADS)

    Sokolova, V. V.; Zamyatina, O. M.

    2017-01-01

Simulation is often used when it is impossible to carry out experiments with real complex objects. The article describes a simulation model of a computer cluster. Parameters that affect cluster performance were selected, a simulation model was designed, and experiments were conducted. The model made it possible to find the optimal cluster configuration, which consists of five computers.

  15. Fast Computation of the Inverse CMH Model

    NASA Technical Reports Server (NTRS)

    Patel, Umesh D.; Torre, Edward Della; Day, John H. (Technical Monitor)

    2001-01-01

A fast computational method based on a differential equation approach to the inverse DOK model has been extended to the inverse CMH model. A cobweb technique for calculating the inverse CMH model is also presented. The two techniques differ in flexibility and computation time.

  16. Computer Modeling of a Fusion Plasma

    SciTech Connect

    Cohen, B I

    2000-12-15

    Progress in the study of plasma physics and controlled fusion has been profoundly influenced by dramatic increases in computing capability. Computational plasma physics has become an equal partner with experiment and traditional theory. This presentation illustrates some of the progress in computer modeling of plasma physics and controlled fusion.

  17. A computational model for feature binding.

    PubMed

    Shi, ZhiWei; Shi, ZhongZhi; Liu, Xi; Shi, ZhiPing

    2008-05-01

The "Binding Problem" is an important problem across many disciplines, including psychology, neuroscience, computational modeling, and even philosophy. In this work, we propose a novel computational model, the Bayesian Linking Field Model, for feature binding in visual perception, combining the ideas of a noisy neuron model, Bayesian methods, the Linking Field Network, and a competitive mechanism. Simulation experiments demonstrated that our model successfully fulfilled the task of feature binding in visual perception and provided some enlightening ideas for future research.

  18. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
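    The data-flow firing rule underlying such models can be sketched as follows (an illustrative token-based simulation on a tiny acyclic graph; the assumptions are mine, not the paper's formalism):

```python
from collections import defaultdict

# A node fires once every input edge carries a token; firing consumes the
# input tokens and emits one token on each output edge (single-token,
# acyclic case). Source nodes, with no inputs, may fire immediately.
edges = [('a', 'c'), ('b', 'c'), ('c', 'd')]
inputs, outputs = defaultdict(set), defaultdict(set)
for u, v in edges:
    inputs[v].add(u)
    outputs[u].add(v)

nodes = {'a', 'b', 'c', 'd'}
token = {e: False for e in edges}
fired = []

progress = True
while progress:
    progress = False
    for n in sorted(nodes - set(fired)):
        if all(token[(u, n)] for u in inputs[n]):   # firing rule
            for u in inputs[n]:
                token[(u, n)] = False               # consume inputs
            for v in outputs[n]:
                token[(n, v)] = True                # emit outputs
            fired.append(n)
            progress = True

print(fired)  # one admissible schedule: ['a', 'b', 'c', 'd']
```

    Because firing depends only on local token availability, independent nodes may fire in any order or in parallel, which is what makes the graph a natural model of asynchronous concurrent computation.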

  19. Predictive Models and Computational Toxicology

    EPA Science Inventory

    Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was l...

  1. Connectionist Models for Intelligent Computation.

    DTIC Science & Technology

    1988-08-31

G. Z. Sun, Y. C. Lee and H. H. Chen, Department of Physics and Astronomy and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. ABSTRACT: ... distributed in the network. II. TRAINING OF THE NETWORK: The stereo vision is achieved by detecting the binocular disparity of the two images observed by ...

  2. Computational modelling of dump combustors flowfield

    NASA Technical Reports Server (NTRS)

    Lentini, D.; Jones, W. P.

    1991-01-01

    A computational model aimed at predicting the flowfield of dump combustors is presented. The turbulent combustion model is based on the conserved scalar approach and on a convenient specification of its probability density function, which reduces the computation of the mean density to a closed form. Turbulence is modeled by means of the k-epsilon model. The averaged conservation equations are solved by a technique based on a staggered grid and on the SIMPLE solver. The computational model is applied to a simple dump combustor to assess the computer time requirements and accuracy. The turbulent combustion model is shown to reduce the computer time by an order of magnitude when compared to evaluating the mean density by numerical quadrature.

  3. Computational models and resource allocation for supercomputers

    SciTech Connect

Mauney, J.; Harcourt, E. A.; Agrawal, D. P. (Dept. of Electrical and Computer Engineering); Choe, Y. K.; Kim, S.; Staats, W. J.

    1989-12-01

    Supercomputers are capable of providing tremendous computational power, but must be carefully programmed to take advantage of that power. There are several different architectures used in supercomputers, with differing computational models. These different models present a variety of resource allocation problems that must be solved. The computational needs of a program must be cast in terms of the computational model supported by the supercomputer, and this must be done in a way that makes effective use of the machine's resources. This is the resource allocation problem. The computational models of available supercomputers and the associated resource allocation techniques are surveyed. It is shown that many problems and solutions appear repeatedly in very different computing environments.

  4. Climate Ocean Modeling on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Wang, P.; Cheng, B. N.; Chao, Y.

    1998-01-01

    Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.

  5. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

We will present our recent studies on computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the resulting free energies in solution and the mechanism of deprotonation have been topics of considerable uncertainty. We have computed deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal

  6. Applications of computer modeling to fusion research

    SciTech Connect

    Dawson, J.M.

    1989-01-01

Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.

  7. Leverage points in a computer model

    NASA Astrophysics Data System (ADS)

    Janošek, Michal

    2016-06-01

This article analyzes leverage points (a concept developed by D. Meadows) in a computer model. The goal is to find out whether such leverage points can be identified in a computer model (using a predator-prey model as an example) and to determine how the particular parameters, their ranges, and the monitored variables of the model are associated with the concept of leverage points.
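    For concreteness, a predator-prey (Lotka-Volterra) model of the kind mentioned above can be probed for parameter leverage by nudging one rate and comparing outcomes (all parameter values here are hypothetical, chosen only for illustration):

```python
# Euler-integrated Lotka-Volterra predator-prey model.
def simulate(alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
             prey=10.0, pred=5.0, dt=0.001, steps=20000):
    for _ in range(steps):
        dprey = alpha * prey - beta * prey * pred    # prey growth minus predation
        dpred = delta * prey * pred - gamma * pred   # predator gain minus death
        prey, pred = prey + dt * dprey, pred + dt * dpred
    return prey, pred

base = simulate()
tweaked = simulate(gamma=1.6)   # nudge one parameter to probe its leverage
print(base, tweaked)            # the shift in outcome gauges that parameter's leverage
```

    Repeating the nudge across each parameter and range ranks them by how strongly the monitored variables respond, which is the operational sense of a leverage point.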

  8. Model Railroading and Computer Fundamentals

    ERIC Educational Resources Information Center

    McCormick, John W.

    2007-01-01

    Less than one half of one percent of all processors manufactured today end up in computers. The rest are embedded in other devices such as automobiles, airplanes, trains, satellites, and nearly every modern electronic device. Developing software for embedded systems requires a greater knowledge of hardware than developing for a typical desktop…

  10. Computational modeling of peripheral pain: a commentary.

    PubMed

    Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S

    2015-06-11

    This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.

  11. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  13. Geometric Modeling for Computer Vision

    DTIC Science & Technology

    1974-10-01

Vision and Artificial Intelligence could lead to robots, androids and cyborgs which will be able to see, to think and to feel conscious ... the construction of computer representations of physical objects, cameras, images and light for the sake of simulating their behavior. In Artificial ... specifically, I wish to exclude the connotation that the theory is a natural theory of vision. Perhaps there can be such a thing as an artificial theory

  14. Computational Model for Armor Penetration

    DTIC Science & Technology

    1987-10-01

... the penetration calculation with a slide line in the target; the impact velocity was artificially raised to avoid impact of the projectile sides onto ... Lagrangian equations governing motion of a continuous medium. The solution technique is called the method of artificial viscosity because of the ... fronts, although no discontinuities occur in the computed flow field. With this artificial viscosity method, the equations of continuous flow can be

  15. Computational Modeling of Supercritical and Transcritical Flows

    DTIC Science & Technology

    2017-01-11

Briefing charts, 01 January 2017 - 31 January 2017; Distribution Unlimited, PA Clearance 17031. Computational Modeling of Supercritical and Transcritical Flows, Matthew Harvazinski, Guilhem Lacaze, Joseph ... Products, T >> Tc. Significant problems arise in trying to model sub-scale liquid rocket engine (LRE) injector experiments because of real gas effects.

  16. Introducing Seismic Tomography with Computational Modeling

    NASA Astrophysics Data System (ADS)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
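
    The inverse step behind such exercises can be illustrated with the classic straight-ray travel-time setup. This is a classroom-style sketch (not code from the article, which uses Modellus): slowness values in grid cells are recovered from ray travel times using the Kaczmarz/ART iteration.

    ```python
    import math

    def art_invert(rays, times, n_cells, sweeps=2000):
        """Recover cell slownesses from ray path lengths and travel times
        via the Kaczmarz (ART) row-projection iteration."""
        s = [0.0] * n_cells
        for _ in range(sweeps):
            for L, t in zip(rays, times):
                norm = sum(l * l for l in L)
                resid = (t - sum(l * x for l, x in zip(L, s))) / norm
                s = [x + resid * l for x, l in zip(s, L)]
        return s

    # Cells numbered [0 1; 2 3], each 1 km square; cell 3 is a slow anomaly.
    true_s = [0.5, 0.5, 0.5, 0.8]            # slowness in s/km
    r2 = math.sqrt(2.0)
    rays = [                                  # path length of each ray in each cell
        [1, 1, 0, 0], [0, 0, 1, 1],           # two horizontal rays
        [1, 0, 1, 0], [0, 1, 0, 1],           # two vertical rays
        [r2, 0, 0, r2],                       # one diagonal ray (makes the system full rank)
    ]
    times = [sum(l * s for l, s in zip(L, true_s)) for L in rays]

    est = art_invert(rays, times, 4)          # recovers the anomaly in cell 3
    ```

    With only row and column rays the system is rank-deficient; the diagonal ray removes the ambiguity, which is itself a useful teaching point about ray coverage.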

  17. A frequentist approach to computer model calibration

    SciTech Connect

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
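
    A much-simplified sketch of the two-step idea: fit the calibration parameter by least squares, then bootstrap for a confidence region. The model function, grid search, and residual bootstrap below are illustrative stand-ins; the paper's semiparametric discrepancy function is omitted entirely.

    ```python
    import math, random

    def sim(theta, x):
        """Hypothetical stand-in for an expensive computer model."""
        return math.exp(-theta * x)

    def calibrate(xs, ys, grid):
        """Step 1: choose theta minimising the squared misfit over a grid."""
        return min(grid, key=lambda th: sum((y - sim(th, x)) ** 2
                                            for x, y in zip(xs, ys)))

    def bootstrap_ci(xs, ys, grid, B=200, seed=1):
        """Step 2: residual bootstrap for an approximate 95% interval."""
        rng = random.Random(seed)
        th = calibrate(xs, ys, grid)
        resid = [y - sim(th, x) for x, y in zip(xs, ys)]
        ests = sorted(
            calibrate(xs, [sim(th, x) + rng.choice(resid) for x in xs], grid)
            for _ in range(B)
        )
        return th, ests[int(0.025 * B)], ests[int(0.975 * B)]

    # Synthetic "field data" from a true theta = 0.7 plus small noise.
    xs = [0.1 * i for i in range(1, 21)]
    noise = random.Random(0)
    ys = [sim(0.7, x) + noise.gauss(0, 0.01) for x in xs]
    grid = [i / 100 for i in range(1, 201)]
    theta_hat, lo, hi = bootstrap_ci(xs, ys, grid)
    ```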

  19. Computational and dynamic models in neuroimaging

    PubMed Central

    Friston, Karl J.; Dolan, Raymond J.

    2010-01-01

This article reviews the substantial impact computational neuroscience has had on neuroimaging over the past years. It builds on the distinction between models of the brain as a computational machine and computational models of neuronal dynamics per se; i.e., models of brain function and biophysics. Both sorts of model borrow heavily from computational neuroscience, and both have enriched the analysis of neuroimaging data and the type of questions we address. To illustrate the role of functional models in imaging neuroscience, we focus on optimal control and decision (game) theory; the models used here provide a mechanistic account of neuronal computations and the latent (mental) states represented by the brain. In terms of biophysical modelling, we focus on dynamic causal modelling, with a special emphasis on recent advances in neural-mass models for hemodynamic and electrophysiological time series. Each example emphasises the role of generative models, which embed our hypotheses or questions, and the importance of model comparison (i.e., hypothesis testing). We return to this theme when contextualising recent trends in relation to each other. PMID:20036335

  20. Ranked retrieval of Computational Biology models

    PubMed Central

    2010-01-01

Background The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding among potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Results Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. Conclusions The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models. PMID:20701772
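
    The relevance ranking described can be illustrated with plain TF-IDF cosine scoring over model annotation text. This is a generic Information Retrieval sketch, not the BioModels Database implementation; the example "annotations" are invented.

    ```python
    import math
    from collections import Counter

    def tfidf_rank(query, docs):
        """Rank documents by TF-IDF cosine similarity to the query;
        returns (score, doc_index) pairs, best first."""
        N = len(docs)
        toks = [d.lower().split() for d in docs]
        df = Counter(t for d in toks for t in set(d))
        idf = {t: math.log(N / df[t]) + 1.0 for t in df}

        def vec(words):
            tf = Counter(w for w in words if w in idf)
            return {t: c * idf[t] for t, c in tf.items()}

        def cos(a, b):
            dot = sum(a[t] * b.get(t, 0.0) for t in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        q = vec(query.lower().split())
        return sorted(((cos(q, vec(d)), i) for i, d in enumerate(toks)),
                      reverse=True)

    # Toy annotation strings standing in for model meta-information.
    models = [
        "glycolysis kinetic model yeast metabolism",
        "calcium oscillation signalling cardiac myocyte",
        "cell cycle regulation cyclin model",
    ]
    ranking = tfidf_rank("calcium signalling model", models)
    ```

    The rare terms "calcium" and "signalling" carry high IDF weight, so the second model ranks first even though it lacks the common term "model".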

  1. Computational Model for Corneal Transplantation

    NASA Astrophysics Data System (ADS)

    Cabrera, Delia

    2003-10-01

    We evaluated the refractive consequences of corneal transplants using a biomechanical model with homogeneous and inhomogeneous Young's modulus distributions within the cornea, taking into account ablation of some stromal tissue. A FEM model was used to simulate corneal transplants in diseased cornea. The diseased cornea was modeled as an axisymmetric structure taking into account a nonlinearly elastic, isotropic formulation. The model simulating the penetrating keratoplasty procedure gives more change in the postoperative corneal curvature when compared to the models simulating the anterior and posterior lamellar graft procedures. When a lenticle shaped tissue was ablated in the graft during the anterior and posterior keratoplasty, the models provided an additional correction of about -3.85 and -4.45 diopters, respectively. Despite the controversy around the corneal thinning disorders treatment with volume removal procedures, results indicate that significant changes in corneal refractive power could be introduced by a corneal transplantation combined with myopic laser ablation.

  2. Computational Model Optimization for Enzyme Design Applications

    DTIC Science & Technology

    2007-11-02

    naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.

  3. Computer modeling of human decision making

    NASA Technical Reports Server (NTRS)

    Gevarter, William B.

    1991-01-01

    Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included as well as models which include motivation. Both models which have associated computer programs, and those that do not, are considered. Since flow diagrams, that assist in constructing computer simulation of such models, were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in construction of more realistic future simulations of human decision making.

  4. A new epidemic model of computer viruses

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-06-01

This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincare-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to suppress virus prevalence.
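
    The flavor of such a model can be sketched with a generic SIS-style rate equation in which a constant term stands in for infection via removable media (illustrative, not the paper's exact equations). With that term present, I = 0 is never an equilibrium, and for the parameters below the quadratic steady-state condition has the unique positive root I = 0.5:

    ```python
    def simulate(beta=0.3, gamma=0.2, eps=0.05, i0=0.0, dt=0.01, steps=100000):
        """Euler-integrate dI/dt = (beta*I + eps)*(1 - I) - gamma*I,
        where eps models infection through removable storage media."""
        i = i0
        for _ in range(steps):
            i += dt * ((beta * i + eps) * (1.0 - i) - gamma * i)
        return i

    # No virus-free equilibrium: starting from zero infections converges
    # to the same endemic level as starting from a heavy outbreak,
    # illustrating the global stability result.
    from_zero = simulate(i0=0.0)
    from_high = simulate(i0=0.9)
    ```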

  5. Applications of Computational Modeling in Cardiac Surgery

    PubMed Central

    Lee, Lik Chuan; Genet, Martin; Dang, Alan B.; Ge, Liang; Guccione, Julius M.; Ratcliffe, Mark B.

    2014-01-01

    Although computational modeling is common in many areas of science and engineering, only recently have advances in experimental techniques and medical imaging allowed this tool to be applied in cardiac surgery. Despite its infancy in cardiac surgery, computational modeling has been useful in calculating the effects of clinical devices and surgical procedures. In this review, we present several examples that demonstrate the capabilities of computational cardiac modeling in cardiac surgery. Specifically, we demonstrate its ability to simulate surgery, predict myofiber stress and pump function, and quantify changes to regional myocardial material properties. In addition, issues that would need to be resolved in order for computational modeling to play a greater role in cardiac surgery are discussed. PMID:24708036

  6. Computational modeling of ultraviolet disinfection.

    PubMed

    Younis, B A; Yang, T H

    2010-01-01

    The efficient design of ultraviolet light (UV) systems for water and wastewater treatment requires detailed knowledge of the patterns of fluid motion that occur in the disinfection channel. This knowledge is increasingly being obtained using Computational Fluid Dynamics (CFD) software packages that solve the equations governing turbulent fluid-flow motion. In this work, we present predictions of the patterns of flow and the extent of disinfection in a conventional reactor consisting of an open channel with an array of UV lamps placed with their axes perpendicular to the direction of flow. It is shown that the resulting flow is inherently unsteady due to the regular shedding of vortices from the submerged lamps. It is also shown that the accurate prediction of the hydraulic residence time and, consequently, the extent of disinfection is strongly dependent on the ability of the CFD method to capture the occurrence and strength of the vortex shedding, and its effects on the turbulent mixing processes.
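
    The link between residence time and disinfection can be sketched with a first-order dose-response relation (rate constant and fluence rate below are illustrative, not from this study): survival N/N0 = exp(-k · I · t), where t is the hydraulic residence time the CFD model must predict.

    ```python
    import math

    def survival(intensity, t, k=0.001):
        """Fraction of organisms surviving a UV dose = intensity * time
        (first-order inactivation kinetics; k is illustrative)."""
        return math.exp(-k * intensity * t)

    def residence_time_for_log_kill(logs, intensity, k=0.001):
        """Hydraulic residence time needed for a given log10 inactivation."""
        return logs * math.log(10) / (k * intensity)

    t99 = residence_time_for_log_kill(2, intensity=10.0)   # 2-log (99%) kill
    short_circuit = survival(10.0, 0.5 * t99)              # under-dosed path
    ```

    A flow path that short-circuits the reactor at half the design residence time only achieves 1-log inactivation, which is why capturing vortex shedding and mixing matters for the dose prediction.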

  7. Enhanced absorption cycle computer model

    NASA Astrophysics Data System (ADS)

    Grossman, G.; Wilk, M.

    1993-09-01

    Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
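
    The unit-subroutine idea reduces to enforcing component energy balances and reporting a coefficient of performance. A minimal energy-balance sketch (duties are illustrative numbers, not results from the report):

    ```python
    def cop(q_gen, q_evap, w_pump=0.0):
        """Cooling coefficient of performance of an absorption cycle:
        cooling delivered per unit of driving heat (plus pump work)."""
        return q_evap / (q_gen + w_pump)

    # Illustrative duties in kW for a single-effect LiBr-H2O chiller;
    # heat in (generator + evaporator) must equal heat out (condenser + absorber).
    q_gen, q_evap = 140.0, 100.0
    q_cond, q_abs = 110.0, 130.0
    balance = (q_gen + q_evap) - (q_cond + q_abs)
    cycle_cop = cop(q_gen, q_evap)
    ```

    A COP below one is typical of single-effect cycles; the multistage configurations the report simulates exist precisely to raise this figure.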

  8. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  9. Building a Computable Facility Model

    DTIC Science & Technology

    2002-10-01

Building Composer; facility design; facility management; Fort Future; decision support tools; installation design; integrated software; simulation modeling

  11. Computer-Based Modeling Environments

    DTIC Science & Technology

    1988-12-01

and Kernighan), CAMPS (Lucas and Mitra), GAMS (Bisschop and Meeraus), LINGO (Cunningham and Schrage), LPL (Hurlimann and...times; and Vo, which describes the integration approach used by a UNIX-based analytical modeling environment at AT&T Bell Laboratories called...platform such as UNIX, as ANALYTICOL does (Childs and Meacham). Or one might build a modeling environment around a suitable, and probably relational

  12. Computer Model Locates Environmental Hazards

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
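
    The "retain only the darkest parts of an image" step can be sketched as simple intensity thresholding (the threshold value and the tiny grayscale patch below are illustrative, not 2E's actual model):

    ```python
    def darkest_mask(image, threshold):
        """Keep only pixels darker than `threshold` (1 = candidate tire pixel,
        since rubber piles appear nearly black in satellite imagery)."""
        return [[1 if px < threshold else 0 for px in row] for row in image]

    def candidate_pixels(mask):
        """Count retained dark pixels, e.g. to flag a scene for review."""
        return sum(sum(row) for row in mask)

    # Tiny illustrative grayscale patch (0 = black, 255 = white).
    image = [
        [200, 190, 30],
        [185, 25, 28],
        [210, 200, 195],
    ]
    mask = darkest_mask(image, threshold=50)
    n_dark = candidate_pixels(mask)
    ```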

  13. Computer Modeling of Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Hashim, Rauzah

    This chapter outlines the methodologies and models which are commonly used in the simulation of liquid crystals. The approach in the simulation of liquid crystals has always been to understand the nature of the phase and to relate this to fundamental molecular features such as geometry and intermolecular forces, before important properties related to certain applications are elucidated. Hence, preceding the description of the main "molecular-based" models for liquid crystals, a general but brief outline of the nature of liquid crystals and their historical development is given. Three main model classes, namely the coarse-grained single-site lattice and Gay-Berne models and the full atomistic model will be described here where for each a brief review will be given followed by assessment of its application in describing the phase phenomena with an emphasis on understanding the molecular organization in liquid crystal phases and the prediction of their bulk properties. Variants and hybrid models derived from these classes and their applications are given.

  14. Computational Viscoplasticity Based on Overstress (CVBO) Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zheng; Ruggles-wrenn, Marina; Fish, Jacob

    2014-03-01

    This article presents an efficient computational viscoplasticity based on an overstress (CVBO) model, including three-dimensional formulation, implicit stress update procedures, consistent tangent, and systematic calibration of the model parameters to experimental data. The model has been validated for PMR 15 neat resin, including temperature and aging dependence.
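
    The driving idea of an overstress model is that viscoplastic flow is proportional to the excess of stress over an equilibrium stress. A one-dimensional, generic Perzyna-type sketch (not the paper's exact VBO equations; explicit Euler is used here for brevity, whereas the paper develops an implicit update with a consistent tangent):

    ```python
    def stress_update(E=200e3, k=1e4, sigma_eq=300.0, rate=1e-3,
                      dt=0.01, steps=20000):
        """Strain-controlled loading of a 1-D overstress material:
        stress rate = E * (total strain rate - overstress / k)."""
        sigma = 0.0
        for _ in range(steps):
            over = max(sigma - sigma_eq, 0.0)        # overstress
            sigma += E * (rate - over / k) * dt      # elastic minus viscous flow
        return sigma

    # Stress saturates where viscous flow absorbs the whole strain rate,
    # i.e. at sigma_eq + k * rate (= 310 MPa for the defaults above).
    sigma_ss = stress_update()
    ```

    Doubling the strain rate doubles the steady overstress, the rate sensitivity characteristic of viscoplasticity.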

  15. Comprehensive silicon solar-cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

A comprehensive silicon solar cell computer modeling scheme was developed to perform the following tasks: (1) modeling and analysis of the net charge distribution in quasineutral regions; (2) experimentally determined temperature behavior of Spire Corp. n+pp+ solar cells where the n+-emitter is formed by ion implantation of 75As or 31P; and (3) initial validation results of the computer simulation program using Spire Corp. n+pp+ cells.

  16. ESPC Computational Efficiency of Earth System Models

    DTIC Science & Technology

    2014-09-30

ESPC Computational Efficiency of Earth System Models (approved for public release; distribution is unlimited)...optimization in this system. Figure 1 – Plot showing seconds per forecast day wallclock time for a T639L64 (~21 km at the equator) NAVGEM

  17. Parallel computing in atmospheric chemistry models

    SciTech Connect

    Rotman, D.

    1996-02-01

    Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.

  18. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.
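
    The "microcircuits as building blocks" idea can be sketched as small reusable processing stages chained into a model. The stage names and parameters below are illustrative inventions, not the paper's actual library:

    ```python
    def low_pass(tau):
        """Leaky temporal integrator: y += (x - y) / tau per sample."""
        def stage(signal):
            y, out = 0.0, []
            for x in signal:
                y += (x - y) / tau
                out.append(y)
            return out
        return stage

    def gain_control(sigma):
        """Divisive adaptation: y = x / (sigma + |x|), bounding the response."""
        def stage(signal):
            return [x / (sigma + abs(x)) for x in signal]
        return stage

    def chain(*stages):
        """Compose microcircuits into a retina model."""
        def model(signal):
            for s in stages:
                signal = s(signal)
            return signal
        return model

    # A toy "retina": temporal filtering followed by gain control,
    # driven by a step of light after 10 dark samples.
    retina = chain(low_pass(tau=5.0), gain_control(sigma=1.0))
    response = retina([0.0] * 10 + [10.0] * 40)
    ```

    Different combinations of the same stages would stand in for different pathways, which is the reuse hypothesis the paper validates against electrophysiological recordings.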

  19. Computer Performance Modeling Tool (CPMT).

    DTIC Science & Technology

    1984-12-01

A. TEST MODEL #1; B. TEST MODEL #2; C. HYPOTHESIS TESTING OF RESPONSE...which is updated by the CPMT Update Module. RECFILE.LAT file maintenance and organization is described in Chapter 5. MESSAGES.DAT is a sequential file

  20. Computer Modeling of Direct Metal Laser Sintering

    NASA Technical Reports Server (NTRS)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
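
    A greatly simplified 1-D sketch of the underlying thermal problem (material values are illustrative; the actual model uses MSC SINDA in three dimensions with laser motion and powder-layer addition): explicit finite-difference conduction with a laser heat flux at the surface and the build plate held at ambient temperature.

    ```python
    def heat_step(T, alpha, dx, dt, q_surf, k):
        """One explicit finite-difference conduction step; laser flux is
        imposed as a gradient boundary condition at node 0."""
        Tn = T[:]
        for i in range(1, len(T) - 1):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
        Tn[0] = Tn[1] + q_surf * dx / k       # surface flux boundary
        Tn[-1] = T[-1]                        # build plate held fixed
        return Tn

    # Illustrative properties; note alpha*dt/dx^2 = 0.2 <= 0.5 for stability.
    alpha, dx, dt, k = 1e-5, 1e-4, 2e-4, 20.0
    T = [300.0] * 50
    peak = 300.0
    for step in range(500):
        q = 5e5 if step < 250 else 0.0        # laser on, then off (cool-down)
        T = heat_step(T, alpha, dx, dt, q, k)
        peak = max(peak, max(T))
    ```

    The temperature history (heating pass followed by cool-down) is exactly the kind of record used to evaluate residual stresses in the finished part.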

  1. Economic Analysis. Computer Simulation Models.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This volume of the text discusses the simulation of behavioral relationships among variable elements in an economy and presents…

  2. Computational study of lattice models

    NASA Astrophysics Data System (ADS)

    Zujev, Aleksander

This dissertation is composed of the descriptions of a few projects undertaken to complete my doctorate at the University of California, Davis. Different as they are, their common feature is that they all deal with simulations of lattice models, and physics which results from interparticle interactions. As an example, both the Feynman-Kikuchi model (Chapter 3) and Bose-Fermi mixture (Chapter 4) deal with the conditions under which superfluid transitions occur. The dissertation is divided into two parts. Part I (Chapters 1-2) is theoretical. It describes the systems we study - superfluidity and particularly superfluid helium, and optical lattices. The numerical methods of working with them are described. The use of Monte Carlo methods is another unifying theme of the different projects in this thesis. Part II (Chapters 3-6) deals with applications. It consists of 4 chapters describing different projects. Two of them, Feynman-Kikuchi model, and Bose-Fermi mixture are finished and published. The work done on the t - J model, described in Chapter 5, is more preliminary, and the project is far from complete. A preliminary report on it was given at the 2009 APS March Meeting. The Isentropic project, described in the last chapter, is finished. A report on it was given at the 2010 APS March Meeting, and a paper is in preparation. The quantum simulation program used for the Bose-Fermi mixture project was written by our collaborators Valery Rousseau and Peter Denteneer. I had written my own code for the other projects.
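
    The kind of lattice-model Monte Carlo the dissertation describes can be illustrated with the standard textbook Metropolis algorithm on a small 2-D Ising lattice (a generic sketch, not the author's code):

    ```python
    import math, random

    def metropolis(L=4, T=1.0, sweeps=2000, seed=2):
        """Metropolis Monte Carlo for an L x L Ising lattice with periodic
        boundaries; returns the mean |magnetization| after equilibration."""
        rng = random.Random(seed)
        s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

        def nbrs(i, j):
            return (s[(i+1) % L][j] + s[(i-1) % L][j]
                    + s[i][(j+1) % L] + s[i][(j-1) % L])

        mags = []
        for sweep in range(sweeps):
            for _ in range(L * L):
                i, j = rng.randrange(L), rng.randrange(L)
                dE = 2.0 * s[i][j] * nbrs(i, j)      # energy cost of a flip
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    s[i][j] = -s[i][j]               # accept the flip
            if sweep >= sweeps // 2:                 # measure after burn-in
                mags.append(abs(sum(map(sum, s))) / (L * L))
        return sum(mags) / len(mags)

    # Well below the critical temperature the lattice orders strongly.
    m_abs = metropolis(T=1.0)
    ```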

  3. Computational modeling of peptide-aptamer binding.

    PubMed

    Rhinehardt, Kristen L; Mohan, Ram V; Srinivas, Goundla

    2015-01-01

    Evolution is the progressive process that holds each living creature in its grasp. From strands of DNA evolution shapes life with response to our ever-changing environment and time. It is the continued study of this most primitive process that has led to the advancement of modern biology. The success and failure in the reading, processing, replication, and expression of genetic code and its resulting biomolecules keep the delicate balance of life. Investigations into these fundamental processes continue to make headlines as science continues to explore smaller scale interactions with increasing complexity. New applications and advanced understanding of DNA, RNA, peptides, and proteins are pushing technology and science forward and together. Today the addition of computers and advances in science has led to the fields of computational biology and chemistry. Through these computational advances it is now possible not only to quantify the end results but also visualize, analyze, and fully understand mechanisms by gaining deeper insights. The biomolecular motion that exists governing the physical and chemical phenomena can now be analyzed with the advent of computational modeling. Ever-increasing computational power combined with efficient algorithms and components are further expanding the fidelity and scope of such modeling and simulations. This chapter discusses computational methods that apply biological processes, in particular computational modeling of peptide-aptamer binding.

  4. Evaluation and Comparison of Computational Models

    PubMed Central

    Myung, Jay; Tang, Yun; Pitt, Mark A.

    2009-01-01

    Computational models are powerful tools that can enhance the understanding of scientific phenomena. The enterprise of modeling is most productive when the reasons underlying the adequacy of a model, and possibly its superiority to other models, are understood. This chapter begins with an overview of the main criteria that must be considered in model evaluation and selection, in particular explaining why generalizability is the preferred criterion for model selection. This is followed by a review of measures of generalizability. The final section demonstrates the use of five versatile and easy-to-use selection methods for choosing between two mathematical models of protein folding. PMID:19216931
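
    Criterion-based selection of the sort reviewed here can be illustrated with AIC, a standard generalizability-style measure that trades fit against parameter count (this is a generic sketch; the chapter's five selection methods are not reproduced). Lower AIC is preferred:

    ```python
    import math

    def aic(log_likelihood, n_params):
        """Akaike information criterion: penalised negative fit."""
        return 2 * n_params - 2 * log_likelihood

    def gaussian_loglik(resid, sigma):
        """Log-likelihood of residuals under iid Gaussian noise."""
        n = len(resid)
        return (-0.5 * n * math.log(2 * math.pi * sigma**2)
                - sum(r * r for r in resid) / (2 * sigma**2))

    # Same data, two candidate fits: model B fits slightly better but
    # spends three extra parameters, so AIC can still prefer model A.
    resid_a = [0.10, -0.08, 0.12, -0.09, 0.11, -0.10]
    resid_b = [0.09, -0.08, 0.11, -0.09, 0.10, -0.09]
    aic_a = aic(gaussian_loglik(resid_a, 0.1), n_params=2)
    aic_b = aic(gaussian_loglik(resid_b, 0.1), n_params=5)
    best = "A" if aic_a < aic_b else "B"
    ```

    This is exactly the point the chapter makes about generalizability: goodness of fit alone would pick B, while the penalised criterion picks the simpler model.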

  5. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A; Wickett, M E; Duffy, P B; Rotman, D A

    2005-03-03

    The Center for Applied Scientific Computing (CASC) and the LLNL Atmospheric Science Division (ASD) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. As part of LLNL's participation in DOE's Scientific Discovery through Advanced Computing (SciDAC) program, members of CASC and ASD are collaborating with other DOE labs and NCAR in the development of a comprehensive, next-generation global climate model. This model incorporates the most current physics and numerics and capably exploits the latest massively parallel computers. One of LLNL's roles in this collaboration is the scalable parallelization of NASA's finite-volume atmospheric dynamical core. We have implemented multiple two-dimensional domain decompositions, where the different decompositions are connected by high-speed transposes. Additional performance is obtained through shared memory parallelization constructs and one-sided interprocess communication. The finite-volume dynamical core is particularly important to atmospheric chemistry simulations, where LLNL has a leading role.
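
    The "multiple two-dimensional domain decompositions connected by transposes" idea can be sketched in index arithmetic alone (a pure-Python stand-in for the MPI implementation described; grid sizes are illustrative):

    ```python
    def block_ranges(n, parts):
        """Split n indices into `parts` contiguous, near-equal blocks."""
        base, extra = divmod(n, parts)
        out, start = [], 0
        for p in range(parts):
            size = base + (1 if p < extra else 0)
            out.append((start, start + size))
            start += size
        return out

    # Global grid: nlat x nlon cells. Decomposition A assigns each process
    # a band of latitudes; decomposition B assigns a band of longitudes.
    # The "transpose" regroups the same cells between the two layouts.
    nlat, nlon, nproc = 6, 8, 3
    grid = [[(i, j) for j in range(nlon)] for i in range(nlat)]

    dec_a = [grid[a:b] for a, b in block_ranges(nlat, nproc)]             # lat bands
    dec_b = [[row[a:b] for row in grid] for a, b in block_ranges(nlon, nproc)]
    ```

    In decomposition B every process sees all latitudes for its longitudes, which is what operations like the dynamical core's meridional sweeps require.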

  6. Mechanistic models in computational social science

    NASA Astrophysics Data System (ADS)

    Holme, Petter; Liljeros, Fredrik

    2015-09-01

Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have a history of more than 60 years. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  7. A computational model of the cerebellum

    SciTech Connect

    Travis, B.J.

    1990-01-01

    The need for realistic computational models of neural microarchitecture is growing increasingly apparent. While traditional neural networks have made inroads on understanding cognitive functions, more realism (in the form of structural and connectivity constraints) is required to explain processes such as vision or motor control. A highly detailed computational model of mammalian cerebellum has been developed. It is being compared to physiological recordings for validation purposes. The model is also being used to study the relative contributions of each component to cerebellar processing. 28 refs., 4 figs.

  8. A Computational Model of Selection by Consequences

    ERIC Educational Resources Information Center

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  10. Toward a Computational Model of Tutoring.

    ERIC Educational Resources Information Center

    Woolf, Beverly Park

    1992-01-01

    Discusses the integration of instructional science and computer science. Topics addressed include motivation for building knowledge-based systems; instructional design issues, including cognitive models, representing student intentions, and student models and error diagnosis; representing tutoring knowledge; building a tutoring system, including…

  11. Modeling User Behavior in Computer Learning Tasks.

    ERIC Educational Resources Information Center

    Mantei, Marilyn M.

    Model building techniques from Artificial Intelligence and Information-Processing Psychology are applied to human-computer interface tasks to evaluate existing interfaces and suggest new and better ones. The model is in the form of an augmented transition network (ATN) grammar which is built by applying grammar induction heuristics on a sequential…

  12. Computer Modeling and Visualization in Design Technology: An Instructional Model.

    ERIC Educational Resources Information Center

    Guidera, Stan

    2002-01-01

    Design visualization can increase awareness of issues related to perceptual and psychological aspects of design that computer-assisted design and computer modeling may not allow. A pilot university course developed core skills in modeling and simulation using visualization. Students were consistently able to meet course objectives. (Contains 16…

  13. Do's and Don'ts of Computer Models for Planning

    ERIC Educational Resources Information Center

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  15. Models of neuromodulation for computational psychiatry.

    PubMed

    Iglesias, Sandra; Tomiello, Sara; Schneebeli, Maya; Stephan, Klaas E

    2017-05-01

    Psychiatry faces fundamental challenges: based on a syndrome-based nosology, it presently lacks clinical tests to infer the disease processes that cause the symptoms of individual patients and must resort to trial-and-error treatment strategies. These challenges have fueled the recent emergence of a novel field, computational psychiatry, that strives for mathematical models of disease processes at physiological and computational (information processing) levels. This review is motivated by one particular goal of computational psychiatry: the development of 'computational assays' that can be applied to behavioral or neuroimaging data from individual patients and support differential diagnosis and guide patient-specific treatment. Because the majority of available pharmacotherapeutic approaches in psychiatry target neuromodulatory transmitters, models that infer (patho)physiological and (patho)computational actions of different neuromodulatory transmitters are of central interest for computational psychiatry. This article reviews the (many) outstanding questions on the computational roles of neuromodulators (dopamine, acetylcholine, serotonin, and noradrenaline), outlines available evidence, and discusses promises and pitfalls in translating these findings to clinical applications. WIREs Cogn Sci 2017, 8:e1420. doi: 10.1002/wcs.1420 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  16. CDF computing and event data models

    SciTech Connect

    Snider, F.D.; /Fermilab

    2005-12-01

    The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.

  17. EWE: A computer model for ultrasonic inspection

    NASA Astrophysics Data System (ADS)

    Douglas, S. R.; Chaplin, K. R.

    1991-11-01

    The computer program EWE simulates the propagation of elastic waves in solids and liquids. It was applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues.

  18. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  19. Computational disease modeling – fact or fiction?

    PubMed Central

    Tegnér, Jesper N; Compte, Albert; Auffray, Charles; An, Gary; Cedersund, Gunnar; Clermont, Gilles; Gutkin, Boris; Oltvai, Zoltán N; Stephan, Klaas Enno; Thomas, Randy; Villoslada, Pablo

    2009-01-01

    Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems. PMID:19497118

  20. An improved computational constitutive model for glass

    NASA Astrophysics Data System (ADS)

    Holmquist, Timothy J.; Johnson, Gordon R.; Gerlach, Charles A.

    2017-01-01

    In 2011, Holmquist and Johnson presented a model for glass subjected to large strains, high strain rates and high pressures. It was later shown that this model produced solutions that were severely mesh dependent, converging to a solution that was much too strong. This article presents an improved model for glass that uses a new approach to represent the interior and surface strength that is significantly less mesh dependent. This new formulation allows for the laboratory data to be accurately represented (including the high tensile strength observed in plate-impact spall experiments) and produces converged solutions that are in good agreement with ballistic data. The model also includes two new features: one that decouples the damage model from the strength model, providing more flexibility in defining the onset of permanent deformation; the other provides for a variable shear modulus that is dependent on the pressure. This article presents a review of the original model, a description of the improved model and a comparison of computed and experimental results for several sets of ballistic data. Of special interest are computed and experimental results for two impacts onto a single target, and the ability to compute the damage velocity in agreement with experimental data. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  1. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

    Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l(sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
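
    The LASSO described above can be sketched in a few lines of NumPy. The iterative soft-thresholding solver and the orthonormal candidate-term design below are illustrative assumptions (chosen so the LASSO solution has a known closed form), not the identification setup used in the study:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2          # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = b + step * (X.T @ (y - X @ b))              # gradient step
        b = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft threshold
    return b

# Orthonormal candidate-term matrix, so the LASSO solution is known in
# closed form: b_j = soft(x_j . y, lam).  True structure uses terms 0 and 2.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((200, 4)))
beta_true = np.array([2.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.01 * rng.standard_normal(200)

b = lasso_ista(X, y, lam=0.1)
selected = np.flatnonzero(b)    # structure detection: terms that survive shrinkage
```

    The exact zeros produced by the soft-threshold step are what make the LASSO usable for structure detection, as the abstract notes: terms whose correlation with the output falls below the penalty are removed from the model entirely rather than merely shrunk.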

  2. A computational model of selection by consequences.

    PubMed Central

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512
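
    The hyperbolic relation between response and reinforcement rates referred to above is Herrnstein's hyperbola, B = k*r/(r + re). In reciprocal coordinates it becomes linear, 1/B = 1/k + (re/k)*(1/r), so a least-squares line recovers both parameters; the numbers below are illustrative, not data from the model:

```python
import numpy as np

# Herrnstein's hyperbola: B = k*r / (r + re)
k_true, re_true = 100.0, 20.0
r = np.array([5.0, 10, 20, 40, 80, 160])      # reinforcement rates
B = k_true * r / (r + re_true)                # response rates (noise-free)

# Reciprocal form is linear in 1/r: 1/B = 1/k + (re/k) * (1/r)
A = np.column_stack([np.ones_like(r), 1.0 / r])
intercept, slope = np.linalg.lstsq(A, 1.0 / B, rcond=None)[0]
k_hat = 1.0 / intercept
re_hat = slope * k_hat
```

    With noise-free data the fit is exact; with real (or simulated) behavior the same regression gives the parameter estimates whose systematic variation the study examines.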

  3. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

    Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by the addition of an l(sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.

  4. Efficient Calibration of Computationally Intensive Hydrological Models

    NASA Astrophysics Data System (ADS)

    Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.

    2015-12-01

    A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
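
    The DDS half of the hybrid is a simple greedy heuristic: perturb a randomly chosen subset of parameters, with the subset shrinking as the evaluation budget is spent. A minimal sketch on a toy quadratic objective standing in for a HYDROTEL run (this is plain DDS with simple clipping at the bounds, not the authors' DDS-MADS code):

```python
import numpy as np

def dds(f, lo, hi, n_iter=2000, r=0.2, seed=1):
    """Dynamically Dimensioned Search: greedy one-point search that perturbs
    a random subset of parameters, shrinking the subset as iterations pass."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    sigma = r * (hi - lo)                        # perturbation scale per parameter
    for i in range(1, n_iter + 1):
        p = 1.0 - np.log(i) / np.log(n_iter)     # inclusion probability decays
        mask = rng.random(x.size) < p
        if not mask.any():
            mask[rng.integers(x.size)] = True    # always perturb at least one
        cand = x.copy()
        cand[mask] += sigma[mask] * rng.standard_normal(mask.sum())
        cand = np.clip(cand, lo, hi)             # simple bound handling
        fc = f(cand)
        if fc < fx:                              # greedy acceptance
            x, fx = cand, fc
    return x, fx

# Toy "model calibration": minimise a 5-parameter sphere function
lo, hi = np.full(5, -5.0), np.full(5, 5.0)
x_best, f_best = dds(lambda p: float(np.sum(p ** 2)), lo, hi)
```

    The hybrid method replaces part of this heuristic exploration with MADS's convergence-backed local polling; the greedy skeleton above is only the global-search ingredient.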

  5. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both, for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly the analysis is performed using Groebner basis, Hilbert dimension and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed by the changes in the algebraic structure of R0, the changes in Groebner basis, the changes in Hilbert dimension, and the changes in Hilbert polynomials. It is hoped that the results obtained in this paper become of importance for designing control measures against the epidemic diseases described. For future researches it is proposed the use of algebraic epidemiology to analyze models for airborne and waterborne diseases.
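
    Maple is commercial, but SymPy's groebner can illustrate the workflow on a toy SIS-type steady state (the model form and rate values below are illustrative, not those analyzed in the paper). For these rates, R0 = beta/(mu + gamma) = 3 > 1, so an endemic equilibrium exists alongside the disease-free one:

```python
from sympy import symbols, groebner, Rational

S, I = symbols('S I')
beta, gamma, mu = Rational(3, 2), Rational(1, 4), Rational(1, 4)  # sample rates

# SIS-type model at steady state: dS/dt = dI/dt = 0
f1 = mu - mu*S - beta*S*I + gamma*I
f2 = beta*S*I - (mu + gamma)*I

# A lexicographic Groebner basis triangularizes the polynomial system,
# which is how equilibria (and hence the structure of R0) can be read off
G = groebner([f1, f2], S, I, order='lex')
```

    Every element of the basis vanishes at the common zeros of the original system, e.g. at the disease-free point (S, I) = (1, 0) and at the endemic point (S, I) = (1/3, 2/3) for these rates.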

  6. Empirical Movement Models for Brain Computer Interfaces.

    PubMed

    Matlack, Charles; Chizeck, Howard; Moritz, Chet T

    2016-06-30

    For brain-computer interfaces (BCIs) which provide the user continuous position control, there is little standardization of performance metrics or evaluative tasks. One candidate metric is Fitts's law, which has been used to describe aimed movements across a range of computer interfaces, and has recently been applied to BCI tasks. Reviewing selected studies, we identify two basic problems with Fitts's law: its predictive performance is fragile, and the estimation of 'information transfer rate' from the model is unsupported. Our main contribution is the adaptation and validation of an alternative model to Fitts's law in the BCI context. We show that the Shannon-Welford model outperforms Fitts's law, showing robust predictive power when target distance and width have disproportionate effects on difficulty. Building on a prior study of the Shannon-Welford model, we show that identified model parameters offer a novel approach to quantitatively assess the role of control-display gain in speed/accuracy performance tradeoffs during brain control.
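
    The key point, that separate distance and width coefficients help when the two have disproportionate effects, can be sketched with two least-squares fits. The paper's exact Shannon-Welford formulation is not reproduced here; a common two-part Welford-style regression stands in for it, and all numbers are synthetic:

```python
import numpy as np

# Synthetic aimed-movement times where target width W affects difficulty
# more strongly than distance D (coefficients chosen for illustration)
D = np.array([64.0, 128, 256, 512, 64, 128, 256, 512])   # target distances
W = np.array([8.0, 8, 8, 8, 32, 32, 32, 32])             # target widths
MT = 0.8 + 0.10 * np.log2(D) - 0.18 * np.log2(W)         # two-part ground truth

def fit_rss(A, y):
    """Least-squares fit; return the residual sum of squares."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

ones = np.ones_like(D)
# Fitts (Shannon formulation): MT = a + b * log2(D/W + 1)
rss_fitts = fit_rss(np.column_stack([ones, np.log2(D / W + 1)]), MT)
# Welford-style two-part model: MT = a + b1 * log2(D) - b2 * log2(W)
rss_welford = fit_rss(np.column_stack([ones, np.log2(D), np.log2(W)]), MT)
```

    Because the single Fitts index collapses D and W into one ratio, conditions with equal D/W but different widths get identical predictions, so its residuals cannot vanish here, while the two-part fit is exact by construction.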

  7. Computational Spectrum of Agent Model Simulation

    SciTech Connect

    Perumalla, Kalyan S

    2010-01-01

    The study of human social behavioral systems is finding renewed interest in military, homeland security and other applications. Simulation is the most generally applied approach to studying complex scenarios in such systems. Here, we outline some of the important considerations that underlie the computational aspects of simulation-based study of human social systems. The fundamental imprecision underlying questions and answers in social science makes it necessary to carefully distinguish among different simulation problem classes and to identify the most pertinent set of computational dimensions associated with those classes. We identify a few such classes and present their computational implications. The focus is then shifted to the most challenging combinations in the computational spectrum, namely, large-scale entity counts at moderate to high levels of fidelity. Recent developments in furthering the state-of-the-art in these challenging cases are outlined. A case study of large-scale agent simulation is provided in simulating large numbers (millions) of social entities at real-time speeds on inexpensive hardware. Recent computational results are identified that highlight the potential of modern high-end computing platforms to push the envelope with respect to speed, scale and fidelity of social system simulations. Finally, the problem of shielding the modeler or domain expert from the complex computational aspects is discussed and a few potential solution approaches are identified.

  8. Computing confidence intervals for point process models.

    PubMed

    Sarma, Sridevi V; Nguyen, David P; Czanner, Gabriela; Wirth, Sylvia; Wilson, Matthew A; Suzuki, Wendy; Brown, Emery N

    2011-11-01

    Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specification of the model, estimation of model parameters given observed data, verification of the model using goodness of fit, and characterization of the model using confidence bounds. Of these steps, only the first three have been applied widely in the literature, suggesting the need to dedicate a discussion to how the time-rescaling theorem, in combination with parametric bootstrap sampling, can be generally used to compute confidence bounds of point process models. In our first example, we use a generalized linear model of spiking propensity to demonstrate that confidence bounds derived from bootstrap simulations are consistent with those computed from closed-form analytic solutions. In our second example, we consider an adaptive point process model of hippocampal place field plasticity for which no analytical confidence bounds can be derived. We demonstrate how to simulate bootstrap samples from adaptive point process models, how to use these samples to generate confidence bounds, and how to statistically test the hypothesis that neural representations at two time points are significantly different. These examples have been designed as useful guides for performing scientific inference based on point process models.
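
    The parametric-bootstrap idea in the first example can be sketched for a simple Poisson GLM of spiking propensity: fit the model, simulate counts from the fit, refit each simulated dataset, and take percentiles of the refitted parameter. The data, model, and Newton solver below are illustrative assumptions, not the paper's hippocampal models:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_poisson_glm(X, y, n_iter=25):
    """Newton-Raphson maximum likelihood for a Poisson GLM with log link."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ b)                   # conditional intensity
        H = X.T @ (mu[:, None] * X)          # Fisher information
        b = b + np.linalg.solve(H, X.T @ (y - mu))
    return b

# Synthetic spike counts whose log rate depends on one covariate
n = 500
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ beta_true))

beta_hat = fit_poisson_glm(X, y)

# Parametric bootstrap: simulate counts from the fitted model, refit,
# and take percentiles of the refitted slope as a 95% confidence bound
boot = np.array([fit_poisson_glm(X, rng.poisson(np.exp(X @ beta_hat)))[1]
                 for _ in range(200)])
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
```

    For this model closed-form (Fisher-information) intervals also exist, which is exactly the consistency check the paper's first example performs; the bootstrap machinery carries over unchanged to adaptive models with no analytic bounds.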

  9. Computational Modeling of Inflammation and Wound Healing

    PubMed Central

    Ziraldo, Cordelia; Mi, Qi; An, Gary; Vodovotz, Yoram

    2013-01-01

    Objective Inflammation is both central to proper wound healing and a key driver of chronic tissue injury via a positive-feedback loop incited by incidental cell damage. We seek to derive actionable insights into the role of inflammation in wound healing in order to improve outcomes for individual patients. Approach To date, dynamic computational models have been used to study the time evolution of inflammation in wound healing. Emerging clinical data on histo-pathological and macroscopic images of evolving wounds, as well as noninvasive measures of blood flow, suggested the need for tissue-realistic, agent-based, and hybrid mechanistic computational simulations of inflammation and wound healing. Innovation We developed a computational modeling system, Simple Platform for Agent-based Representation of Knowledge, to facilitate the construction of tissue-realistic models. Results A hybrid equation–agent-based model (ABM) of pressure ulcer formation in both spinal cord-injured and -uninjured patients was used to identify control points that reduce stress caused by tissue ischemia/reperfusion. An ABM of arterial restenosis revealed new dynamics of cell migration during neointimal hyperplasia that match histological features, but contradict the currently prevailing mechanistic hypothesis. ABMs of vocal fold inflammation were used to predict inflammatory trajectories in individuals, possibly allowing for personalized treatment. Conclusions The intertwined inflammatory and wound healing responses can be modeled computationally to make predictions in individuals, simulate therapies, and gain mechanistic insights. PMID:24527362

  10. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder-bed additive manufacturing of IN 718. The goals are to optimize material build parameters with reduced time and cost through modeling, increase understanding of build properties, increase the reliability of builds, decrease time to adoption of the process for critical hardware, and potentially decrease post-build heat treatments. The approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM meltpool data; refine the Applied Optimization powder-bed AM process model using the data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report the STK modeling results; and validate the modeling with an additional build. Findings to date: photodiode intensity measurements are highly linear with power input; melt-pool intensity is highly correlated with melt-pool size; and melt-pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.

  11. Global detailed geoid computation and model analysis

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Vincent, S.

    1974-01-01

    Comparisons and analyses were carried out through the use of detailed gravimetric geoids which we have computed by combining models with a set of 26,000 1 deg x 1 deg mean free air gravity anomalies. The accuracy of the detailed gravimetric geoid computed using the most recent Goddard earth model (GEM-6) in conjunction with the set of 1 deg x 1 deg mean free air gravity anomalies is assessed at + or - 2 meters on the continents of North America, Europe, and Australia, 2 to 5 meters in the Northeast Pacific and North Atlantic areas, and 5 to 10 meters in other areas where surface gravity data are sparse. The R.M.S. differences between this detailed geoid and the detailed geoids computed using the other satellite gravity fields in conjunction with the same set of surface data range from 3 to 7 meters.

  12. Utilizing computer models for optimizing classroom acoustics

    NASA Astrophysics Data System (ADS)

    Hinckley, Jennifer M.; Rosenberg, Carl J.

    2002-05-01

    The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.
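
    A full CATT-Acoustic simulation traces rays through a room model, but the basic dependence of reverberation time on room finishes can be illustrated with Sabine's formula, RT60 = 0.161 V / A. The room dimensions and absorption coefficients below are illustrative, not those of the simulated classroom:

```python
# Sabine's reverberation-time estimate (SI units): RT60 = 0.161 * V / A,
# where A is the total absorption = sum of (surface area * absorption coeff.)
def sabine_rt60(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

length, width, height = 10.0, 8.0, 3.0        # classroom, metres
volume = length * width * height
area_walls = 2 * (length * height) + 2 * (width * height)
area_floor = area_ceiling = length * width

# All-hard finishes (alpha ~ 0.05) vs. an absorptive ceiling tile (alpha ~ 0.7)
rt_bare = sabine_rt60(volume, [(area_floor, 0.05), (area_ceiling, 0.05),
                               (area_walls, 0.05)])
rt_treated = sabine_rt60(volume, [(area_floor, 0.05), (area_ceiling, 0.70),
                                  (area_walls, 0.05)])
```

    Moving the classroom from nearly 3 s of reverberation to around 0.6 s by treating one surface is the kind of trade-off (where to place absorptive material, and how much) that the computer model lets the designer explore before construction.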

  13. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  14. Computer modeling of loudspeaker arrays in rooms

    NASA Astrophysics Data System (ADS)

    Schwenke, Roger

    2002-05-01

    Loudspeakers present a special challenge to computational modeling of rooms. When modeling a collection of noncorrelated sound sources, such as a group of musicians, coarse resolution power spectrum and directivities are sufficient. In contrast, a typical loudspeaker array consists of many speakers driven with the same signal, and are therefore almost completely correlated. This can lead to a quite complicated, but stable, pattern of spatial nulls and lobes which depends sensitively on frequency. It has been shown that, to model these interactions accurately, one must have loudspeaker data with 1 deg spatial resolution, 1/24 octave frequency resolution including phase. It will be shown that computer models at such a high resolution can in fact inform design decisions of loudspeaker arrays.
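
    The null-and-lobe pattern of correlated sources can be seen already with two idealized monopoles driven by the same signal, compared against the flat response of an uncorrelated power sum. Spacing and frequency below are illustrative, not taken from a real array design:

```python
import numpy as np

# Far-field magnitude response of two identical monopole sources driven with
# the same (fully correlated) signal vs. an uncorrelated power sum.
c = 343.0                       # speed of sound, m/s
f = 1000.0                      # frequency, Hz
k = 2 * np.pi * f / c           # wavenumber
d = 0.5                         # source spacing, m

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
phase = k * d * np.sin(theta)               # path-length phase difference
coherent = np.abs(1 + np.exp(1j * phase))   # correlated: lobes and deep nulls
incoherent = np.sqrt(1.0 ** 2 + 1.0 ** 2)   # uncorrelated: constant power sum

null_depth = coherent.min() / coherent.max()
```

    The deep, frequency-dependent nulls of the coherent sum are why coarse power-spectrum source data suffice for a group of musicians but not for a driven array, and why the fine angular and frequency resolution (with phase) cited above is needed.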

  15. Computational models for synthetic marine infrared clutter

    NASA Astrophysics Data System (ADS)

    Constantikes, Kim T.; Zysnarski, Adam H.

    1996-06-01

    The next generation of ship defense missiles will need to engage stealthy, passive, sea-skimming missiles. Detection and guidance will occur against a background of sea surface and horizon which can present significant clutter problems for infrared seekers, particularly when targets are comparatively dim. We need a variety of sea clutter models: statistical image models for signal processing algorithm design, clutter occurrence models for systems effectiveness assessment, and constructive image models for synthesizing very large field-of-view (FOV) images with high spatial and temporal resolution. We have implemented and tested such a constructive model. First principle models of water waves and light transport provide a computationally intensive clutter model implemented as a raytracer. Our models include sea, sky, and solar radiance; reflectance; attenuating atmospheres; constructive solid geometry targets; target and water wave dynamics; and simple sensor image formation.

  16. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  18. Applications of computational modeling in ballistics

    NASA Technical Reports Server (NTRS)

    Sturek, Walter B.

    1987-01-01

    The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. Work at the BRL in these areas was traditionally highly dependent on experimental testing. Considerable emphasis was placed on developing computational modeling to augment experimental testing in the development cycle; however, the impact of computational modeling to date has been modest. With the supercomputer resources recently installed at the BRL, a new emphasis on applying computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved, some indication of the degree of success achieved, and the areas of greatest need.

  19. Optical Computing Based on Neuronal Models

    DTIC Science & Technology

    1988-05-01

    walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled... collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is... with architectures, implementations, and programming; and material research is called for. Our future research in neurodynamics will continue to...

  20. A Computational Model of Fraction Arithmetic

    ERIC Educational Resources Information Center

    Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…

  1. Informing Mechanistic Toxicology with Computational Molecular Models

    EPA Science Inventory

    Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular-level information that augments, enriches, and complements in vitro and in vivo effo...

  2. Computer Modelling of Photochemical Smog Formation

    ERIC Educational Resources Information Center

    Huebert, Barry J.

    1974-01-01

    Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)

  5. Evaluating computational models of cholesterol metabolism.

    PubMed

    Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K

    2015-10-01

    Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered. Yet important gaps in our knowledge remain. For example, knowledge on intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism that make use of ordinary differential equations, and we address whether they make assumptions and predictions in line with current knowledge of cholesterol homeostasis. Having studied the results described by the authors, we have also tested their models, primarily by testing the effect of statin treatment in each model. Ten out of eleven models tested made assumptions in line with current knowledge of cholesterol metabolism. Three out of these ten models made correct predictions, i.e., a decrease in plasma total and LDL cholesterol, or increased uptake of LDL, upon statin treatment. In conclusion, few models of cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems-biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles, and we welcome the increased understanding of cholesterol metabolism these are likely to bring.
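
    A functional test of the kind described — does plasma LDL fall when synthesis is inhibited? — can be illustrated with a deliberately toy two-pool ODE model. The model and all rate constants below are invented for illustration; none of the eleven reviewed models is reproduced here.

```python
def simulate_ldl(synthesis_rate, uptake_rate=0.5, secretion=0.8,
                 days=200.0, dt=0.01):
    """Toy two-pool model (hypothetical): a hepatic pool H is fed by de novo
    synthesis, secretes into a plasma LDL pool L, and a fraction of
    receptor-mediated uptake returns cholesterol to the liver.
    Integrated with forward Euler; returns the final plasma level."""
    H, L = 1.0, 1.0
    for _ in range(int(days / dt)):
        dH = synthesis_rate - secretion * H + 0.1 * uptake_rate * L
        dL = secretion * H - uptake_rate * L
        H += dH * dt
        L += dL * dt
    return L

ldl_baseline = simulate_ldl(synthesis_rate=1.0)
ldl_statin = simulate_ldl(synthesis_rate=0.4)  # statin inhibits synthesis
```

    The "statin test" here is simply re-running the model with a reduced synthesis rate and checking the sign of the change in plasma LDL, which is the kind of qualitative prediction only three of the published models got right.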

  6. A slab model for computing ground temperature in climate models

    NASA Technical Reports Server (NTRS)

    Lebedeff, S.; Crane, G.; Russell, G.

    1979-01-01

    A method is developed for computing the ground temperature accurately over both the diurnal and annual cycles. The ground is divided vertically into only two or three slabs, resulting in very efficient computation. Seasonal storage and release of heat is incorporated, and thus the method is well suited for use in climate models.
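
    The slab idea can be sketched numerically. The two-slab toy below (all coefficients are illustrative assumptions, not the paper's values) lets a thin surface slab follow a sinusoidal diurnal air temperature while a thick lower slab stores heat and barely moves over the run.

```python
import math

def slab_temperatures(days=10.0, dt=60.0):
    """Two-slab toy ground model: a thin surface slab is coupled to the air
    and to a thick deep slab; forward-Euler time stepping.
    Coefficients are illustrative, not taken from the paper."""
    C1, C2 = 2.0e5, 4.0e6    # heat capacities, J m^-2 K^-1 (thin, thick)
    k12, k_air = 2.0, 15.0   # conductive couplings, W m^-2 K^-1
    T1, T2 = 280.0, 280.0    # initial temperatures, K
    surface = []
    steps = int(days * 86400 / dt)
    for n in range(steps):
        t = n * dt
        T_air = 280.0 + 10.0 * math.sin(2 * math.pi * t / 86400.0)  # diurnal cycle
        dT1 = (k_air * (T_air - T1) + k12 * (T2 - T1)) / C1
        dT2 = k12 * (T1 - T2) / C2
        T1 += dT1 * dt
        T2 += dT2 * dt
        surface.append(T1)
    return surface, T2

surface, deep = slab_temperatures()
# The surface slab swings several kelvin each day; the deep slab is nearly
# constant on this timescale, acting as the seasonal heat store.
```

    With only two prognostic temperatures per grid column, the scheme is cheap enough for a climate model, which is the efficiency argument of the abstract.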

  7. Research and Development Project Prioritization - Computer Model

    DTIC Science & Technology

    1980-04-01

    ...for aggregation of multiple criteria and rank-ordered requirements for product priorities. Reduced-length lists (down to ... Quantities of 50 and 51, respectively, were reduced by one each, without loss of generalization, to permit model computation. ... examples from the literature. The model then was demonstrated for an extensive R & D ... and generally failed to find aggregation methods that...

  8. Testing computational toxicology models with phytochemicals.

    PubMed

    Valerio, Luis G; Arvidson, Kirk B; Busta, Emily; Minnier, Barbara L; Kruhlak, Naomi L; Benz, R Daniel

    2010-02-01

    Computational toxicology employing quantitative structure-activity relationship (QSAR) modeling is an evidence-based predictive method being evaluated by regulatory agencies for risk assessment and scientific decision support for toxicological endpoints of interest such as rodent carcinogenicity. Computational toxicology is being tested for its usefulness to support the safety assessment of drug-related substances (e.g. active pharmaceutical ingredients, metabolites, impurities), indirect food additives, and other applied uses of value for protecting public health, including safety assessment of environmental chemicals. The specific use of QSAR as a chemoinformatic tool for estimating the rodent carcinogenic potential of phytochemicals present in botanicals, herbs, and natural dietary sources is investigated here by an external validation study, which is the most stringent scientific method of measuring predictive performance. The external validation statistics for predicting rodent carcinogenicity of 43 phytochemicals, using two computational software programs evaluated at the FDA, are discussed. One software program showed very good performance for predicting non-carcinogens (high specificity), but both exhibited poor performance in predicting carcinogens (sensitivity), which is consistent with the design of the models. When predictions were considered in combination with each other rather than based on any one software, the performance for sensitivity was enhanced. However, Chi-square values indicated that the overall predictive performance decreases when using the two computational programs with this particular data set. This study suggests that multiple complementary computational toxicology software programs need to be carefully selected to improve global QSAR predictions for this complex toxicological endpoint.
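
    The effect of combining two programs can be illustrated on made-up predictions (the labels below are invented, not the FDA phytochemical set): calling a chemical positive when either program flags it raises sensitivity at the cost of specificity, the trade-off the abstract describes.

```python
def confusion_stats(y_true, y_pred):
    """Sensitivity and specificity from paired 0/1 labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predictions from two QSAR programs on eight chemicals
# (1 = rodent carcinogen); invented numbers for illustration only.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
prog_a = [1, 0, 0, 0, 0, 0, 0, 1]
prog_b = [0, 1, 0, 0, 0, 0, 1, 0]
either = [1 if a or b else 0 for a, b in zip(prog_a, prog_b)]  # OR-combination

sens_a, spec_a = confusion_stats(y_true, prog_a)
sens_or, spec_or = confusion_stats(y_true, either)
# OR-combining the programs raises sensitivity but lowers specificity.
```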

  9. Dealing with Diversity in Computational Cancer Modeling

    PubMed Central

    Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.

    2013-01-01

    This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360

  10. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  11. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  12. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  13. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  14. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  15. Computer Model Of Fragmentation Of Atomic Nuclei

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.

    1995-01-01

    High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.

  17. Biological Agent Neutralization/Computational Modeling Studies

    DTIC Science & Technology

    2010-09-01

    Computational Results: Solution of the Navier-Stokes equations for the flow inside the device has been obtained with the GASP solver. 3rd order of... ...to temperatures from 165 C to 275 C for times between 25 ms and 100 ms. The data was used to anchor computational fluid dynamics (CFD) flow modeling of...

  18. Queuing theory models for computer networks

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. The impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed using them because of the lack of fine detail in the network traffic rates, traffic patterns, and the hardware used to implement the networks. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. This Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
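
    A minimal sketch of the kind of calculation such a spreadsheet encodes, assuming independent M/M/1 links along a path through the network (a Jackson-style approximation; the rates below are invented, not Ames network data):

```python
def mm1_response_time(arrival_rate, service_rate):
    """Average response (sojourn) time of an M/M/1 queue:
    T = 1 / (mu - lambda). Rates in messages per second."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

def network_response_time(links):
    """Approximate end-to-end delay by treating each hop as an independent
    M/M/1 queue and summing the per-link response times."""
    return sum(mm1_response_time(lam, mu) for lam, mu in links)

# A LAN segment, a gateway, and a backbone channel as (lambda, mu) pairs
# -- purely illustrative numbers.
path = [(80.0, 100.0), (50.0, 200.0), (30.0, 400.0)]
total_delay = network_response_time(path)
```

    Varying the arrival rates or channel capacities in `path` reproduces the kind of what-if analysis the spreadsheet models support for planning network expansion.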

  19. Computational models of natural language processing

    SciTech Connect

    Bara, B.G.; Guida, G.

    1984-01-01

    The main concern in this work is the illustration of models for natural language processing, and the discussion of their role in the development of computational studies of language. Topics covered include the following: competence and performance in the design of natural language systems; planning and understanding speech acts by interpersonal games; a framework for integrating syntax and semantics; knowledge representation and natural language: extending the expressive power of proposition nodes; viewing parsing as word sense discrimination: a connectionist approach; a propositional language for text representation; from topic and focus of a sentence to linking in a text; language generation by computer; understanding the Chinese language; semantic primitives or meaning postulates: mental models of propositional representations; narrative complexity based on summarization algorithms; using focus to constrain language generation; and towards an integral model of language competence.

  20. A computational model of bleb formation

    PubMed Central

    Strychalski, Wanda; Guy, Robert D.

    2013-01-01

    Blebbing occurs when the cytoskeleton detaches from the cell membrane, resulting in the pressure-driven flow of cytosol towards the area of detachment and the local expansion of the cell membrane. Recent interest has focused on cells that use blebbing for migrating through 3D fibrous matrices. In particular, metastatic cancer cells have been shown to use blebs for motility. A dynamic computational model of the cell is presented that includes mechanics of and the interactions between the intracellular fluid, the actin cortex and the cell membrane. The computational model is used to explore the relative roles in bleb formation time of cytoplasmic viscosity and drag between the cortex and the cytosol. A regime of values for the drag coefficient and cytoplasmic viscosity values that match bleb formation timescales is presented. The model results are then used to predict the Darcy permeability and the volume fraction of the cortex. PMID:22294562

  1. A computational continuum model of poroelastic beds

    PubMed Central

    Zampogna, G. A.

    2017-01-01

    Despite the ubiquity of fluid flows interacting with porous and elastic materials, we lack a validated non-empirical macroscale method for characterizing the flow over and through a poroelastic medium. We propose a computational tool to describe such configurations by deriving and validating a continuum model for the poroelastic bed and its interface with the above free fluid. We show that, using stress continuity condition and slip velocity condition at the interface, the effective model captures the effects of small changes in the microstructure anisotropy correctly and predicts the overall behaviour in a physically consistent and controllable manner. Moreover, we show that the performance of the effective model is accurate by validating with fully microscopic resolved simulations. The proposed computational tool can be used in investigations in a wide range of fields, including mechanical engineering, bio-engineering and geophysics. PMID:28413355

  2. A computational continuum model of poroelastic beds

    NASA Astrophysics Data System (ADS)

    Lācis, U.; Zampogna, G. A.; Bagheri, S.

    2017-03-01

    Despite the ubiquity of fluid flows interacting with porous and elastic materials, we lack a validated non-empirical macroscale method for characterizing the flow over and through a poroelastic medium. We propose a computational tool to describe such configurations by deriving and validating a continuum model for the poroelastic bed and its interface with the above free fluid. We show that, using stress continuity condition and slip velocity condition at the interface, the effective model captures the effects of small changes in the microstructure anisotropy correctly and predicts the overall behaviour in a physically consistent and controllable manner. Moreover, we show that the performance of the effective model is accurate by validating with fully microscopic resolved simulations. The proposed computational tool can be used in investigations in a wide range of fields, including mechanical engineering, bio-engineering and geophysics.

  4. Computational Modeling of Vortex Generators for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, R. V.

    2002-01-01

    In this work computational models were developed and used to investigate applications of vortex generators (VGs) to turbomachinery. The work was aimed at increasing the efficiency of compressor components designed for the NASA Ultra Efficient Engine Technology (UEET) program. Initial calculations were used to investigate the physical behavior of VGs. A parametric study of the effects of VG height was done using 3-D calculations of isolated VGs. A body force model was developed to simulate the effects of VGs without requiring complicated grids. The model was calibrated using 2-D calculations of the VG vanes and was validated using the 3-D results. Then three applications of VGs to a compressor rotor and stator were investigated: 1) The results of the 3-D calculations were used to simulate the use of small casing VGs used to generate rotor preswirl or counterswirl. Computed performance maps were used to evaluate the effects of VGs. 2) The body force model was used to simulate large part-span splitters on the casing ahead of the stator. Computed loss buckets showed the effects of the VGs. 3) The body force model was also used to investigate the use of tiny VGs on the stator suction surface for controlling secondary flows. Near-surface particle traces and exit loss profiles were used to evaluate the effects of the VGs.

  5. Computer Model Predicts the Movement of Dust

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A new computer model of the atmosphere can now pinpoint where global dust events come from and project where they are going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short-term day-to-day variations and long-term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by the computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center.

  6. Concepts to accelerate water balance model computation

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Casper, Markus; Gemmar, Peter

    2010-05-01

    Computation time of water balance models has decreased with the increasing performance of CPUs within the last decades. Often, these gains have been used to enhance the models, e.g., by increasing spatial resolution or by using smaller simulation time steps. During the last few years, CPU development has tended to focus on multi-core designs rather than simply being generally faster. Additionally, computer clusters and even computer clouds have become much more commonly available. All these facts again extend our degrees of freedom in simulating water balance models, if the models are able to use the computer infrastructure efficiently. In the following, we present concepts to optimize repeated runs in particular, and we discuss parallel computing opportunities in general. Surveyed model: In our examinations, we focused on the water balance model LARSIM. In this model, the catchment is subdivided into elements, each of which represents a certain section of a river and its contributory area. Each element is again subdivided into single compartments of homogeneous land use. During the simulation, the relevant hydrological processes are simulated individually for each compartment. The simulated runoff of all compartments leads into the river channel of the corresponding element. Finally, channel routing is simulated for all elements. Optimizing repeated runs: During a typical simulation, several input files have to be read before simulation starts: the model structure, the initial model state, and meteorological input files. Furthermore, some calculations have to be performed, such as interpolating meteorological values. Thus, e.g., the application of Monte Carlo methods will typically use the following algorithm: 1) choose parameters, 2) set parameters in control files, 3) run model, 4) save result, 5) repeat from step 1. Obviously, the third step always includes the previously mentioned steps of reading and preprocessing. Consequently, the model can be...
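
    The optimization implied by the abstract — not repeating the run-invariant read-and-preprocess work on every Monte Carlo iteration — can be sketched generically. The functions below are stand-ins, not LARSIM code; hoisting the invariant step out of the loop amortizes its cost over all realizations.

```python
import random

def load_and_preprocess():
    """Stand-in for reading the model structure, the initial state, and the
    meteorological inputs, and interpolating forcing data: the expensive
    part that is identical for every Monte Carlo realization."""
    return {"forcing": [0.5 * i for i in range(1000)]}

def run_model(params, prepared):
    """Stand-in for one water-balance simulation on pre-loaded inputs."""
    return sum(params["k"] * x for x in prepared["forcing"])

random.seed(42)
prepared = load_and_preprocess()  # done ONCE, outside the loop
results = []
for _ in range(100):
    params = {"k": random.uniform(0.1, 1.0)}     # step 1: choose parameters
    results.append(run_model(params, prepared))  # step 3 without re-reading inputs
```

    The same separation is what makes the runs easy to distribute across cores or cluster nodes: each worker receives `prepared` once and only exchanges parameter sets and results.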

  7. Computational Study of a Primitive Life Model

    NASA Astrophysics Data System (ADS)

    Andrecut, Mircea

    We present a computational study of a primitive life model. The calculation involves a discrete treatment of a partial differential equation, and some details of that problem are explained. We show that the investigated model is equivalent to a diffusively coupled logistic lattice. Bifurcation diagrams were calculated for different values of the control parameters. These diagrams show that the time dependence of the population of the investigated model exhibits transitions between ordered and chaotic behavior. We have also investigated pattern formation in this system.
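
    A diffusively coupled logistic lattice of the kind the model reduces to can be iterated directly. The parameters below are illustrative (one assumed coupling strength); the two values of the logistic parameter r land in the ordered and chaotic regimes, respectively, which is the transition the bifurcation diagrams display.

```python
import random

def step(lattice, r, eps):
    """Synchronous update of a diffusively coupled logistic lattice with
    periodic boundaries: apply the local map, then diffuse to neighbours."""
    n = len(lattice)
    f = [r * x * (1.0 - x) for x in lattice]
    return [(1.0 - eps) * f[i] + 0.5 * eps * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

def site_samples(r, eps=0.1, n=64, transient=500, keep=50, seed=1):
    """Iterate past a transient, then record one site's trajectory."""
    random.seed(seed)
    lattice = [random.random() for _ in range(n)]
    for _ in range(transient):
        lattice = step(lattice, r, eps)
    samples = []
    for _ in range(keep):
        lattice = step(lattice, r, eps)
        samples.append(lattice[0])
    return samples

ordered = site_samples(r=2.8)  # settles onto the fixed point 1 - 1/r
chaotic = site_samples(r=3.9)  # population keeps wandering irregularly
```

    Sweeping r and plotting the recorded samples against it reproduces a bifurcation diagram of the type described in the abstract.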

  8. Computational modeling of foveal target detection.

    PubMed

    Witus, Gary; Ellis, R Darin

    2003-01-01

    This paper presents the VDM2000, a computational model of target detection designed for use in military developmental test and evaluation settings. The model integrates research results from the fields of early vision, object recognition, and psychophysics. The VDM2000 is image based and provides a criterion-independent measure of target conspicuity, referred to as the vehicle metric (VM). A large data set of human responses to photographs of military vehicles in a field setting was used to validate the model. The VM adjusted by a single calibration parameter accounts for approximately 80% of the variance in the validation data. The primary application of this model is to predict detection of military targets in daylight with the unaided eye. The model also has application to target detection prediction using infrared night vision systems. The model has potential as a tool to evaluate the visual properties of more general task settings.

  9. Computational modeling of neurostimulation in brain diseases.

    PubMed

    Wang, Yujiang; Hutchings, Frances; Kaiser, Marcus

    2015-01-01

    Neurostimulation as a therapeutic tool has been developed and used for a range of different diseases such as Parkinson's disease, epilepsy, and migraine. However, it is not known why the efficacy of the stimulation varies dramatically across patients or why some patients suffer from severe side effects. This is largely due to the lack of mechanistic understanding of neurostimulation. Hence, theoretical computational approaches to address this issue are in demand. This chapter provides a review of mechanistic computational modeling of brain stimulation. In particular, we will focus on brain diseases, where mechanistic models (e.g., neural population models or detailed neuronal models) have been used to bridge the gap between cellular-level processes of affected neural circuits and the symptomatic expression of disease dynamics. We show how such models have been, and can be, used to investigate the effects of neurostimulation in the diseased brain. We argue that these models are crucial for the mechanistic understanding of the effect of stimulation, allowing for a rational design of stimulation protocols. Based on mechanistic models, we argue that the development of closed-loop stimulation is essential in order to avoid interference with healthy ongoing brain activity. Furthermore, patient-specific data, such as neuroanatomic information and connectivity profiles obtainable from neuroimaging, can be readily incorporated to address the clinical issue of variability in efficacy between subjects. We conclude that mechanistic computational models can and should play a key role in the rational design of effective, fully integrated, patient-specific therapeutic brain stimulation. © 2015 Elsevier B.V. All rights reserved.

  10. Molecular Sieve Bench Testing and Computer Modeling

    NASA Technical Reports Server (NTRS)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.

    1995-01-01

    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable, due to changes in the parameters that influence CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain the test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with a linear driving force for the 5X sorbent and pore diffusion for silica gel is then applied to the test data. A more complex, non-Darcian (two-dimensional) model has also been developed for simulation of the test data; it takes into account the effect of channeling on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
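    As a rough illustration of the one-dimensional plug-flow/linear-driving-force approach described above, the following sketch integrates an isothermal adsorption column with explicit finite differences and returns the breakthrough curve. All parameter values (velocity, LDF coefficient, equilibrium constant) are invented for demonstration; they are not the paper's fitted coefficients, and the original FORTRAN programs are far more complete.

```python
import numpy as np

def ldf_breakthrough(n_cells=40, t_end=2000.0, dt=0.05,
                     u=0.1, L=0.25, k_ldf=0.002, K_eq=500.0, c_in=1.0):
    """Isothermal 1-D plug-flow adsorption column with a linear-driving-force
    (LDF) uptake model, integrated by explicit finite differences.

    c : gas-phase concentration in each cell (arbitrary units)
    q : adsorbed-phase loading in each cell
        dq/dt = k_ldf * (K_eq*c - q)   (LDF rate expression)
        dc/dt = -u * dc/dz - dq/dt     (plug-flow balance; void fraction
                                        folded into K_eq for simplicity)
    Returns the outlet-concentration history (the breakthrough curve).
    """
    dz = L / n_cells
    c = np.zeros(n_cells)
    q = np.zeros(n_cells)
    outlet = []
    for _ in range(int(t_end / dt)):
        uptake = k_ldf * (K_eq * c - q)            # LDF driving force
        c_up = np.concatenate(([c_in], c[:-1]))    # upwind neighbour cells
        c = c + dt * (-u * (c - c_up) / dz - uptake)
        q = q + dt * uptake
        outlet.append(c[-1])
    return np.array(outlet)

out = ldf_breakthrough()
```

    The outlet concentration stays near zero while the bed adsorbs, then rises toward the inlet value as the mass transfer zone reaches the column exit, the qualitative behavior such column models are built to capture.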

  11. Computational Modeling of Pollution Transmission in Rivers

    NASA Astrophysics Data System (ADS)

    Parsaie, Abbas; Haghiabi, Amir Hamzeh

    2015-08-01

    Modeling of river pollution contributes to better management of water quality, which in turn improves human health. The advection-dispersion equation (ADE) is the governing equation for pollutant transmission in a river. Modeling pollution transmission involves numerical solution of the ADE and estimation of the longitudinal dispersion coefficient (LDC). In this paper, a novel approach is proposed for numerical modeling of pollution transmission in rivers: the finite volume method is used as the numerical solver, and an artificial neural network (ANN) is used as a soft computing technique, with the LDC predicted by the ANN serving as an input parameter for the numerical solution of the ADE. To validate the model's performance on a real engineering problem, pollutant transmission in the Severn River was simulated. Comparison of the final model results with measured data from the Severn River showed that the model performs well. Predicting the LDC with the ANN model significantly improved the accuracy of the computer simulation of pollution transmission in the river.
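    The numerical half of the approach described above can be sketched as an explicit finite-volume/upwind solution of the one-dimensional ADE. In this sketch the dispersion coefficient is simply passed in as a number; in the paper it would come from the trained ANN, which is not reproduced here, and all grid and flow values are illustrative assumptions.

```python
import numpy as np

def solve_ade(c0, u, D, dx, dt, n_steps):
    """Explicit finite-volume solution of the 1-D advection-dispersion
    equation  dc/dt + u*dc/dx = D*d2c/dx2  on a uniform periodic grid.

    Upwind differencing for advection (u > 0), central differencing for
    dispersion. Stability needs u*dt/dx <= 1 and D*dt/dx**2 <= 0.5.
    """
    c = c0.astype(float).copy()
    for _ in range(n_steps):
        adv = -u * (c - np.roll(c, 1)) / dx                       # upwind flux
        disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c += dt * (adv + disp)
    return c

x = np.linspace(0.0, 100.0, 200)
dx = x[1] - x[0]
c0 = np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2)   # pollutant pulse at x = 20
c = solve_ade(c0, u=0.5, D=0.3, dx=dx, dt=0.2, n_steps=100)
```

    After 20 time units the pulse has advected downstream to about x = 30 and flattened due to dispersion, while total pollutant mass is conserved by the finite-volume fluxes.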

  13. Computational continuum modeling of solder interconnects: Applications

    SciTech Connect

    Burchett, S.N.; Neilsen, M.K.; Frear, D.R.

    1997-04-01

    The most commonly used solder for electrical interconnections in electronic packages is the near-eutectic 60Sn-40Pb alloy. This alloy has a number of processing advantages (a suitable melting point of 183 °C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes the prediction of solder joint lifetime complex. A viscoplastic, microstructure-dependent constitutive model for solder, which is currently under development, was implemented into a finite element code. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of a mini ball grid array solder interconnect. In this paper, the constitutive model will first be briefly discussed. The results of computational studies to determine the thermomechanical response of mini ball grid array solder interconnects will then be presented.

  14. Computational modeling of material aging effects

    SciTech Connect

    Fang, H.E.

    1996-07-01

    Progress is being made in our efforts to develop computational models for predicting material property changes in weapon components due to aging. The first version of a two-dimensional lattice code for modeling thermomechanical fatigue, such as has been observed in solder joints on electronic components removed from the stockpile, has been written and tested. The code does a good qualitative job of representing intergranular and/or transgranular cracking in a polycrystalline material under thermomechanical deformation. The current progress is an encouraging start for our long-term effort to develop multi-level simulation capabilities, using high-performance computing technology, for predicting age-related effects on the reliability of weapons.

  15. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1993-01-01

    Over the past several years, it has been the primary goal of this grant to design and implement software to be used in the conceptual design of aerospace vehicles. The work carried out under this grant was performed jointly with members of the Vehicle Analysis Branch (VAB) of NASA LaRC, Computer Sciences Corp., and Vigyan Corp. This has resulted in the development of several packages and design studies. Primary among these are the interactive geometric modeling tool, the Solid Modeling Aerospace Research Tool (SMART), and the integration and execution tools provided by the Environment for Application Software Integration and Execution (EASIE). In addition, it is the purpose of the personnel of this grant to provide consultation in the areas of structural design, algorithm development, and software development and implementation, particularly in the areas of computer aided design, geometric surface representation, and parallel algorithms.

  16. A computer model of auditory stream segregation.

    PubMed

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
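    A toy sketch of the model's core ingredients — channel tuning standing in for the bandpass-filter bank, a "noisy" output, leaky integration of channel activity, and attention as a selection of the most active channel — illustrates the peripheral account. The two-channel reduction and every parameter value here are illustrative assumptions, not the published model: with widely separated tones the activity splits across two channels (two potential streams), while closely spaced tones excite a single channel.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream_model(tone_seq, centers, bandwidth=2.0, leak=0.7, noise=0.02):
    """Toy sketch: each 'channel' receives Gaussian-weighted excitation
    from the current tone (a stand-in for a bandpass filter bank) plus
    noise; a leaky integrator accumulates channel activity over time;
    'attention' selects the most active channel at each tone.
    Returns the attended channel index per tone."""
    act = np.zeros(len(centers))
    attended = []
    for f in tone_seq:
        drive = np.exp(-0.5 * ((f - centers) / bandwidth) ** 2)
        act = leak * act + drive + noise * rng.standard_normal(len(centers))
        attended.append(int(np.argmax(act)))
    return attended

centers = np.array([100.0, 130.0])                      # channel centre freqs
att_near = stream_model([100.0, 105.0] * 10, centers)   # small separation
att_far = stream_model([100.0, 130.0] * 10, centers)    # large separation
```

    With the small separation both tones drive the same channel, so attention never moves; with the large separation each tone drives its own channel, and both channels become active in turn.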

  17. Computer modeling and simulation of human movement.

    PubMed

    Pandy, M G

    2001-01-01

    Recent interest in using modeling and simulation to study movement is driven by the belief that this approach can provide insight into how the nervous system and muscles interact to produce coordinated motion of the body parts. With the computational resources available today, large-scale models of the body can be used to produce realistic simulations of movement that are an order of magnitude more complex than those produced just 10 years ago. This chapter reviews how the structure of the neuromusculoskeletal system is commonly represented in a multijoint model of movement, how modeling may be combined with optimization theory to simulate the dynamics of a motor task, and how model output can be analyzed to describe and explain muscle function. Some results obtained from simulations of jumping, pedaling, and walking are also reviewed to illustrate the approach.

  18. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  19. AMAR: A Computational Model of Autosegmental Phonology

    DTIC Science & Technology

    1993-10-01

    …the 8th International Joint Conference on Artificial Intelligence, 683-5. Koskenniemi, K. 1984. A general computational model for word-form recognition… Massachusetts Institute of Technology, Artificial Intelligence Laboratory, AI-TR 1450, 545 Technology Square, Cambridge, Massachusetts 02139. …To give the reader a feel for the workings of AMAR, this chapter will begin with a very simple example based on an artificial tone language with only t…

  20. Wild Fire Computer Model Helps Firefighters

    ScienceCinema

    Canfield, Jesse

    2016-07-12

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  1. Computational Biology: Modeling Chronic Renal Allograft Injury

    PubMed Central

    Stegall, Mark D.; Borrows, Richard

    2015-01-01

    New approaches are needed to develop more effective interventions to prevent long-term rejection of organ allografts. Computational biology provides a powerful tool to assess the large amount of complex data that is generated in longitudinal studies in this area. This manuscript outlines how our two groups are using mathematical modeling to analyze predictors of graft loss using both clinical and experimental data and how we plan to expand this approach to investigate specific mechanisms of chronic renal allograft injury. PMID:26284070

  2. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision. The spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as an image processing tool, and it is suggested that it could be used in signal processing or image processing applications.

  4. Computed structures of polyimide model compounds

    NASA Technical Reports Server (NTRS)

    Tai, H.; Phillips, D. H.

    1990-01-01

    Using a semi-empirical approach, a computer study was made of 8 model compounds of polyimides. The compounds represent subunits from which NASA Langley Research Center has successfully synthesized polymers for aerospace high-performance material applications, including one of the most promising, the LARC-TPI polymer. Three-dimensional graphic displays, as well as important molecular structure data pertaining to these 8 compounds, were obtained.

  5. Computational fluid dynamics modelling in cardiovascular medicine

    PubMed Central

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019

  7. Computational Modeling and Simulation of Developmental ...

    EPA Pesticide Factsheets

    Standard practice for assessing developmental toxicity is the observation of apical endpoints (intrauterine death, fetal growth retardation, structural malformations) in pregnant rats/rabbits following exposure during organogenesis. EPA’s computational toxicology research program (ToxCast) generated vast in vitro cellular and molecular effects data on >1858 chemicals in >600 high-throughput screening (HTS) assays. The diversity of assays has been increased for developmental toxicity with several HTS platforms, including the devTOX-quickPredict assay from Stemina Biomarker Discovery utilizing the human embryonic stem cell line (H9). Translating these HTS data into higher-order predictions of developmental toxicity is a significant challenge. Here, we address the application of computational systems models that recapitulate the kinematics of dynamical cell signaling networks (e.g., SHH, FGF, BMP, retinoids) in a CompuCell3D.org modeling environment. Examples include angiogenesis (angiodysplasia) and dysmorphogenesis. Being numerically responsive to perturbation, these models are amenable to data integration for systems toxicology and Adverse Outcome Pathways (AOPs). The AOP simulation outputs predict potential phenotypes based on the in vitro ToxCast HTS data. A heuristic computational intelligence framework that recapitulates the kinematics of dynamical cell signaling networks in the embryo, together with the in vitro profiling data, produces quantitative predictions.

  8. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    SciTech Connect

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, exit, connecting elbows, and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height, to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are exploited in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computational meshes, and solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  9. ADGEN: ADjoint GENerator for computer models

    SciTech Connect

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the low-level waste community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 that of the reference model for each response of interest, far less than would be required to determine these derivatives by parameter perturbations. The automation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming, and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
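    The economy of the adjoint method that ADGEN automates — one backward sweep yielding derivatives of a response with respect to every parameter, instead of one perturbed run per parameter — can be illustrated on a tiny hand-differentiated model. The function and its decomposition below are invented purely for illustration; ADGEN itself operates on FORTRAN source, not Python.

```python
import numpy as np

def model(p):
    """Tiny stand-in 'model': a scalar response built from four parameters."""
    a = p[0] * p[1]
    b = np.sin(a) + p[2] ** 2
    return b * p[3]

def adjoint_gradient(p):
    """Hand-derived adjoint (reverse-mode) pass for `model`: a forward sweep
    saves intermediates, then one backward sweep propagates the response
    sensitivity to all parameters at once."""
    # forward sweep, saving intermediates
    a = p[0] * p[1]
    b = np.sin(a) + p[2] ** 2
    r = b * p[3]
    # backward sweep: bar variables hold d(response)/d(node)
    r_bar = 1.0
    b_bar = r_bar * p[3]
    a_bar = b_bar * np.cos(a)
    grad = np.array([a_bar * p[1],       # dr/dp0
                     a_bar * p[0],       # dr/dp1
                     b_bar * 2 * p[2],   # dr/dp2
                     b])                 # dr/dp3
    return r, grad

p = np.array([0.3, 0.7, 1.2, 2.0])
r, g = adjoint_gradient(p)
```

    Checking `g` against central finite differences confirms the backward sweep; for a model with thousands of parameters, this is the difference between one adjoint solve and thousands of perturbed runs.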

  10. Computational fire modeling for aircraft fire research

    SciTech Connect

    Nicolette, V.F.

    1996-11-01

    This report summarizes work performed by Sandia National Laboratories for the Federal Aviation Administration. The technical issues involved in fire modeling for aircraft fire research are identified, as well as computational fire tools for addressing those issues, and the research which is needed to advance those tools in order to address long-range needs. Fire field models are briefly reviewed, and the VULCAN model is selected for further evaluation. Calculations are performed with VULCAN to demonstrate its applicability to aircraft fire problems, and also to gain insight into the complex problem of fires involving aircraft. Simulations are conducted to investigate the influence of fire on an aircraft in a cross-wind. The interaction of the fuselage, wind, fire, and ground plane is investigated. Calculations are also performed utilizing a large eddy simulation (LES) capability to describe the large-scale turbulence instead of the more common k-{epsilon} turbulence model. Additional simulations are performed to investigate the static pressure and velocity distributions around a fuselage in a cross-wind, with and without fire. The results of these simulations provide qualitative insight into the complex interaction of a fuselage, fire, wind, and ground plane. Reasonable quantitative agreement is obtained in the few cases for which data or other modeling results exist. Finally, VULCAN is used to quantify the impact of simplifying assumptions inherent in a risk assessment compatible fire model developed for open pool fire environments. The assumptions are seen to be of minor importance for the particular problem analyzed. This work demonstrates the utility of using a fire field model for assessing the limitations of simplified fire models. In conclusion, the application of computational fire modeling tools herein provides both qualitative and quantitative insights into the complex problem of aircraft in fires.

  11. Computational acoustic modeling of cetacean vocalizations

    NASA Astrophysics Data System (ADS)

    Gurevich, Michael Dixon

    A framework for computational acoustic modeling of hypothetical vocal production mechanisms in cetaceans is presented. As a specific example, a model of a proposed source in the larynx of odontocetes is developed. Whales and dolphins generate a broad range of vocal sounds, but the exact mechanisms they use are not conclusively understood. In the fifty years since it has become widely accepted that whales can and do make sound, how they do so has remained particularly confounding. Cetaceans' highly divergent respiratory anatomy, along with the difficulty of internal observation during vocalization have contributed to this uncertainty. A variety of acoustical, morphological, ethological and physiological evidence has led to conflicting and often disputed theories of the locations and mechanisms of cetaceans' sound sources. Computational acoustic modeling has been used to create real-time parametric models of musical instruments and the human voice. These techniques can be applied to cetacean vocalizations to help better understand the nature and function of these sounds. Extensive studies of odontocete laryngeal morphology have revealed vocal folds that are consistently similar to a known but poorly understood acoustic source, the ribbon reed. A parametric computational model of the ribbon reed is developed, based on simplified geometrical, mechanical and fluid models drawn from the human voice literature. The physical parameters of the ribbon reed model are then adapted to those of the odontocete larynx. With reasonable estimates of real physical parameters, both the ribbon reed and odontocete larynx models produce sounds that are perceptually similar to their real-world counterparts, and both respond realistically under varying control conditions. Comparisons of acoustic features of the real-world and synthetic systems show a number of consistencies. While this does not on its own prove that either model is conclusively an accurate description of the source…

  12. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    EPA Pesticide Factsheets

    Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung, with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predicted for the rabbit.

  13. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High-temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in the aerospace, automotive, and machine tool industries and in high-speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, deposition inside cavities and on complex objects, and infiltration within preforms, called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large-scale manufacturing is limited. In this regard, computational modeling of the processes is valuable, since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  14. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    EPA Pesticide Factsheets

    Journal Article Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predict

  15. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in the aerospace, automotive, and machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates; coating of fibers, cavity interiors, and complex objects; and infiltration within preforms, called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, our ability to optimize processes, and scale-up to large scale manufacturing are limited. In this regard, computational modeling of the processes is valuable, since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  16. Stochastic Computations in Cortical Microcircuit Models

    PubMed Central

    Maass, Wolfgang

    2013-01-01

    Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
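
A toy illustration of the marginalization task mentioned in the abstract: a Gibbs sampler estimating a marginal probability from an internally stored distribution. The three-unit network and its weights are hypothetical, chosen only to make the example small; this is not the paper's microcircuit model.

```python
import random, math

# Pairwise weights of a tiny Boltzmann-style distribution over three
# binary units: p(x) ~ exp(sum_ij W_ij * x_i * x_j). Illustrative values.
W = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -0.5}

def gibbs_marginal(unit, steps=20000, seed=0):
    """Estimate P(x[unit] = 1) by Gibbs sampling of network states."""
    rng = random.Random(seed)
    x = [0, 0, 0]
    ones = 0
    for t in range(steps):
        i = t % 3                          # sweep units in turn
        # Local field on unit i from its current neighbors.
        field = sum(w * x[b if a == i else a]
                    for (a, b), w in W.items() if i in (a, b))
        x[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-field)) else 0
        ones += x[unit]
    return ones / steps

# Exhaustive enumeration gives P(x1 = 1) ~ 0.7517 for these weights;
# the sampler should land close to that.
print(round(gibbs_marginal(1), 2))
```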

  17. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720
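
The sufficient statistics that drive an ERGM can be illustrated with a few lines of code. This sketch computes two common directed-network statistics (edge count and mutual-dyad count) on a small hypothetical network, not the Sampson's monks data used in the article.

```python
# Two standard ERGM sufficient statistics for a directed 0/1 adjacency
# matrix: the number of edges and the number of mutual (reciprocated) dyads.

def ergm_statistics(adj):
    """Return (edges, mutual dyads) for a directed 0/1 adjacency matrix."""
    n = len(adj)
    edges = sum(adj[i][j] for i in range(n) for j in range(n) if i != j)
    mutual = sum(adj[i][j] and adj[j][i]
                 for i in range(n) for j in range(i + 1, n))
    return edges, mutual

# A 4-node illustrative network (rows are senders, columns receivers).
adj = [
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]
print(ergm_statistics(adj))  # (5, 2)
```

In an ERGM, the probability of a network is proportional to the exponential of a weighted sum of statistics like these.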

  18. Computer model of in situ leaching hydrology

    SciTech Connect

    Not Available

    1981-05-01

    A computer program developed by the US Bureau of Mines simulates the hydrologic activity associated with in situ mining. Its purpose is to determine the site-specific flow behavior of leachants and groundwater during the development, production, and restoration phases of an in situ leaching operation. Model capabilities include arbitrary well patterns and pumping schedules, partially penetrating well screens, directionally anisotropic permeability, and natural groundwater flow, in either leaky or nonleaky confined aquifers and under steady-state or time-dependent flow conditions. In addition to extensive laboratory testing, the Twin Cities Research Center has closely monitored the application of this model at three different mine sites; at each site, the solution breakthrough time and the hydraulic head at observation wells were used to tune the model. The model was then used satisfactorily to assess the suitability of various well configurations and pumping schedules, in terms of fluid dispersion within the ore pod and fluid excursions into the surrounding aquifer. (JMT)
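
The steady-state, confined-aquifer case the abstract describes can be sketched by superposing Thiem solutions over an arbitrary well pattern. This is a hedged illustration of the underlying hydraulics, not the Bureau of Mines code; all well positions, rates, and aquifer parameters are hypothetical.

```python
import math

def thiem_drawdown(obs, wells, T, R=1000.0):
    """Steady-state drawdown at an observation point by superposition.

    Each well contributes Q / (2*pi*T) * ln(R / r).
    obs   -- (x, y) observation point [m]
    wells -- list of (x, y, Q): position [m] and rate [m^3/day]
             (Q > 0 extracts; Q < 0 injects, e.g. leachant injection)
    T     -- aquifer transmissivity [m^2/day]
    R     -- radius of influence [m]
    """
    s = 0.0
    for (xw, yw, Q) in wells:
        r = math.hypot(obs[0] - xw, obs[1] - yw)
        s += Q / (2.0 * math.pi * T) * math.log(R / r)
    return s

# Five-spot pattern: four injection wells around one recovery well,
# with balanced total injection and extraction.
wells = [(0.0, 0.0, 200.0),                       # central extraction well
         (50.0, 50.0, -50.0), (-50.0, 50.0, -50.0),
         (50.0, -50.0, -50.0), (-50.0, -50.0, -50.0)]
print(round(thiem_drawdown((10.0, 0.0), wells, T=100.0), 3))
```

Evaluating the superposed field at many points is what lets such a model flag fluid excursions beyond the intended leach zone.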

  19. A Neural Computational Model of Incentive Salience

    PubMed Central

    Zhang, Jun; Berridge, Kent C.; Tindell, Amy J.; Smith, Kyle S.; Aldridge, J. Wayne

    2009-01-01

    Incentive salience is a motivational property with ‘magnet-like’ qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of ‘wanting’ and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered ‘wanting’ only by

  20. A neural computational model of incentive salience.

    PubMed

    Zhang, Jun; Berridge, Kent C; Tindell, Amy J; Smith, Kyle S; Aldridge, J Wayne

    2009-07-01

    Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by incorporating
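
The core computational idea above — a cached learned value modulated on the fly by physiological state, without relearning — can be sketched in a few lines. The multiplicative gain form and all numbers here are illustrative assumptions, not the authors' fitted model.

```python
# Hedged sketch: cue-triggered 'wanting' as a previously learned (cached)
# value scaled by a current physiological gain factor kappa. No relearning
# of the cached value is needed when the state changes.

def incentive_salience(cached_value, kappa):
    """Dynamic cue value: cached value scaled by physiological state."""
    return cached_value * kappa

cached = 0.6                       # value of a salt-paired CS learned earlier
normal, salt_appetite = 1.0, 4.0   # illustrative kappa: normal vs. depleted
print(incentive_salience(cached, normal),
      incentive_salience(cached, salt_appetite))  # 0.6 2.4
```

The point of the example is that the same cached value yields very different 'wanting' in different states, mirroring the salt-appetite and sensitization data described above.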

  1. Computational Model of Fluorine-20 Experiment

    NASA Astrophysics Data System (ADS)

    Chuna, Thomas; Voytas, Paul; George, Elizabeth; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Weisshaar, Dirk; Paulauskas, Stanley; Ban, Gilles; Flechard, Xavier; Lienard, Etienne

    2015-10-01

    The Conserved Vector Current (CVC) hypothesis of the standard model of the electroweak interaction predicts there is a contribution to the shape of the spectrum in the beta-minus decay of 20F related to a property of the analogous gamma decay of excited 20Ne. To provide a strong test of the CVC hypothesis, a precise measurement of the 20F beta decay spectrum will be taken at the National Superconducting Cyclotron Laboratory. This measurement uses unconventional measurement techniques in that 20F will be implanted directly into a scintillator. As the emitted electrons interact with the detector material, bremsstrahlung interactions occur and the escape of the resultant photons will distort the measured spectrum. Thus, a Monte Carlo simulation has been constructed using EGSnrc radiation transport software. This computational model's intended use is to quantify and correct for distortion in the observed beta spectrum due, primarily, to the aforementioned bremsstrahlung. The focus of this presentation is twofold: the analysis of the computational model itself and the results produced by the model. Wittenberg University.

  2. A Computational Model of Cerebral Cortex Folding

    PubMed Central

    Nie, Jingxin; Guo, Lei; Li, Gang; Faraco, Carlos; Miller, L Stephen; Liu, Tianming

    2010-01-01

    The geometric complexity and variability of the human cerebral cortex has long intrigued the scientific community. As a result, quantitative description of cortical folding patterns and the understanding of underlying folding mechanisms have emerged as important research goals. This paper presents a computational 3-dimensional geometric model of cerebral cortex folding initialized by MRI data of a human fetal brain and deformed under the governance of a partial differential equation modeling cortical growth. By applying different simulation parameters, our model is able to generate folding convolutions and shape dynamics of the cerebral cortex. The simulations of this 3D geometric model provide computational experimental support to the following hypotheses: 1) Mechanical constraints of the skull regulate the cortical folding process. 2) The cortical folding pattern is dependent on the global cell growth rate of the whole cortex. 3) The cortical folding pattern is dependent on relative rates of cell growth in different cortical areas. 4) The cortical folding pattern is dependent on the initial geometry of the cortex. PMID:20167224

  3. Computer model of tetrahedral amorphous diamond

    NASA Astrophysics Data System (ADS)

    Djordjević, B. R.; Thorpe, M. F.; Wooten, F.

    1995-08-01

    We computer generate a model of amorphous diamond using the Wooten-Weaire method, with fourfold coordination everywhere. We investigate two models: one where four-membered rings are allowed and the other where the four-membered rings are forbidden; each model consisting of 4096 atoms. Starting from the perfect diamond crystalline structure, we first randomize the structure by introducing disorder through random bond switches at a sufficiently high temperature. Subsequently, the temperature is reduced in stages, and the topological and geometrical relaxation of the structure takes place using the Keating potential. After a long annealing process, a random network of comparatively low energy is obtained. We calculate the pair distribution function, mean bond angle, rms angular deviation, rms bond length, rms bond-length deviation, and ring statistics for the final relaxed structures. We minimize the total strain energy by adjusting the density of the sample. We compare our results with similar computer-generated models for amorphous silicon, and with experimental measurement of the structure factor for (predominantly tetrahedral) amorphous carbon.
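
The Keating potential used for the relaxation step above has a standard valence-force form; this sketch evaluates it for one atom and its bonded neighbors. The force constants alpha and beta are placeholders, not the parameters used in the paper.

```python
import itertools, math

def keating_energy(center, neighbors, d, alpha=1.0, beta=1.0):
    """Keating energy of one atom with its bonded neighbors.

    Bond-stretch term: (3*alpha / 16 / d^2) * sum_i (r_i.r_i - d^2)^2
    Bond-bend term:    (3*beta  /  8 / d^2) * sum_{i<j} (r_i.r_j + d^2/3)^2
    """
    bonds = [[n[k] - center[k] for k in range(3)] for n in neighbors]
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    stretch = sum((dot(b, b) - d * d) ** 2 for b in bonds)
    bend = sum((dot(bi, bj) + d * d / 3.0) ** 2
               for bi, bj in itertools.combinations(bonds, 2))
    return 3 * alpha / (16 * d * d) * stretch + 3 * beta / (8 * d * d) * bend

# Perfect tetrahedral coordination (diamond-like): energy should vanish,
# since all bond lengths equal d and all bond angles have cosine -1/3.
d = 1.54  # C-C bond length in angstroms
dirs = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
nbrs = [tuple(d * c / math.sqrt(3) for c in v) for v in dirs]
print(keating_energy((0.0, 0.0, 0.0), nbrs, d))  # ~0 for the ideal geometry
```

Summing this energy over every atom and minimizing it is what drives the topological and geometrical relaxation described in the abstract.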

  4. Computational Modeling of Distal Protection Filters

    PubMed Central

    Siewiorek, Gail M.; Finol, Ender A.

    2010-01-01

    Purpose: To quantify the relationship between velocity and pressure gradient in a distal protection filter (DPF) and to determine the feasibility of modeling a DPF as a permeable surface using computational fluid dynamics (CFD). Methods: Four DPFs (Spider RX, FilterWire EZ, RX Accunet, and Emboshield) were deployed in a single tube representing the internal carotid artery (ICA) in an in vitro flow apparatus. Steady flow of a blood-like solution was circulated with a peristaltic pump and compliance chamber. The flow rate through each DPF was measured at physiological pressure gradients, and permeability was calculated using Darcy's equation. Two computational models representing the RX Accunet were created: an actual representation of the filter geometry and a circular permeable surface. The permeability of RX Accunet was assigned to the surface, and CFD simulations were conducted with both models using experimentally derived boundary conditions. Results: Spider RX had the largest permeability while RX Accunet was the least permeable filter. CFD modeling of RX Accunet and the permeable surface resulted in excellent agreement with the experimental measurements of velocity and pressure gradient. However, the permeable surface model did not accurately reproduce local flow patterns near the DPF deployment site. Conclusion: CFD can be used to model DPFs, yielding global flow parameters measured with bench-top experiments. CFD models of the detailed DPF geometry could be used for “virtual testing” of device designs under simulated flow conditions, which would have potential benefits in decreasing the number of design iterations leading up to in vivo testing. PMID:21142490
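
The permeability calculation described in the Methods follows directly from Darcy's law, k = Q·mu·L / (A·dP). The numbers below are illustrative placeholders, not the measured values for any of the four filters.

```python
import math

def darcy_permeability(Q, mu, L, A, dP):
    """Permeability [m^2] from flow rate Q [m^3/s], viscosity mu [Pa*s],
    filter thickness L [m], cross-section A [m^2], pressure drop dP [Pa]."""
    return Q * mu * L / (A * dP)

# Illustrative values for a filter deployed in a 5-mm ICA tube model.
A = math.pi * 0.0025 ** 2            # lumen cross-section, 5 mm diameter
k = darcy_permeability(Q=5e-6, mu=3.5e-3, L=1e-4, A=A, dP=500.0)
print(f"{k:.3e} m^2")
```

Assigning a value of k obtained this way to a permeable surface is what lets the simplified CFD model reproduce the global pressure-flow behavior of the real filter geometry.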

  5. Computer simulations of statistical models of earthquakes

    NASA Astrophysics Data System (ADS)

    Xia, Junchao

    The frequency-size distribution of earthquake fault systems in nature has been observed to exhibit Gutenberg-Richter (power-law) scaling. Computer simulations of earthquake fault models have been performed to understand the mechanisms for this and other observed behavior. Understanding driven dissipative systems is also important in physics and related areas. A simple model that contains the essential physics of earthquake faults is the Burridge-Knopoff spring-block model, which incorporates inertia and a velocity-weakening friction force. To save computer time, the Burridge-Knopoff model has been simplified by neglecting inertia and assuming a moving block is overdamped. These cellular automata models show scaling behavior, but only for long-range stress transfer. I generalized the original nearest-neighbor Burridge-Knopoff model to incorporate a variable interaction range and did simulations to see whether the long-range Burridge-Knopoff model exhibits behavior similar to the long-range cellular automata models. I found that the Burridge-Knopoff model exhibits richer behavior than the cellular automata models, depending on the range R of the stress transfer and the friction parameter alpha, which controls how quickly the friction force decreases with increasing velocity. My main result is that there exist two scaling regimes with qualitatively different behavior. One regime is for alpha ≲ 1 and R ≫ 1 and is associated with an equilibrium spinodal critical point, consistent with the long-range cellular automata models. The other regime corresponds to alpha ≳ 1 and R = 1 and might be associated with another critical point. This latter interpretation has been given by previous workers, but the nature of the critical point needs more study. I also simulated the long-range Olami-Feder-Christensen cellular automata model. In the mean-field limit, the scaling of the distribution of the number of blocks in an event can be understood by spinodal nucleation theory.
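
The Olami-Feder-Christensen cellular automaton mentioned above is simple enough to sketch directly. This is the standard nearest-neighbor version on a small open-boundary lattice with illustrative parameter values, not the long-range variant studied in the thesis.

```python
import random

def ofc_event(stress, threshold=1.0, alpha=0.2, eps=1e-9):
    """Drive the most-stressed site to failure and relax the lattice.

    alpha is the fraction of a toppling site's stress passed to each of its
    four neighbors (alpha < 0.25 makes the model dissipative).
    Returns the number of topplings in the resulting avalanche.
    """
    n = len(stress)
    # Uniform drive: raise all sites until the most-stressed one fails.
    gap = threshold - max(max(row) for row in stress)
    for i in range(n):
        for j in range(n):
            stress[i][j] += gap
    topplings = 0
    unstable = [(i, j) for i in range(n) for j in range(n)
                if stress[i][j] >= threshold - eps]
    while unstable:
        i, j = unstable.pop()
        if stress[i][j] < threshold - eps:
            continue                      # already relaxed via a duplicate
        s, stress[i][j] = stress[i][j], 0.0
        topplings += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:    # open boundaries dissipate
                stress[ni][nj] += alpha * s
                if stress[ni][nj] >= threshold - eps:
                    unstable.append((ni, nj))
    return topplings

random.seed(1)
lattice = [[random.random() for _ in range(16)] for _ in range(16)]
sizes = [ofc_event(lattice) for _ in range(200)]
print(max(sizes))   # largest avalanche observed in this short run
```

Histogramming `sizes` over many more events is how the frequency-size (Gutenberg-Richter-like) statistics of such models are measured.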

  6. Computational Modeling and Validation for Hypersonic Inlets

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1996-01-01

    Hypersonic inlet research activity at NASA is reviewed. The basis for the paper is the experimental tests performed with three inlets: the NASA Lewis Research Center Mach 5, the McDonnell Douglas Mach 12, and the NASA Langley Mach 18. Both three-dimensional PNS and NS codes have been used to compute the flow within the three inlets. Modeling assumptions in the codes involve the turbulence model, the nature of the boundary layer, shock wave-boundary layer interaction, and the flow spilled to the outside of the inlet. Use of the codes and the experimental data are helping to develop a clearer understanding of the inlet flow physics and to focus on the modeling improvements required in order to arrive at validated codes.

  7. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids into the analysis. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  8. Modeling Reality - How Computers Mirror Life

    NASA Astrophysics Data System (ADS)

    Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona

    2005-01-01

    The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems, but who does not possess a specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well known simple concepts: cellular automata via the Game of Life, Shannon's formula via the game of twenty questions, game theory via a television quiz, etc. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning even the more complex topics enjoyable.

  9. Computer Model Used to Help Customize Medicine

    NASA Technical Reports Server (NTRS)

    Stauber, Laurel J.; Veris, Jenise

    2001-01-01

    Dr. Radhakrishnan, a researcher at the NASA Glenn Research Center, in collaboration with biomedical researchers at the Case Western Reserve University School of Medicine and Rainbow Babies and Children's Hospital, is developing computational models of human physiology that quantitate metabolism and its regulation, in both healthy and pathological states. These models can help predict the effects of stresses or interventions, such as drug therapies, and contribute to the development of customized medicine. Customized medical treatment protocols can give more comprehensive evaluations and lead to more specific and effective treatments for patients, reducing treatment time and cost. Commercial applications of this research may help the pharmaceutical industry identify therapeutic needs and predict drug-drug interactions. Researchers will be able to study human metabolic reactions to particular treatments while in different environments as well as establish more definite blood metabolite concentration ranges in normal and pathological states. These computational models may help NASA provide the background for developing strategies to monitor and safeguard the health of astronauts and civilians in space stations and colonies. They may also help to develop countermeasures that ameliorate the effects of both acute and chronic space exposure.

  10. Computational model of a whole tree combustor

    SciTech Connect

    Bryden, K.M.; Ragland, K.W.

    1993-12-31

    A preliminary computational model has been developed for the whole tree combustor and compared to test results. In the simulation model presented, hardwood logs 15 cm in diameter are burned in a 4-m-deep fuel bed. Solid and gas temperature, solid and gas velocity, and CO, CO{sub 2}, H{sub 2}O, HC, and O{sub 2} profiles are calculated. This deep, fixed-bed combustor obtains high energy release rates per unit area due to the high inlet air velocity and extended reaction zone. The lowest portion of the overall bed is an oxidizing region, and the remainder of the bed acts as a gasification and drying region. The overfire air region completes the combustion. Approximately 40% of the energy is released in the lower oxidizing region. The wood consumption rate obtained from the computational model is 4,110 kg/m{sup 2}-hr, which matches well the consumption rate of 3,770 kg/m{sup 2}-hr observed during the peak test period of the Aurora, MN test. The predicted heat release rate is 16 MW/m{sup 2} (5.0*10{sup 6} Btu/hr-ft{sup 2}).
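
The quoted heat-release-rate figures can be checked with a quick unit conversion: 16 MW/m{sup 2} expressed in Btu/hr-ft{sup 2} should come out near the stated 5.0*10{sup 6}.

```python
# Convert 16 MW/m^2 to Btu/hr-ft^2 using standard conversion factors.
W_PER_BTU_PER_HR = 0.29307107    # 1 Btu/hr expressed in watts
FT2_PER_M2 = 10.7639             # square feet per square meter

q_si = 16e6                      # heat release rate, W/m^2
q_english = q_si / W_PER_BTU_PER_HR / FT2_PER_M2
print(f"{q_english:.2e} Btu/hr-ft^2")
```

The result is about 5.07*10{sup 6} Btu/hr-ft{sup 2}, consistent with the rounded value in the abstract.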

  11. Computational modeling of Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Grazioli, D.; Magri, M.; Salvadori, A.

    2016-12-01

    This review focuses on energy storage materials modeling, with particular emphasis on Li-ion batteries. Theoretical and computational analyses not only provide a better understanding of the intimate behavior of actual batteries under operational and extreme conditions, but they may tailor new materials and shape new architectures in a complementary way to experimental approaches. Modeling can therefore play a very valuable role in the design and lifetime prediction of energy storage materials and devices. Batteries are inherently multi-scale, in space and time. The macro-structural characteristic lengths (the thickness of a single cell, for instance) are order of magnitudes larger than the particles that form the microstructure of the porous electrodes, which in turn are scale-separated from interface layers at which atomistic intercalations occur. Multi-physics modeling concepts, methodologies, and simulations at different scales, as well as scale transition strategies proposed in the recent literature are here revised. Finally, computational challenges toward the next generation of Li-ion batteries are discussed.

  12. Dual-code quantum computation model

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.

  13. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
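
For the single-workload closed networks the abstract describes, exact Mean Value Analysis (MVA) is the standard solution technique; the sketch below is that textbook algorithm, not a reconstruction of the SR-52 G/H-matrix program. Service demands, population, and think time are illustrative.

```python
def mva(service_demands, n_jobs, think_time=0.0):
    """Exact MVA for a closed, single-class queueing network (n_jobs >= 1).

    service_demands -- D_k = visit ratio * service time at device k [s]
    think_time      -- average terminal think time Z [s]
    Returns (system throughput, per-device mean queue lengths).
    """
    K = len(service_demands)
    q = [0.0] * K                          # queue lengths at population 0
    for n in range(1, n_jobs + 1):
        # Residence time at each device: R_k = D_k * (1 + Q_k(n-1)).
        r = [D * (1.0 + q[k]) for k, D in enumerate(service_demands)]
        X = n / (think_time + sum(r))      # system throughput
        q = [X * rk for rk in r]           # Little's law per device
    return X, q

# Two devices (CPU and disk), 5 interactive users, 10 s think time.
X, q = mva([0.2, 0.5], n_jobs=5, think_time=10.0)
print(round(X, 3))
```

A useful sanity check is Little's law at the system level: throughput times think time plus the total queue length must equal the job population.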

  14. Model-based neuroimaging for cognitive computing.

    PubMed

    Poznanski, Roman R

    2009-09-01

    The continuity of the mind is suggested to mean the continuous spatiotemporal dynamics arising from the electrochemical signature of the neocortex: (i) globally through volume transmission in the gray matter as fields of neural activity, and (ii) locally through extrasynaptic signaling between fine distal dendrites of cortical neurons. If the continuity of dynamical systems across spatiotemporal scales defines a stream of consciousness then intentional metarepresentations as templates of dynamic continuity allow qualia to be semantically mapped during neuroimaging of specific cognitive tasks. When interfaced with a computer, such model-based neuroimaging requiring new mathematics of the brain will begin to decipher higher cognitive operations not possible with existing brain-machine interfaces.

  15. Computational social dynamic modeling of group recruitment.

    SciTech Connect

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, the abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.

  16. A Computational Model of Multidimensional Shape

    PubMed Central

    Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo

    2010-01-01

    We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data. PMID:21057668

  17. A Computational Model of Multidimensional Shape.

    PubMed

    Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo; Mio, Washington

    2010-08-01

    We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data.

  18. Computational modeling of the amphibian thyroid axis ...

    EPA Pesticide Factsheets

    In vitro screening of chemicals for bioactivity together with computational modeling are beginning to replace animal toxicity testing in support of chemical risk assessment. To facilitate this transition, an amphibian thyroid axis model has been developed to describe thyroid homeostasis during Xenopus laevis pro-metamorphosis. The model simulates the dynamic relationships of normal thyroid biology throughout this critical period of amphibian development and includes molecular initiating events (MIEs) for thyroid axis disruption to allow in silico simulations of hormone levels following chemical perturbations. One MIE that has been formally described using the adverse outcome pathway (AOP) framework is thyroperoxidase (TPO) inhibition. The goal of this study was to refine the model parameters and validate model predictions by generating dose-response and time-course biochemical data following exposure to three TPO inhibitors, methimazole, 6-propylthiouracil and 2-mercaptobenzothiazole. Key model variables including gland and blood thyroid hormone (TH) levels were compared to empirical values measured in biological samples at 2, 4, 7 and 10 days following initiation of exposure at Nieuwkoop and Faber (NF) stage 54 (onset of pro-metamorphosis). The secondary objective of these studies was to relate depleted blood TH levels to delayed metamorphosis, the adverse apical outcome. Delayed metamorphosis was evaluated by continuing exposure with a subset of larvae until a

  19. Computational modeling of the amphibian thyroid axis ...

    EPA Pesticide Factsheets

    In vitro screening of chemicals for bioactivity together with computational modeling are beginning to replace animal toxicity testing in support of chemical risk assessment. To facilitate this transition, an amphibian thyroid axis model has been developed to describe thyroid homeostasis during Xenopus laevis pro-metamorphosis. The model simulates the dynamic relationships of normal thyroid biology throughout this critical period of amphibian development and includes molecular initiating events (MIEs) for thyroid axis disruption to allow in silico simulations of hormone levels following chemical perturbations. One MIE that has been formally described using the adverse outcome pathway (AOP) framework is thyroperoxidase (TPO) inhibition. The goal of this study was to refine the model parameters and validate model predictions by generating dose-response and time-course biochemical data following exposure to three TPO inhibitors, methimazole, 6-propylthiouracil and 2-mercaptobenzothiazole. Key model variables including gland and blood thyroid hormone (TH) levels were compared to empirical values measured in biological samples at 2, 4, 7 and 10 days following initiation of exposure at Nieuwkoop and Faber (NF) stage 54 (onset of pro-metamorphosis). The secondary objective of these studies was to relate depleted blood TH levels to delayed metamorphosis, the adverse apical outcome. Delayed metamorphosis was evaluated by continuing exposure with a subset of larvae until a

  20. Teaching 1H NMR Spectrometry Using Computer Modeling.

    ERIC Educational Resources Information Center

    Habata, Yoichi; Akabori, Sadatoshi

    2001-01-01

    Molecular modeling by computer is used to display stereochemistry, molecular orbitals, structure of transition states, and progress of reactions. Describes new ideas for teaching 1H NMR spectroscopy using computer modeling. (Contains 12 references.) (ASK)

  1. Computer modeling of thermoelectric generator performance

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
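
    The conduction, Peltier, and Joule terms named above combine into a standard steady-state heat balance for a single couple. A minimal sketch (not the DEGRA 2 code; all symbols and values are illustrative):

```python
# Steady-state heat balance for one thermoelectric couple (illustrative sketch;
# parameter names and values are hypothetical, not DEGRA 2 inputs).
def couple_power(S, rho, kappa, L, A, T_hot, T_cold, I):
    """Hot-junction heat input and electrical output for a single couple.

    S: couple Seebeck coefficient (V/K), rho: electrical resistivity (ohm m),
    kappa: thermal conductivity (W/m K), L, A: leg length (m) and area (m^2),
    I: load current (A).
    """
    dT = T_hot - T_cold
    R = rho * L / A                 # leg electrical resistance
    K = kappa * A / L               # leg thermal conductance
    # Peltier pumping + conduction leak - half the Joule heat returned hot-side
    q_hot = S * I * T_hot + K * dT - 0.5 * I**2 * R
    p_out = S * I * dT - I**2 * R   # Seebeck EMF times current minus Joule loss
    return q_hot, p_out
```

    The conversion efficiency is then simply p_out / q_hot; Thomson heating, leg segmentation, and sublimation corrections are the refinements a code like DEGRA 2 adds on top of this balance.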

  2. Computer modeling of thermoelectric generator performance

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.

  3. Computational models of intergroup competition and warfare.

    SciTech Connect

    Letendre, Kenneth; Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make great sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease is an important predictor of the frequency of civil conflict, and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  4. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  5. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  6. Electromagnetic physics models for parallel computing architectures

    SciTech Connect

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  7. A computational model of motor neuron degeneration.

    PubMed

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L F

    2014-08-20

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. A COMPUTATIONAL MODEL OF MOTOR NEURON DEGENERATION

    PubMed Central

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.

    2014-01-01

    SUMMARY To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365

  9. Multiscale computational modelling of the heart

    NASA Astrophysics Data System (ADS)

    Smith, N. P.; Nickerson, D. P.; Crampin, E. J.; Hunter, P. J.

    A computational framework is presented for integrating the electrical, mechanical and biochemical functions of the heart. Finite element techniques are used to solve the large-deformation soft tissue mechanics using orthotropic constitutive laws based on the measured fibre-sheet structure of myocardial (heart muscle) tissue. The reaction-diffusion equations governing electrical current flow in the heart are solved on a grid of deforming material points which access systems of ODEs representing the cellular processes underlying the cardiac action potential. Navier-Stokes equations are solved for coronary blood flow in a system of branching blood vessels embedded in the deforming myocardium and the delivery of oxygen and metabolites is coupled to the energy-dependent cellular processes. The framework presented here for modelling coupled physical conservation laws at the tissue and organ levels is also appropriate for other organ systems in the body and we briefly discuss applications to the lungs and the musculo-skeletal system. The computational framework is also designed to reach down to subcellular processes, including signal transduction cascades and metabolic pathways as well as ion channel electrophysiology, and we discuss the development of ontologies and markup language standards that will help link the tissue and organ level models to the vast array of gene and protein data that are now available in web-accessible databases.

  10. Radiative cooling computed for model atmospheres

    NASA Astrophysics Data System (ADS)

    Eriksson, T. S.; Granqvist, C. G.

    1982-12-01

    The radiative cooling power and temperature drop of horizontal surfaces are evaluated on the basis of calculations of spectral radiance from model atmospheres representative of various climatic conditions. Calculations of atmospheric radiance from the zenith and from off-zenith angles were performed with the LOWTRAN 5 atmospheric transmittance/radiance computer code (Kneizys et al., 1980) for model atmospheres corresponding to the tropics, midlatitude summer, midlatitude winter, subarctic summer, subarctic winter and the 1962 U.S. standard atmosphere. Comparison of the computed spectral radiance curves with the radiative fluxes from blackbody surfaces and ideal infrared-selective surfaces (having zero reflectance in the 8-13 micron range and unity reflectance elsewhere) at various ambient-surface temperature differences shows cooling powers to lie between 58 and 113 W/sq m at ambient temperature for a freely radiating surface, with maximum temperature differences of 11-21 C for a blackbody and 18-33 C for an infrared-selective surface. Both cooling powers and temperature differences were higher for surfaces exposed only to atmospheric zenith radiance. In addition, water vapor content is found to strongly affect radiative cooling, while ozone and aerosol contents had little effect.
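
    The gap between surface emission and sky radiance that drives these cooling powers can be illustrated with a crude gray-sky approximation (a stand-in for the spectral LOWTRAN calculation; the sky emissivity value here is an assumption):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(T_surface, T_ambient, eps_surface=1.0, eps_sky=0.8):
    """Net radiative cooling of a horizontal surface under a gray-sky model.

    eps_sky is an assumed effective sky emissivity; the paper instead uses
    spectrally resolved LOWTRAN radiances.
    """
    emitted = eps_surface * SIGMA * T_surface**4
    absorbed = eps_surface * eps_sky * SIGMA * T_ambient**4
    return emitted - absorbed

# A black surface at a 283 K ambient still cools (~73 W/m^2 with eps_sky = 0.8),
# the same order as the 58-113 W/sq m range quoted above.
```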

  11. Computational continuum modeling of solder interconnects

    SciTech Connect

    Burchett, S.N.; Neilsen, M.K.; Frear, D.R.; Stephens, J.J.

    1997-03-01

    The most commonly used solder for electrical interconnections in electronic packages is the near eutectic 60Sn-40Pb alloy. This alloy has a number of processing advantages (suitable melting point of 183 C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes prediction of solder joint lifetime complex. A viscoplastic, microstructure dependent, constitutive model for solder which is currently in development was implemented into a finite element code. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of various leadless chip carrier solder interconnects to determine the effects of variations in geometry and loading. In this paper, the constitutive model will first be briefly discussed. The results of computational studies to determine the effect of geometry and loading variations on leadless chip carrier solder interconnects then will be presented.

  12. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  13. Computational Modeling and Simulation of Genital Tubercle ...

    EPA Pesticide Factsheets

    Hypospadias is a developmental defect of urethral tube closure that has a complex etiology. Here, we describe a multicellular agent-based model of genital tubercle development that simulates urethrogenesis from the urethral plate stage to urethral tube closure in differentiating male embryos. The model, constructed in CompuCell3D, implemented spatially dynamic signals from SHH, FGF10, and androgen signaling pathways. These signals modulated stochastic cell behaviors, such as differential adhesion, cell motility, proliferation, and apoptosis. Urethral tube closure was an emergent property of the model that was quantitatively dependent on SHH and FGF10 induced effects on mesenchymal proliferation and endodermal apoptosis, ultimately linked to androgen signaling. In the absence of androgenization, simulated genital tubercle development defaulted to the female condition. Intermediate phenotypes associated with partial androgen deficiency resulted in incomplete closure. Using this computer model, complex relationships between urethral tube closure defects and disruption of underlying signaling pathways could be probed theoretically in multiplex disturbance scenarios and modeled into probabilistic predictions for individual risk for hypospadias and potentially other developmental defects of the male genital tubercle. We identify the minimal molecular network that determines the outcome of male genital tubercle development in mice.

  14. Computer Modeling of Non-Isothermal Crystallization

    NASA Technical Reports Server (NTRS)

    Kelton, K. F.; Narayan, K. Lakshmi; Levine, L. E.; Cull, T. C.; Ray, C. S.

    1996-01-01

    A realistic computer model for simulating isothermal and non-isothermal phase transformations proceeding by homogeneous and heterogeneous nucleation and interface-limited growth is presented. A new treatment for particle size effects on the crystallization kinetics is developed and is incorporated into the numerical model. Time-dependent nucleation rates, size-dependent growth rates, and surface crystallization are also included. Model predictions are compared with experimental measurements of DSC/DTA peak parameters for the crystallization of lithium disilicate glass as a function of particle size, Pt doping levels, and water content. The quantitative agreement that is demonstrated indicates that the numerical model can be used to extract key kinetic data from easily obtained calorimetric data. The model can also be used to probe nucleation and growth behavior in regimes that are otherwise inaccessible. Based on a fit to data, an earlier prediction that the time-dependent nucleation rate in a DSC/DTA scan can rise above the steady-state value at a temperature higher than the peak in the steady-state rate is demonstrated.
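
    For the isothermal, steady-nucleation limit of such nucleation-and-growth kinetics, the classical Johnson-Mehl-Avrami-Kolmogorov expression gives the transformed fraction in closed form; a sketch in nondimensional units (this is the textbook limit, not the paper's full time-dependent numerical model):

```python
import math

def kjma_fraction(t, I_rate, u):
    """Transformed fraction X(t) for steady nucleation rate I_rate and constant
    growth velocity u, assuming 3D spherical grains:
    X = 1 - exp(-(pi/3) * I_rate * u**3 * t**4).
    The paper's model adds transient nucleation, size-dependent growth, and
    particle-size effects numerically on top of this limit."""
    return 1.0 - math.exp(-(math.pi / 3.0) * I_rate * u**3 * t**4)
```

    In nondimensional units (I_rate = u = 1), half-transformation occurs near t ≈ 0.9, and the sigmoidal X(t) differentiates into the familiar DSC/DTA exotherm shape.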

  15. Statistics, Computation, and Modeling in Cosmology

    NASA Astrophysics Data System (ADS)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground and space based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group: advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems, including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and
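
    The random-walk Metropolis update at the core of such MCMC algorithms fits in a few lines; a minimal 1-D sketch (illustrative only, not the working group's implementation):

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=1.0, seed=0):
    """Minimal 1-D random-walk Metropolis sampler; advanced samplers extend
    this same accept/reject kernel to high-dimensional Gaussian random fields."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with posterior ratio
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Draws from a standard Gaussian log-posterior:
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```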

  16. Computational modeling of intraocular gas dynamics

    NASA Astrophysics Data System (ADS)

    Noohi, P.; Abdekhodaie, M. J.; Cheng, Y. L.

    2015-12-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, both pure and diluted with air, was considered as the injected gas. The results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Head positioning for the treatment of the retinal tear also influenced the rate of gas absorption. Moreover, the tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear yielded a larger tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle with pure SF6 is 1.4 times larger than with SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.
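
    The expansion of a pure-SF6 bubble can be caricatured with a two-gas exchange model in which dissolved air enters the bubble faster than SF6 leaves (a toy ODE, not the study's CFD model; all rate constants are hypothetical):

```python
# Toy two-gas bubble model (illustrative only; the study solves full CFD).
# Dissolved air equilibrates into the bubble quickly (rate k_air) while SF6
# washes out slowly (rate k_sf6), so a pure-SF6 bubble expands before shrinking.
def simulate_bubble(n_sf6, n_air, k_sf6, k_air, n_air_eq, dt, steps):
    volumes = []
    for _ in range(steps):
        n_air += k_air * (n_air_eq - n_air) * dt  # air influx toward equilibrium
        n_sf6 -= k_sf6 * n_sf6 * dt               # first-order SF6 loss
        volumes.append(n_sf6 + n_air)             # volume proportional to moles
    return volumes

# With these hypothetical rates the pure-SF6 bubble more than doubles in
# volume before receding, qualitatively matching the expansion reported above:
vols = simulate_bubble(1.0, 0.0, 0.01, 0.5, 1.5, 0.1, 200)
```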

  17. Computational modeling of intraocular gas dynamics.

    PubMed

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-12-18

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, both pure and diluted with air, was considered as the injected gas. The results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Head positioning for the treatment of the retinal tear also influenced the rate of gas absorption. Moreover, the tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear yielded a larger tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle with pure SF6 is 1.4 times larger than with SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  18. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify the technique for presenting modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general-education and worldview functions of computer science call for additional research of the…

  19. Continuum and computational modeling of flexoelectricity

    NASA Astrophysics Data System (ADS)

    Mao, Sheng

    Flexoelectricity refers to the linear coupling of strain gradient and electric polarization. Early studies of this subject mostly looked at liquid crystals and biomembranes. Recently, the advent of nanotechnology has revealed its importance also in solid structures, such as flexible electronics, thin films, energy harvesters, etc. The energy storage function of a flexoelectric solid depends not only on polarization and strain, but also on strain gradient. This is the basis on which we formulate a consistent model of flexoelectric solids under small deformation. We derive a higher-order Navier equation for linear isotropic flexoelectric materials which resembles that of Mindlin in gradient elasticity. Closed-form solutions can be obtained for problems such as beam bending, a pressurized tube, etc. Flexoelectric coupling can be enhanced in the vicinity of defects due to strong gradients and decays away in the far field. We quantify this expectation by computing elastic and electric fields near different types of defects in flexoelectric solids. For point defects, we recover some well-known results of non-local theories. For dislocations, we make connections with experimental results on NaCl, ice, etc. For cracks, we perform a crack-tip asymptotic analysis and the results share features from gradient elasticity and piezoelectricity. We compute the J integral and use it for determining fracture criteria. Conventional finite element methods formulated solely on displacement are inadequate to treat flexoelectric solids due to the higher-order governing equations. Therefore, we introduce a mixed formulation which uses displacement and displacement-gradient as separate variables. Their known relation is constrained in a weighted integral sense. We derive a variational formulation for boundary value problems for piezo- and/or flexoelectric solids. We validate this computational framework against exact solutions. With this method more complex problems, including a plate with an elliptical hole

  20. Preliminary Phase Field Computational Model Development

    SciTech Connect

    Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
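
    The Landau-Lifshitz-Gilbert dynamics underlying these phase-field models can be sketched for a single macrospin; the full model solves the same equation at every grid point with an effective field built from exchange, anisotropy, and magnetostatic contributions (the parameter values below are illustrative):

```python
import numpy as np

def llg_step(m, H, dt, gamma=1.76e11, alpha=0.1):
    """One explicit Euler step of the Landau-Lifshitz-Gilbert equation,
    dm/dt = -gamma/(1+alpha^2) * (m x H + alpha * m x (m x H)),
    for a unit magnetization m in an effective field H (tesla). gamma
    (gyromagnetic ratio, rad s^-1 T^-1) and alpha (damping) are illustrative."""
    pre = -gamma / (1.0 + alpha**2)
    mxH = np.cross(m, H)
    m = m + dt * pre * (mxH + alpha * np.cross(m, mxH))
    return m / np.linalg.norm(m)  # |m| is conserved by the continuous dynamics

# A macrospin started perpendicular to the field precesses and damps into
# alignment with it:
m = np.array([1.0, 0.0, 0.0])
H = np.array([0.0, 0.0, 1.0])
for _ in range(10000):
    m = llg_step(m, H, dt=1e-13)
```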

  1. Final technical report for DOE Computational Nanoscience Project: Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Cummings, P. T.

    2010-02-08

    This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.

  2. Modeling groundwater flow on massively parallel computers

    SciTech Connect

    Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.

    1994-12-31

    The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They also will demonstrate the code's scalability.
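
    The simplest of the listed preconditioners (diagonal, i.e., Jacobi scaling) fits into conjugate gradients as below. This is a generic sketch, not the LLNL code, and the 1D Poisson matrix is a stand-in for a discretized flow operator.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Conjugate gradients with diagonal (Jacobi) preconditioning:
    the preconditioner solve z = M^-1 r is just an elementwise scaling."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson matrix: symmetric positive definite, like a saturated-flow operator.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))   # residual norm, far below tol
```

    Swapping in s-step Jacobi, Chebyshev, or multigrid only changes how z is computed from r; the outer iteration is unchanged.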

  3. Comprehensive silicon solar cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

    The development of an efficient, comprehensive Si solar cell modeling program that has the capability of simulation accuracy of 5 percent or less is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) analytical method used to represent the physical system; (2) phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) modularized simulation program with respect to structures that may be analyzed, addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.

  4. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  5. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1992-01-01

    The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand alone analysis codes that result in the streamlining of the exchange of data between programs, reducing errors and improving efficiency.

  6. A computer model of the earth's magnetosphere

    NASA Astrophysics Data System (ADS)

    Ogino, Tatsuki; Walker, Raymond J.; Ashour-Abdalla, Maha

    1988-03-01

    The interaction of the solar wind with the earth magnetosphere is investigated theoretically by means of three-dimensional MHD simulations, with a focus on the effects of changes in the Bz component of the IMF. A high-resolution (0.5 earth radii) version of the model of Ogino et al. (1986) is employed, and the results are presented in a series of computer-generated maps and diagrams and characterized in detail. Bz of -5 nT is found to be associated with dipolar magnetic-field lines near the earth and very concave lines in the magnetotail, while Bz of +5 nT produces a narrow finger of closed field lines extending into the polar cap. Both IMF orientations have sunward convection near the noon-midnight meridian and region-1-type field-aligned currents on both sides of the plasma-sheet extension.

  7. A computer model of the earth's magnetosphere

    NASA Technical Reports Server (NTRS)

    Ogino, Tatsuki; Walker, Raymond J.; Ashour-Abdalla, Maha

    1988-01-01

    The interaction of the solar wind with the earth magnetosphere is investigated theoretically by means of three-dimensional MHD simulations, with a focus on the effects of changes in the Bz component of the IMF. A high-resolution (0.5 earth radii) version of the model of Ogino et al. (1986) is employed, and the results are presented in a series of computer-generated maps and diagrams and characterized in detail. Bz of -5 nT is found to be associated with dipolar magnetic-field lines near the earth and very concave lines in the magnetotail, while Bz of +5 nT produces a narrow finger of closed field lines extending into the polar cap. Both IMF orientations have sunward convection near the noon-midnight meridian and region-1-type field-aligned currents on both sides of the plasma-sheet extension.

  8. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  9. A Computational Model of Fraction Arithmetic.

    PubMed

    Braithwaite, David W; Pyke, Aryn A; Siegler, Robert S

    2017-04-27

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it with the problems from a widely used textbook series. The simulation generated many phenomena of children's fraction arithmetic performance through a small number of common learning mechanisms operating on a biased input set. The biases were not unique to this textbook series (they were present in 2 other textbook series as well), nor were the phenomena unique to a particular sample of children (they were present in another sample as well). Among other phenomena, the model predicted the high difficulty of fraction division, variable strategy use by individual children and on individual problems, relative frequencies of different types of strategy errors on different types of problems, and variable effects of denominator equality on the four arithmetic operations. The model also generated nonintuitive predictions regarding the relative difficulties of several types of problems and the potential effectiveness of a novel instructional approach. Perhaps the most general lesson of the findings is that the statistical distribution of problems that learners encounter can influence mathematics learning in powerful and nonintuitive ways. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Computational Aspects of N-Mixture Models

    PubMed Central

    Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S

    2015-01-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
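
    The role of the bound K can be made concrete by evaluating the single-site N-mixture likelihood directly. The sketch below uses illustrative parameter values (not the paper's tortoise data) and shows the truncated log-likelihood still moving as K grows when detection probability is small, which is why default bounds can bias abundance estimates.

```python
import math

def nmix_loglik(counts, lam, p, K):
    """Log-likelihood of a single-site N-mixture model (Poisson abundance,
    binomial detection), truncating the infinite sum over latent N at K."""
    log_terms = []
    for N in range(max(counts), K + 1):
        lt = -lam + N * math.log(lam) - math.lgamma(N + 1)   # log Poisson(N; lam)
        for y in counts:                                      # log Binomial(y; N, p)
            lt += (math.lgamma(N + 1) - math.lgamma(y + 1) - math.lgamma(N - y + 1)
                   + y * math.log(p) + (N - y) * math.log(1 - p))
        log_terms.append(lt)
    m = max(log_terms)                                        # log-sum-exp for stability
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

counts = [3, 2, 4]                 # replicated counts at one site
for K in (10, 50, 400):
    print(K, nmix_loglik(counts, lam=30.0, p=0.1, K=K))
# With small p, latent abundances well beyond a small default K still
# carry likelihood mass, so the truncated likelihood depends on K.
```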

  11. Computational model of heterogeneous heating in melanin

    NASA Astrophysics Data System (ADS)

    Kellicker, Jason; DiMarzio, Charles A.; Kowalski, Gregory J.

    2015-03-01

    Melanin particles often present as an aggregate of smaller melanin pigment granules and have a heterogeneous surface morphology. When irradiated with light within the absorption spectrum of melanin, these heterogeneities produce measurable concentrations of the electric field that result in temperature gradients from thermal effects that are not seen with spherical or ellipsoidal modeling of melanin. Modeling melanin without taking into consideration the heterogeneous surface morphology yields results that underestimate the strongest signals or overestimate their spatial extent. We present a new technique to image phase changes induced by heating using a computational model of melanin that exhibits these surface heterogeneities. From this analysis, we demonstrate the heterogeneous energy absorption and resulting heating that occurs at the surface of the melanin granule that is consistent with three-photon absorption. Using the three-photon fluorescence as a beacon, we propose a method for detecting the extents of the melanin granule using photothermal microscopy to measure the phase changes resulting from the heating of the melanin.

  12. A computational learning model for metrical phonology.

    PubMed

    Dresher, B E; Kaye, J D

    1990-02-01

    One of the major challenges to linguistic theory is the solution of what has been termed the "projection problem". Simply put, linguistics must account for the fact that starting from a data base that is both unsystematic and relatively small, a human child is capable of constructing a grammar that mirrors, for all intents and purposes, the adult system. In this article we shall address ourselves to the question of the learnability of a postulated subsystem of phonological structure: the stress system. We shall describe a computer program which is designed to acquire this subpart of linguistic structure. Our approach follows the "principles and parameters" model of Chomsky (1981a, b). This model is particularly interesting from both a computational point of view and with respect to the development of learning theories. We encode the relevant aspects of universal grammar (UG)--those aspects of linguistic structure that are presumed innate and thus present in every linguistic system. The learning process consists of fixing a number of parameters which have been shown to underlie stress systems and which should, in principle, lead the learner to the postulation of the system from which the primary linguistic data (i.e., the input to the learner) is drawn. We go on to explore certain formal and substantive properties of this learning system. Questions such as cross-parameter dependencies, determinism, subsets, and incremental versus all-at-once learning are raised and discussed in the article. The issues raised by this study provide another perspective on the formal structure of stress systems and the learnability of parameter systems in general.

  13. Gravothermal Star Clusters - Theory and Computer Modelling

    NASA Astrophysics Data System (ADS)

    Spurzem, Rainer

    2010-11-01

    In the George Darwin lecture, delivered to the British Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote of the evolution of stellar systems that it can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk will show how this physical concept has inspired theoretical modeling of star clusters in the following decades up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable state of equilibrium of a gravitating star cluster. The trend to local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was detected later. Here the state-of-the-art of modeling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (Fokker-Planck approximation) will be reviewed. Recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also very successfully been used to study dense star clusters around massive black holes in galactic nuclei and even (in a few cases) relativistic supermassive dense objects in centres of galaxies (here again briefly touching one of the many research fields of V.A. Ambartsumian). In the present era of high-speed supercomputing, in which direct N-body simulations of star clusters are tractable, we will show that such direct modeling supports and proves the concept of the statistical models based on the Fokker-Planck theory, and that both theoretical concepts and direct computer simulations are necessary to support each other and make scientific progress in the study of star cluster evolution.

  14. Getting Mental Models and Computer Models to Cooperate

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Roseborough, J.; Charney, L.; Mendel, M.

    1984-01-01

    A qualitative theory of supervisory control is outlined wherein the mental models of one or more human operators are related to the knowledge representations within automatic controllers (observers, estimators) and operator decision aids (expert systems, advice-givers). Methods of quantifying knowledge and the calibration of one knowledge representation to another (human, computer, or objective truth) are discussed. Ongoing experiments in the use of decision aids for exploring one's own objective function or exploring system constraints and control strategies are described.

  15. Modeling Human-Computer Decision Making with Covariance Structure Analysis.

    ERIC Educational Resources Information Center

    Coovert, Michael D.; And Others

    Arguing that sufficient theory exists about the interplay between human information processing, computer systems, and the demands of various tasks to construct useful theories of human-computer interaction, this study presents a structural model of human-computer interaction and reports the results of various statistical analyses of this model.…

  16. Model-based parameter estimation in electromagnetic computer modeling

    NASA Astrophysics Data System (ADS)

    Demarest, Kenneth R.

    1989-04-01

    Modeling in Computational Electromagnetics (CEM) can be a numerically demanding exercise. There are essentially two factors that contribute to this situation. One is the need to describe the propagation of the electromagnetic field via the Maxwell curl equations, Green's function, mode expansions, or ray and geometrical optics. It is in this part of the problem that a source-field relationship is quantitatively developed. The other is the subsequent need to invert the source-field relationship to proceed from prescribed existing fields and known sources to the induced sources that result and the fields they consequently produce. A moment-method solution, based on an integral equation formulation, embodies both of these factors. There are basically two paths by which the computer times involved in CEM applications might be reduced. One would be the development of alternate formulations that reduce the time required for either of the activities listed above, or that eliminate the need for it completely. The geometrical theory of diffraction is one example of this path. The other would be the development of more efficient numerical approaches for implementing the moment-method model. Under this contract we have investigated several means of reducing the computation time involved in the applications of integral equation, moment-method modeling.

  17. Enhanced pre-computed finite element models for surgical simulation.

    PubMed

    Zhong, Hualiang; Wachowiak, Mark P; Peters, Terry M

    2005-01-01

    Soft tissue modeling is an important component in effective surgical simulation systems. A pre-computed finite element method based on elastic models is well suited to modeling soft tissue deformation. This paper addresses two principal issues: the flexibility of the pre-computed FE method and the approximation approach to non-linear elastic models. We describe a dynamic mechanism of the reconfiguration of the contacted nodes and the fixed boundary, without re-computing the inverse of the global stiffness matrix. The flexibility of the pre-computed models is described for both linear and non-linear elastic models.
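
    One standard way to reconfigure constraints without re-inverting a precomputed stiffness matrix is a rank-one Sherman-Morrison update; the paper's actual reconfiguration mechanism may differ, so this is an illustrative sketch. Here a single degree of freedom is fixed via a penalty term, updating the stored inverse in O(n^2) instead of O(n^3).

```python
import numpy as np

def constrain_dof(K_inv, i, penalty=1e8):
    """Add a penalty term k * e_i e_i^T that fixes DOF i, updating the
    precomputed inverse via the Sherman-Morrison identity in O(n^2)."""
    col = K_inv[:, i].copy()
    return K_inv - penalty * np.outer(col, K_inv[i, :]) / (1.0 + penalty * K_inv[i, i])

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
K = A @ A.T + 6 * np.eye(6)          # a stand-in SPD "stiffness" matrix
K_inv = np.linalg.inv(K)             # precomputed once, offline

# Update the inverse for a newly fixed DOF, then compare against re-inverting.
K_inv_upd = constrain_dof(K_inv, 2)
e2 = np.eye(6)[2]
K_direct = np.linalg.inv(K + 1e8 * np.outer(e2, e2))
print(np.max(np.abs(K_inv_upd - K_direct)))   # agreement to numerical precision
```

    Releasing a contacted node is the same update with a negative penalty, which is what makes dynamic reconfiguration cheap at simulation time.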

  18. Computational Modeling of Uranium Hydriding and Complexes

    SciTech Connect

    Balasubramanian, K; Siekhaus, W J; McLean, W

    2003-02-03

    et al. have studied U-hydriding in ultrahigh vacuum and obtained the linear rate data over a wide range of temperatures and pressures. They found reversible hydrogen sorption on the UH{sub 3} reaction product from kinetic effects at 21 C. This demonstrates restarting of the hydriding process in the presence of UH{sub 3} reaction product. DeMint and Leckey have shown that Si impurities dramatically accelerate the U-hydriding rates. We report recent results of relativistic computations, ranging from complete active space multi-configuration self-consistent field (CAS-MCSCF) calculations to multi-reference configuration interaction (MRSDCI) computations that included up to 50 million configurations, for the modeling of uranium hydriding with cluster models.

  19. Computer modelling of metal - oxide interfaces

    NASA Astrophysics Data System (ADS)

    Purton, J.; Parker, S. C.; Bullett, D. W.

    1997-07-01

    We have used atomistic simulations to model oxide - metal interfaces. We have, for the first time, allowed the atoms on both sides of the interface to relax. The efficiency of the computational method means that calculations can be performed on complex interfaces containing several thousand atoms and do not require an arbitrary definition of the image plane to model the electrostatics across the dielectric discontinuity. We demonstrate the viability of the approach and the effect of relaxation on a range of MgO - Ag interfaces. Defective and faceted interfaces, as well as the ideal case, have been studied. The latter was chosen for comparison with previous theoretical calculations and experimental results. The wetting angle and work of adhesion for MgO{100} - Ag{100} are in reasonable agreement with experiment. As with ab initio electronic structure calculations the silver atoms have been shown to favour the position above the oxygen site.

  20. Computational modeling of acute myocardial infarction.

    PubMed

    Sáez, P; Kuhl, E

    2016-01-01

    Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocytes death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step toward simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size.
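
    The continuum theory of multiplicative growth invoked here decomposes the deformation gradient into elastic and growth parts. As a hedged sketch of the generic framework (the paper's specific constitutive choices may differ), wall thinning can be written as a transmural contraction:

```latex
% Multiplicative decomposition of the deformation gradient
F = F^{e} \, F^{g}
% Illustrative growth tensor for infarct-induced wall thinning:
% contraction by a factor \vartheta along the transmural normal n
F^{g} = I + (\vartheta - 1)\, n \otimes n , \qquad 0 < \vartheta \le 1
% Stress is computed from the elastic part alone:
F^{e} = F \, (F^{g})^{-1}
```

    Evolving the growth variable in time, driven by the local biochemical state, is what lets such models track the short-term adaptation after infarction.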

  1. Computational and Organotypic Modeling of Microcephaly ...

    EPA Pesticide Factsheets

    Microcephaly is associated with reduced cortical surface area and ventricular dilations. Many genetic and environmental factors precipitate this malformation, including prenatal alcohol exposure and maternal Zika infection. This complexity motivates the engineering of computational and experimental models to probe the underlying molecular targets, cellular consequences, and biological processes. We describe an Adverse Outcome Pathway (AOP) framework for microcephaly derived from literature on all gene-, chemical-, or viral- effects and brain development. Overlap with NTDs is likely, although the AOP connections identified here focused on microcephaly as the adverse outcome. A query of the Mammalian Phenotype Browser database for ‘microcephaly’ (MP:0000433) returned 85 gene associations; several function in microtubule assembly and the centrosome cycle regulated by microcephalin (MCPH1), a gene for primary microcephaly in humans. The developing ventricular zone is the likely target. In this zone, neuroprogenitor cells (NPCs) self-replicate during the 1st trimester setting brain size, followed by neural differentiation of the neocortex. Recent studies with human NPCs confirmed infectivity with Zika virions invoking critical cell loss (apoptosis) of precursor NPCs; similar findings have been shown with fetal alcohol or methylmercury exposure in rodent studies, leading to mathematical models of NPC dynamics in size determination of the ventricular zone. A key event

  2. Computer modeling of complete IC fabrication process

    NASA Astrophysics Data System (ADS)

    Dutton, Robert W.

    1987-05-01

    The development of fundamental algorithms for process and device modeling as well as novel integration of the tools for advanced Integrated Circuit (IC) technology design is discussed. The development of the first complete 2D process simulator, SUPREM 4, is reported. The algorithms are discussed as well as application to local-oxidation and extrinsic diffusion conditions which occur in CMOS and BiCMOS technologies. The evolution of 1D (SEDAN) and 2D (PISCES) device analysis is discussed. The application of SEDAN to a variety of non-silicon technologies (GaAs and HgCdTe) are considered. A new multi-window analysis capability for PISCES which exploits Monte Carlo analysis of hot carriers has been demonstrated and used to characterize a variety of silicon MOSFET and GaAs MESFET effects. A parallel computer implementation of PISCES has been achieved using a Hypercube architecture. The PISCES program has been used for a range of important device studies including: latchup, analog switch analysis, MOSFET capacitance studies and bipolar transient device analysis for ECL gates. The program is broadly applicable to RAM and BiCMOS technology analysis and design. In the analog switch technology area this research effort has produced a variety of important modeling advances.

  3. Computational modeling of acute myocardial infarction

    PubMed Central

    Sáez, P.; Kuhl, E.

    2015-01-01

    Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocytes death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step towards simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size. PMID:26583449

  4. Random matrix model of adiabatic quantum computing

    SciTech Connect

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-05-15

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
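
    The nearest-neighbor spacing statistics underlying the Brody analysis are easy to reproduce for the two limiting ensembles. This sketch uses a crude unfolding and generic random matrices rather than the paper's interpolated Hamiltonians; it contrasts Wigner-type level repulsion with Poissonian spacings.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_spacings(evals):
    """Nearest-neighbor spacings of the central half of a spectrum,
    crudely unfolded by rescaling to unit mean spacing."""
    e = np.sort(evals)
    mid = e[len(e) // 4 : 3 * len(e) // 4]   # keep the bulk, where the density is smooth
    s = np.diff(mid)
    return s / s.mean()

# Chaotic limit: Gaussian orthogonal ensemble (Wigner-type repulsion).
n = 1000
A = rng.standard_normal((n, n))
goe = np.linalg.eigvalsh((A + A.T) / 2)

# Regular limit: independent (Poissonian) level positions.
poisson = np.cumsum(rng.exponential(size=n))

for name, ev in [("GOE", goe), ("Poisson", poisson)]:
    s = normalized_spacings(ev)
    print(name, "fraction of spacings < 0.1:", np.mean(s < 0.1))
# Level repulsion makes small spacings rare in the GOE case
# but common in the Poisson case; the Brody parameter interpolates
# between these two limits.
```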

  5. Climate Change Modeling:Computational Opportunities and Challenges

    SciTech Connect

    Wang, Dali; Post, Wilfred M; Wilson, Bruce E

    2011-01-01

    High-fidelity climate models are the workhorses of modern climate change sciences. In this article, the authors focus on several computational issues associated with climate change modeling, covering simulation methodologies, temporal and spatial modeling restrictions, the role of high-end computing, as well as the importance of data-driven regional climate impact modeling.

  6. Computational and Modeling Strategies for Cell Motility

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory

    A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback

  7. Computational modeling of composite material fires.

    SciTech Connect

    Brown, Alexander L.; Erickson, Kenneth L.; Hubbard, Joshua Allen; Dodd, Amanda B.

    2010-10-01

    condition is examined to study the propagation of decomposition fronts of the epoxy and carbon fiber and their dependence on the ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small scaled experimental efforts to attain adequate data used to validate model predictions is ongoing. The goal of this paper is to demonstrate the progress of the capability for a typical composite material and emphasize the path forward.

  8. Computer models for kinetic equations of magnetically confined plasmas

    SciTech Connect

    Killeen, J.; Kerbel, G.D.; McCoy, M.G.; Mirin, A.A.; Horowitz, E.J.; Shumaker, D.E.

    1987-01-01

    This paper presents four working computer models developed by the computational physics group of the National Magnetic Fusion Energy Computer Center. All of the models employ a kinetic description of plasma species. Three of the models are collisional, i.e., they include the solution of the Fokker-Planck equation in velocity space. The fourth model is collisionless and treats the plasma ions by a fully three-dimensional particle-in-cell method.
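    The collisional models described above center on the Fokker-Planck operator in velocity space. As a hedged illustration (a toy one-dimensional discretization of the general idea, not any of the center's production codes), the sketch below relaxes an arbitrary velocity distribution toward a Maxwellian with a drag-diffusion operator:

```python
import numpy as np

# Toy 1D velocity-space Fokker-Planck operator,
#   df/dt = nu * d/dv ( v*f + vth^2 * df/dv ),
# whose steady state is the Maxwellian f ~ exp(-v^2 / 2 vth^2).
# Central differences plus explicit Euler; all parameters illustrative.
nv, vmax, nu, vth = 201, 6.0, 1.0, 1.0
v = np.linspace(-vmax, vmax, nv)
dv = v[1] - v[0]
f = np.where(np.abs(v) < 2.0, 1.0, 0.0)   # square "bump" initial condition
f /= f.sum() * dv                         # normalize density to 1

dt = 0.2 * dv**2 / (nu * vth**2)          # explicit diffusion stability limit
for _ in range(20000):
    flux = nu * (v * f + vth**2 * np.gradient(f, dv))  # drag + diffusion
    f = f + dt * np.gradient(flux, dv)
    f[0] = f[-1] = 0.0                    # absorbing far-tail boundaries
    f /= f.sum() * dv                     # re-normalize (particle conservation)

maxwellian = np.exp(-v**2 / (2 * vth**2)) / np.sqrt(2 * np.pi) / vth
err = np.max(np.abs(f - maxwellian))      # distance from the analytic limit
```

    After many collision times the numerical distribution matches the Maxwellian to within the spatial discretization error, which is the basic sanity check any velocity-space collision package must pass.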

  9. Computational modeling of solid oxide fuel cell

    NASA Astrophysics Data System (ADS)

    Penmetsa, Satish Kumar

    In the ongoing search for alternative and environmentally friendly power generation facilities, the solid oxide fuel cell (SOFC) is considered one of the prime candidates for the next generation of energy conversion devices because it provides clean and highly efficient power generation. Moreover, SOFCs are less sensitive to fuel composition than other types of fuel cells, and internal reforming of the hydrocarbon fuel can be performed because of the higher operating temperature range of 700°C--1000°C. This allows different types of hydrocarbon fuels to be used in SOFCs. The objective of this study is to develop a three-dimensional computational model for the simulation of a solid oxide fuel cell unit, to analyze the complex internal transport mechanisms and the sensitivity of the cell to different operating conditions, and to develop an SOFC with higher operating current density, more uniform gas distributions in the electrodes, and lower ohmic losses. The model includes mass transfer due to convection and diffusion in the gas flow channels based on the Navier-Stokes equations, combined diffusion and advection in the electrodes using Brinkman's hydrodynamic equation, and the associated electrochemical reactions in the trilayer of the SOFC. Gas transport characteristics, in terms of three-dimensional spatial distributions of the reactant gases, their effects on the electrochemical reactions at the electrode-electrolyte interface, and the resulting polarizations, are evaluated for varying pressure conditions. Results show the significance of Brinkman's hydrodynamic model in the electrodes for achieving more uniform gas concentration distributions at higher operating pressures and over a higher range of operating current densities.
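    For orientation, the polarizations the abstract refers to can be sketched with standard textbook relations. The numbers below are assumed round values for illustration, not results from the dissertation's 3D model:

```python
import numpy as np

# SOFC operating voltage = open-circuit potential minus ohmic, activation,
# and concentration polarizations (common textbook forms; all values assumed).
F, Rgas, T = 96485.0, 8.314, 1073.0   # Faraday const, gas const, 800 C in K
E0 = 1.0                              # ideal open-circuit potential, V
j = 5000.0                            # operating current density, A/m^2
j0, jL = 2000.0, 20000.0              # exchange / limiting current densities
ASR = 1.5e-5                          # area-specific ohmic resistance, ohm*m^2

eta_ohm = ASR * j                                      # ohmic loss
eta_act = (Rgas * T / F) * np.arcsinh(j / (2 * j0))    # Butler-Volmer, alpha=0.5
eta_conc = -(Rgas * T / (4 * F)) * np.log(1 - j / jL)  # concentration loss
V = E0 - eta_ohm - eta_act - eta_conc                  # cell voltage, V
```

    With these assumed inputs the cell sits near 0.8 V; reducing the ohmic term (one stated goal of the study) directly raises the operating voltage at fixed current density.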

  10. COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS

    SciTech Connect

    Mathur, M.P.; Freeman, Mark; Gera, Dinesh

    2001-11-06

    In the current fiscal year FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NO{sub x} inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors and to explore the possibility of using biomass as a reburning fuel to reduce NO{sub x}. Additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O{sub 2}/CO{sub 2} environments. Most of the CFD models available in the literature treat particles as point masses with uniform internal temperature. This isothermal assumption may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict carbon burnout from larger biomass particles. Anisotropy in heat transfer will be explored using different conductivities in the radial and axial directions. The above models will be validated and tested on various full-scale industrial boilers. The current NO{sub x} modules will be modified to account for local CH, CH{sub 2}, and CH{sub 3} radical chemistry; currently they are based on global chemistry. It may also be worth exploring the effect of an enriched O{sub 2}/CO{sub 2} environment on carbon burnout and NO{sub x} concentration. The research objective of this study is to develop a three-dimensional combustor model for biomass co-firing and reburning applications using the Fluent computational fluid dynamics code.
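    The non-isothermal particle module is described only at a high level above. A minimal sketch of the underlying idea, with assumed (not the report's) property values, is radial heat conduction in a spherical particle whose surface is suddenly exposed to the furnace temperature:

```python
import numpy as np

# Explicit finite-difference solve of dT/dt = alpha*(1/r^2) d/dr (r^2 dT/dr)
# inside a 1 mm spherical biomass particle. All property values are assumed
# for illustration.
alpha = 1e-7                  # thermal diffusivity, m^2/s (assumed)
R = 1e-3                      # particle radius, m
nr = 51
r = np.linspace(0.0, R, nr)
dr = r[1] - r[0]
T = np.full(nr, 300.0)        # initial particle temperature, K
T_surf = 1200.0               # imposed surface (furnace) temperature, K
T[-1] = T_surf

dt = 0.2 * dr**2 / alpha      # explicit stability limit
for _ in range(2000):         # ~1.6 s of physical time
    Tn = T.copy()
    for i in range(1, nr - 1):  # spherical Laplacian: T'' + (2/r) T'
        Tn[i] = T[i] + dt * alpha * (
            (T[i+1] - 2*T[i] + T[i-1]) / dr**2
            + (2.0 / r[i]) * (T[i+1] - T[i-1]) / (2*dr))
    Tn[0] = Tn[1]             # symmetry at the center
    Tn[-1] = T_surf
    T = Tn

center_lag = T_surf - T[0]    # how far the center trails the surface
```

    The center trails the surface by hundreds of kelvin at this point in the transient, which is exactly the lag an isothermal point-mass particle model cannot represent.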

  11. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    SciTech Connect

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data is a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.
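    The "elevated rate with additional heat generated" feedback is, at root, an Arrhenius effect: oxidation heat raises the graphite temperature, which sharply raises the oxidation rate. A one-line sketch with assumed round kinetic parameters (not INL data) shows the sensitivity:

```python
import math

# Arrhenius temperature sensitivity of a graphite oxidation rate.
# A and Ea are assumed illustrative values, not measured kinetics.
A, Ea, Rgas = 1.0e7, 1.8e5, 8.314   # pre-exponential (1/s), J/mol, J/(mol K)

def oxidation_rate(T):
    """Arrhenius rate k = A * exp(-Ea / (R*T)), T in kelvin."""
    return A * math.exp(-Ea / (Rgas * T))

# A 200 K self-heating excursion multiplies the rate ~20x,
# which is why the reaction can accelerate itself.
ratio = oxidation_rate(1300.0) / oxidation_rate(1100.0)
```
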

  12. Computer formulations of aircraft models for simulation studies

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1979-01-01

    Recent developments in formula manipulation compilers and the design of several symbol manipulation languages enable computers to be used for symbolic mathematical computation. A computer system and language that can be used to perform symbolic manipulations in an interactive mode are used to formulate a mathematical model of an aeronautical system. The example demonstrates that, once the procedure is established, the formulation and modification of models for simulation studies can be reduced to a series of routine computer operations.
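    The symbolic tools of that era differ from today's, but the workflow survives in systems such as SymPy. A small sketch in the same spirit, using an assumed textbook phugoid-approximation point-mass model rather than the paper's aircraft equations, lets the computer derive the linearized dynamics symbolically:

```python
import sympy as sp

# Symbolic formulation and linearization of a point-mass aircraft model.
# Phugoid approximation (thrust = drag): states are speed V and flight-path
# angle gamma; trim is steady level flight at speed V0.
V, gam, g, V0 = sp.symbols('V gamma g V0', positive=True)

f = sp.Matrix([
    -g * sp.sin(gam),                      # dV/dt
    (g / V) * (V**2 / V0**2 - sp.cos(gam)) # dgamma/dt (lift/weight = V^2/V0^2)
])
x = sp.Matrix([V, gam])

# Linearize about the trim point by taking the Jacobian symbolically.
A = f.jacobian(x).subs({V: V0, gam: 0})
eigs = A.eigenvals()   # purely imaginary pair: the classic phugoid oscillation
```

    SymPy recovers the textbook result: eigenvalues satisfy lambda^2 = -2 g^2 / V0^2, i.e. the phugoid frequency sqrt(2) g / V0, with no hand algebra, and model modifications reduce to editing `f` and rerunning, which is the paper's central point.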

  13. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
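    For readers unfamiliar with the modeling style Biocellion accelerates, here is a deliberately tiny single-threaded sketch of a discrete agent-based model. It is not Biocellion code: two cell types on a 1D ring "sort" by swaps that never increase the number of unlike-type contacts, a stripped-down differential-adhesion energy:

```python
import random

# Zero-temperature Metropolis (Kawasaki) dynamics for cell sorting on a ring.
random.seed(0)
N = 100
cells = [random.choice((0, 1)) for _ in range(N)]   # two cell types

def mismatches(c):
    """Energy: number of unlike-type neighbor pairs around the ring."""
    return sum(c[i] != c[(i + 1) % N] for i in range(N))

e0 = e = mismatches(cells)
for _ in range(10000):
    i = random.randrange(N)
    j = (i + 1) % N
    cells[i], cells[j] = cells[j], cells[i]          # trial neighbor swap
    e_new = mismatches(cells)
    if e_new > e:
        cells[i], cells[j] = cells[j], cells[i]      # reject uphill move
    else:
        e = e_new
```

    The types coarsen into contiguous domains, reducing the contact energy. Scaling this pattern to millions of cells plus diffusing chemical fields on high-resolution grids is where a framework like Biocellion, rather than a script, becomes necessary.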

  14. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...
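    The generic idea behind such uncertainty analyses can be sketched with a Monte Carlo parameter sweep. The model and spreads below are assumptions for illustration (a one-compartment stand-in, not an EPA PBPK model):

```python
import numpy as np

# Propagate parameter uncertainty through a dynamic toxicokinetic model by
# Monte Carlo: sample parameters, run the model, report prediction quantiles.
# One-compartment model: C(t) = (D / V) * exp(-k t). All spreads assumed.
rng = np.random.default_rng(1)
n = 5000
D = 100.0                                                  # dose, mg
k = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=n)     # elimination, 1/h
V = rng.lognormal(mean=np.log(40.0), sigma=0.2, size=n)    # volume, L

t = 6.0                                                    # hours post-dose
C = (D / V) * np.exp(-k * t)    # predicted concentration per parameter draw

lo, med, hi = np.percentile(C, [2.5, 50.0, 97.5])   # 95% prediction interval
```

    The output is not a single concentration but an interval; for real PBPK models the same loop wraps an ODE solve, and the cost of that solve is what motivates more sophisticated methodology.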

  16. Modelling, abstraction, and computation in systems biology: A view from computer science.

    PubMed

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.

  17. A Taxonomy of Analytical Computer Performance Models for Computer Design.

    DTIC Science & Technology

    1986-01-01

    is analogous to the periodic table of elements discovered by Mendeleyev. We saw two major benefits of such a table. First, it would provide a useful...structure for illustrating and studying the relationships between various performance models. The periodic table of elements provides such a logical...of the periodic table of elements , and examines the parallels between the two structures. Over 52 years passed between the first attempts (1817) to

  18. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.; Olariu, Stephen

    1995-01-01

    The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual level design of aerospace vehicles: SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART that allow it to be used efficiently as a front-end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer-aided design systems consisting of diverse, stand-alone analysis codes. This streamlines the exchange of data between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.

  19. Precise orbit computation and sea surface modeling

    NASA Technical Reports Server (NTRS)

    Wakker, Karel F.; Ambrosius, B. A. C.; Rummel, R.; Vermaat, E.; Deruijter, W. P. M.; Vandermade, J. W.; Zimmerman, J. T. F.

    1991-01-01

    The research project described below is part of a long-term program at Delft University of Technology aiming at the application of European Remote Sensing satellite (ERS-1) and TOPEX/POSEIDON altimeter measurements for geophysical purposes. This program started in 1980 with the processing of Seasat laser range and altimeter height measurements and concentrates today on the analysis of Geosat altimeter data. The objectives of the TOPEX/POSEIDON research project are the tracking of the satellite by the Dutch mobile laser tracking system MTLRS-2, the computation of precise TOPEX/POSEIDON orbits, the analysis of the spatial and temporal distribution of the orbit errors, the improvement of ERS-1 orbits through the information obtained from the altimeter crossover difference residuals for crossing ERS-1 and TOPEX/POSEIDON tracks, the combination of ERS-1 and TOPEX/POSEIDON altimeter data into a single high-precision data set, and the application of this data set to model the sea surface. The latter application will focus on the determination of detailed regional mean sea surfaces, sea surface variability, ocean topography, and ocean currents in the North Atlantic, the North Sea, the seas around Indonesia, the West Pacific, and the oceans around South Africa.

  1. Computational modeling of ion transport through nanopores.

    PubMed

    Modi, Niraj; Winterhalter, Mathias; Kleinekathöfer, Ulrich

    2012-10-21

    Nanoscale pores are ubiquitous in biological systems while artificial nanopores are being fabricated for an increasing number of applications. Biological pores are responsible for the transport of various ions and substrates between the different compartments of biological systems separated by membranes while artificial pores are aimed at emulating such transport properties. As an experimental method, electrophysiology has proven to be an important nano-analytical tool for the study of substrate transport through nanopores utilizing ion current measurements as a probe for the detection. Independent of the pore type, i.e., biological or synthetic, and objective of the study, i.e., to model cellular processes of ion transport or electrophysiological experiments, it has become increasingly important to understand the dynamics of ions in nanoscale confinements. To this end, numerical simulations have established themselves as an indispensable tool to decipher ion transport processes through biological as well as artificial nanopores. This article provides an overview of different theoretical and computational methods to study ion transport in general and to calculate ion conductance in particular. Potential new improvements in the existing methods and their applications are highlighted wherever applicable. Moreover, representative examples are given describing the ion transport through biological and synthetic nanopores as well as the high selectivity of ion channels. Special emphasis is placed on the usage of molecular dynamics simulations which already have demonstrated their potential to unravel ion transport properties at an atomic level.
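    As a back-of-the-envelope companion to the simulation methods the review surveys, the open-pore conductance of a cylindrical nanopore can be estimated from the bulk electrolyte conductivity by combining the cylinder's resistance with Hall's access resistance. The geometry and conductivity below are illustrative, roughly alpha-hemolysin-like in 1 M KCl:

```python
import numpy as np

# Continuum estimate of nanopore conductance:
#   G = [ 4L / (kappa * pi * d^2)  +  1 / (kappa * d) ]^(-1)
# (cylindrical pore resistance plus access resistance at both mouths).
kappa = 10.0       # bulk conductivity of ~1 M KCl, S/m
L = 5e-9           # pore length, m
d = 1.5e-9         # pore diameter, m

R_pore = 4.0 * L / (kappa * np.pi * d**2)   # ohms
R_access = 1.0 / (kappa * d)                # both mouths combined (Hall 1975)
G = 1.0 / (R_pore + R_access)               # siemens
G_nS = G * 1e9                              # a few nanosiemens
```

    Continuum estimates like this set the scale; the deviations from them at narrow constrictions, where ion hydration and pore chemistry matter, are precisely what the molecular dynamics simulations emphasized in the review resolve.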

  2. Dynamical Properties of Polymers: Computational Modeling

    SciTech Connect

    CURRO, JOHN G.; ROTTACH, DANA; MCCOY, JOHN D.

    2001-01-01

    The free volume distribution has been a qualitatively useful concept by which dynamical properties of polymers, such as the penetrant diffusion constant, viscosity, and glass transition temperature, could be correlated with static properties. In an effort to put this on a more quantitative footing, we define the free volume distribution as the probability of finding a spherical cavity of radius R in a polymer liquid. This is identical to the insertion probability in scaled particle theory, and is related to the chemical potential of hard spheres of radius R in a polymer in the Henry's law limit. We used the Polymer Reference Interaction Site Model (PRISM) theory to compute the free volume distribution of semiflexible polymer melts as a function of chain stiffness. Good agreement was found with the corresponding free volume distributions obtained from MD simulations. Surprisingly, the free volume distribution was insensitive to the chain stiffness, even though the single chain structure and the intermolecular pair correlation functions showed a strong dependence on chain stiffness. We also calculated the free volume distributions of polyisobutylene (PIB) and polyethylene (PE) at 298K and at elevated temperatures from PRISM theory. We found that PIB has more of its free volume distributed in smaller size cavities than for PE at the same temperature.
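    The insertion-probability definition used above has a clean limiting case worth sketching: for an ideal (uncorrelated) distribution of point particles at density rho, the chance of finding an empty spherical cavity of radius R is exp(-rho * 4/3 * pi * R^3). The trial-insertion check below uses random points standing in for a configuration; a real calculation would use polymer simulation coordinates instead:

```python
import numpy as np

# Estimate the cavity insertion probability by direct trial insertion into a
# periodic box of ideal-gas points, and compare to the Poisson prediction.
rng = np.random.default_rng(7)
L_box, rho, R = 10.0, 0.5, 0.6
n_part = int(rho * L_box**3)
pts = rng.uniform(0, L_box, size=(n_part, 3))

n_trials = 20000
trials = rng.uniform(0, L_box, size=(n_trials, 3))
hits = 0
for c in trials:
    dvec = pts - c
    dvec -= L_box * np.round(dvec / L_box)          # minimum-image convention
    if np.min(np.einsum('ij,ij->i', dvec, dvec)) > R * R:
        hits += 1                                   # cavity of radius R fits

p_insert = hits / n_trials
p_poisson = np.exp(-rho * 4.0 / 3.0 * np.pi * R**3)
```

    In a polymer melt the measured probability deviates from the Poisson curve because monomer positions are correlated; that deviation is what the PRISM-theory free volume distribution quantifies.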

  3. Design and computer modeling of the supracrystals

    NASA Astrophysics Data System (ADS)

    Karenin, A. A.

    2012-02-01

    The possibility of solid-state crystalline structures, which we call supracrystals, is shown. Unlike ordinary crystals, the nodes of the crystalline lattice contain not separate atoms or ions but symmetric atomic associates. Symmetry classes of 2D and 3D supracrystals are determined. The bond length, energy per atom, bond energy, and gap energy of 2D and 3D supracrystals consisting of C, Si, B, N, and S are calculated with the program ABINIT-5.8.4 in the Hartree-Fock approximation. The unusual properties of supracrystals are of interest for their production and practical use. The energy stability and electrical properties of nanotubes built from the 2D supracrystals previously suggested by the authors are studied by computer modeling. Supracrystalline nanotubes composed of carbon, silicon, boron, nitrogen, and sulfur atoms are considered. It is shown that the electrical properties of supracrystalline nanotubes can be tuned over a wide range (from metallic to dielectric) by changing their chemical composition, structure, diameter, and chirality.

  4. Computational modeling of epidural cortical stimulation

    NASA Astrophysics Data System (ADS)

    Wongsarnpigoon, Amorn; Grill, Warren M.

    2008-12-01

    Epidural cortical stimulation (ECS) is a developing therapy to treat neurological disorders. However, it is not clear how the cortical anatomy or the polarity and position of the electrode affects current flow and neural activation in the cortex. We developed a 3D computational model simulating ECS over the precentral gyrus. With the electrode placed directly above the gyrus, about half of the stimulus current flowed through the crown of the gyrus while current density was low along the banks deep in the sulci. Beneath the electrode, neurons oriented perpendicular to the cortical surface were depolarized by anodic stimulation, and neurons oriented parallel to the boundary were depolarized by cathodic stimulation. Activation was localized to the crown of the gyrus, and neurons on the banks deep in the sulci were not polarized. During regulated voltage stimulation, the magnitude of the activating function was inversely proportional to the thickness of the CSF and dura. During regulated current stimulation, the activating function was not sensitive to the thickness of the dura but was slightly more sensitive than during regulated voltage stimulation to the thickness of the CSF. Varying the width of the gyrus and the position of the electrode altered the distribution of the activating function due to changes in the orientation of the neurons beneath the electrode. Bipolar stimulation, although often used in clinical practice, reduced spatial selectivity as well as selectivity for neuron orientation.
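    The polarity dependence reported above is commonly analyzed through the activating function, the second spatial difference of the extracellular potential along a fiber. A hedged sketch for an idealized point-source electrode in a homogeneous medium (assumed geometry and conductivity, not the paper's 3D gyrus model) reproduces the qualitative picture:

```python
import numpy as np

# Activating function along a straight fiber beneath a point-source electrode.
sigma = 0.3          # tissue conductivity, S/m (assumed)
h = 2e-3             # electrode height above the fiber, m
I = -1e-3            # cathodic point-source current, A
x = np.linspace(-0.02, 0.02, 401)       # positions along the fiber, m
dx = x[1] - x[0]

r = np.sqrt(x**2 + h**2)
V = I / (4 * np.pi * sigma * r)         # extracellular potential, volts

# Activating function ~ second difference of V along the fiber.
f_act = (np.roll(V, -1) - 2 * V + np.roll(V, 1))[1:-1] / dx**2
```

    For cathodic current the activating function is positive (depolarizing) directly under the electrode and negative in flanking sidelobes; flipping the sign of I reverses the pattern, which is the polarity effect the model study quantifies for realistic gyral geometry.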

  5. Review of computational thermal-hydraulic modeling

    SciTech Connect

    Keefer, R.H.; Keeton, L.W.

    1995-12-31

    Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix.

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar past investigations. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  7. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    NASA Astrophysics Data System (ADS)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation, and a language based on it, are useful for compositionally developing asynchronous and concurrent programs that make frequent use of bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and supports compositional development. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and primitive program elements for control and operation can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have at most three terminals and four ordered rules, as well as on bidirectional communication using vehicles called carriers. A novel aspect is that a carrier moving between two terminals can concisely express computations such as synchronization and bidirectional communication. These properties make the model well suited to compositional bit-level computation, since the uniform computation elements suffice to build components with practical functionality. Through future application of the model, our research may enable further work on a base model of fine-grain parallel computer architecture, since the model can express massive concurrency as a network of primitives.

  8. Experiments and simulation models of a basic computation element of an autonomous molecular computing system

    NASA Astrophysics Data System (ADS)

    Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira

    2008-10-01

    Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.
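    The mathematical modeling side of such work typically reduces to mass-action rate equations. The sketch below is a generic stand-in for a molecular AND gate (arbitrary illustrative rate constants, not the RTRACS chemistry): output is produced at a rate proportional to the product of the two input concentrations, so it accumulates only when both inputs are present:

```python
# Mass-action model of a molecular AND gate,
#   d[out]/dt = k * [in1] * [in2] - decay * [out],
# integrated with explicit Euler. All constants are illustrative.
def and_gate(in1, in2, k=1.0, decay=0.1, dt=0.01, steps=5000):
    out = 0.0
    for _ in range(steps):
        out += dt * (k * in1 * in2 - decay * out)
    return out

both = and_gate(1.0, 1.0)   # both inputs present: output -> k/decay = 10
one = and_gate(1.0, 0.0)    # one input missing: output stays at zero
```

    Comparing such simulated time courses against fluorescence measurements is exactly the experiment-versus-model validation step the abstract describes, with the real reaction network replacing this toy rate law.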

  10. Learning Anatomy: Do New Computer Models Improve Spatial Understanding?

    ERIC Educational Resources Information Center

    Garg, Amit; Norman, Geoff; Spero, Lawrence; Taylor, Ian

    1999-01-01

    Assesses desktop-computer models that rotate in virtual three-dimensional space. Compares spatial learning with a computer carpal-bone model horizontally rotating at 10-degree views with the same model rotating at 90-degree views. (Author/CCM)

  11. Graph Partitioning Models for Parallel Computing

    SciTech Connect

    Hendrickson, B.; Kolda, T.G.

    1999-03-02

    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
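    The "wrong metric" at issue is the edge cut. A small sketch makes the objective concrete: build a graph with two planted dense blocks, start from a naive interleaved assignment of vertices to two processors, and reduce the cut by greedy balance-preserving swaps (a crude Kernighan-Lin-flavored pass of our own, not one of the survey's proposed alternatives):

```python
import random

# Two-way graph partitioning by greedy swap descent on the edge-cut metric.
random.seed(3)
n = 32
# Random graph with two dense 16-vertex blocks and sparse links between them.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < (0.5 if i // 16 == j // 16 else 0.05)]

def cut(part):
    """Number of edges whose endpoints land on different processors."""
    return sum(part[i] != part[j] for i, j in edges)

part = [i % 2 for i in range(n)]       # naive interleaved assignment
initial = best = cut(part)
improved = True
while improved:
    improved = False
    for a in range(n):
        for b in range(a + 1, n):
            if part[a] != part[b]:
                part[a], part[b] = part[b], part[a]      # balanced swap
                c = cut(part)
                if c < best:
                    best, improved = c, True
                else:
                    part[a], part[b] = part[b], part[a]  # undo
```

    Minimizing this cut approximates communication volume only crudely — it double-counts messages to the same processor and ignores latency and balance of communication — which is the survey's starting point for better models such as hypergraph partitioning.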

  12. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-06-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
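
    The eligibility-adjustment idea can be sketched as a TD(lambda) pass in which a model-based weight w scales the eligibility trace. The function, parameter values, and the two-stage toy episode below are illustrative, not the authors' implementation.

```python
# Sketch of a TD update in which a hypothetical model-based weight w
# scales the eligibility trace. All parameters are illustrative.

def td_update(q, episode, alpha=0.5, gamma=1.0, lam=0.9, w=1.0):
    """One pass over (state, action, reward, next_state) steps;
    w in [0, 1] is the model-based weighting of the trace."""
    e = {}
    for s, a, r, s2 in episode:
        best_next = max(q.get((s2, b), 0.0) for b in (0, 1)) if s2 is not None else 0.0
        delta = r + gamma * best_next - q.get((s, a), 0.0)
        e[(s, a)] = e.get((s, a), 0.0) + 1.0
        for key in list(e):
            q[key] = q.get(key, 0.0) + alpha * delta * e[key]
            e[key] *= gamma * lam * w  # model-based weight shortens the trace
    return q

# Two-step toy episode: a first-stage choice leads to a rewarded second stage.
q = td_update({}, [("s0", 0, 0.0, "s1"), ("s1", 1, 1.0, None)])
```

    With w = 1 the reward at the second stage propagates back to the first-stage action through the trace; with w = 0 only the second-stage value is updated, mimicking purely model-free one-step learning.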

  13. Performance Models for Split-execution Computing Systems

    SciTech Connect

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan; Seddiqi, Hadayat; Britt, Keith A; Imam, Neena

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
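
    The conclusion that the quantum-classical interface dominates can be sketched with a toy timing model; the per-component constants below are hypothetical, not the paper's measured values.

```python
# Toy behavioral timing model for split execution: classical setup,
# translation across the interface, and quantum execution. Constants
# are hypothetical.

def split_execution_time(problem_size, t_classical=1e-6,
                         t_translate=1e-3, t_quantum=20e-6):
    """Per-component times (seconds); translation scales with size."""
    return {
        "classical": t_classical * problem_size,
        "translate": t_translate * problem_size,  # quantum-classical interface
        "quantum": t_quantum,                     # size-independent here
    }

times = split_execution_time(1000)
bottleneck = max(times, key=times.get)
```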

  14. Model for personal computer system selection.

    PubMed

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility, allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.
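
    The selection steps described above amount to scoring candidates against weighted criteria. A minimal sketch, with hypothetical weights and ratings:

```python
# Sketch of the article's selection steps as weighted scoring of
# candidate systems. Criteria weights and ratings are hypothetical.

CRITERIA = {"meets_needs": 0.4, "usability": 0.2, "reliability": 0.2,
            "compatibility": 0.1, "vendor_service": 0.1}

def score(candidate):
    """Weighted sum of 0-10 ratings for each criterion."""
    return sum(CRITERIA[c] * candidate[c] for c in CRITERIA)

a = {"meets_needs": 9, "usability": 8, "reliability": 7,
     "compatibility": 9, "vendor_service": 6}
b = {"meets_needs": 6, "usability": 9, "reliability": 9,
     "compatibility": 5, "vendor_service": 8}
best = max((a, b), key=score)
```

    Weighting "does what you want done" most heavily reflects the article's software-first ordering.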

  15. Computational Modeling of Magnetically Actuated Propellant Orientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.

    1996-01-01

    sufficient performance to support cryogenic propellant management tasks. In late 1992, NASA MSFC began a new investigation in this technology commencing with the design of the Magnetically-Actuated Propellant Orientation (MAPO) experiment. A mixture of ferrofluid and water is used to simulate the paramagnetic properties of LOX and the experiment is being flown on the KC-135 aircraft to provide a reduced gravity environment. The influence of a 0.4 Tesla ring magnet on flow into and out of a subscale Plexiglas tank is being recorded on video tape. The most efficient approach to evaluating the feasibility of MAPO is to complement the experimental program with development of a computational tool to model the process of interest. The goal of the present research is to develop such a tool. Once confidence in its fidelity is established by comparison to data from the MAPO experiment, it can be used to assist in the design of future experiments and to study the parameter space of the process. Ultimately, it is hoped that the computational model can serve as a design tool for full-scale spacecraft applications.

  16. Idealized Computational Models for Auditory Receptive Fields

    PubMed Central

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
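
    For reference, the classical Gammatone impulse response that the framework re-derives has the form g(t) = t^(n-1) e^(-2*pi*b*t) cos(2*pi*f*t). A minimal sketch with illustrative parameters follows; this is the standard filter, not the paper's generalized family.

```python
# Sketch of the standard gammatone impulse response
# g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t).
# Order, bandwidth, and centre frequency are illustrative.
import math

def gammatone(t, n=4, b=100.0, f=1000.0):
    """Gammatone impulse response at time t (seconds), t >= 0."""
    return (t ** (n - 1)
            * math.exp(-2 * math.pi * b * t)
            * math.cos(2 * math.pi * f * t))

# Sample the response: it starts at zero, rises, oscillates, and decays.
samples = [gammatone(k / 16000.0) for k in range(256)]
```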

  18. Computer modeling of a convective steam superheater

    NASA Astrophysics Data System (ADS)

    Trojan, Marcin

    2015-03-01

    A superheater generates superheated steam from the saturated steam leaving the evaporator. In a pulverized-coal-fired boiler, even a relatively small amount of ash causes problems with ash fouling on the heating surfaces, including the superheaters. In the convection pass of the boiler, the flue gas temperature is lower and ash deposits can be loose or sintered. Ash fouling not only reduces heat transfer from the flue gas to the steam, but also increases the pressure drop along the flue gas path; as the pressure drop grows, the power consumed by the fan increases. When the superheater surfaces are covered with ash, the steam temperature at the outlet of the superheater stages falls and the flow rate of water injected into the attemperator must be reduced. There is also an increase in flue gas temperature after the individual stages of the superheater. Consequently, this leads to a reduction in boiler efficiency. The paper presents the results of computational fluid dynamics simulations of the first-stage superheater of the OP-210M boiler using commercial software. The temperature distributions of the steam and flue gas along their flow paths are determined, together with the temperatures of the tube walls and the ash deposits. The calculated steam temperature is compared with measurement results. Knowledge of these temperatures is of great practical importance because it allows the grade of steel to be chosen for a given superheater stage. Using the developed model of the superheater to determine its degree of ash fouling in on-line mode, one can control the activation frequency of the steam sootblowers.

  19. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
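
    The speed-versus-economy tradeoff can be sketched with a toy scaling model: under near-linear scaling, wall time falls with instance count while total cost stays roughly flat. The hourly price and parallel efficiency below are hypothetical.

```python
# Toy cloud cost/time estimate for a parallelizable workload.
# Price and efficiency are hypothetical, not Amazon's actual rates.

def cloud_estimate(core_hours, n_instances, price_per_hour=0.10,
                   efficiency=0.9):
    """Return (wall_hours, total_cost) assuming constant efficiency."""
    wall = core_hours / (n_instances * efficiency)
    cost = wall * n_instances * price_per_hour
    return wall, cost

t1, c1 = cloud_estimate(100, 1)
t16, c16 = cloud_estimate(100, 16)
```

    With constant efficiency the cost is the same at either scale, so the choice between speed and economy reduces to how much the shorter wall time is worth; in practice efficiency degrades with scale and shifts this balance.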

  20. The emerging role of cloud computing in molecular modelling.

    PubMed

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways.

  1. Evaluation of aerothermal modeling computer programs

    NASA Technical Reports Server (NTRS)

    Hsieh, K. C.; Yu, S. T.

    1987-01-01

    Various computer programs based upon the SIMPLE or SIMPLER algorithm were studied and compared for numerical accuracy, efficiency, and grid dependency. Four two-dimensional codes and one three-dimensional code, originally developed by a number of research groups, were considered. In general, the accuracy and computational efficiency of these TEACH-type programs were improved by modifying the differencing schemes and their solvers. A brief description of each program is given. Error reduction, spline flux, and second upwind differencing programs are covered.

  2. Dissemination of computer skills among physicians: the infectious process model.

    PubMed

    Quinn, F B; Hokanson, J A; McCracken, M M; Stiernberg, C M

    1984-08-01

    While the potential utility of computer technology to medicine is often acknowledged, little is known about the best methods for actually teaching physicians about computers. The current variability in physician computer fluency implies there is no accepted minimum level of computer skills for physicians. Special techniques are needed to instill these skills in the physician and to measure their effects within the medical profession. This hypothesis is suggested by the development of a specialized course for the new physician. In a population of physicians where medical computing usage was considered nonexistent, intense interest developed following exposure to a role model having strong credentials in both medicine and computer science. This produced an atmosphere in which there was a perceived benefit to being knowledgeable about medical computer usage. The subsequent increase in computer systems use was the result of the availability of resources and the development of computer skills that could be exchanged among the students and faculty. This growth in computer use is described using the parameters of an infectious process model. While other approaches may also be useful, the infectious process model permits the growth of medical computer usage to be quantitatively described, evaluates specific determinants of use patterns, and allows the future growth of computer utilization in medicine to be predicted.

  3. Optimizing Computing Platforms for Climate-Driven Ecological Forecasting Models

    NASA Astrophysics Data System (ADS)

    Farley, S. S.; Williams, J. W.

    2016-12-01

    Species distribution models are widely used, climate-driven ecological forecasting tools that use machine-learning techniques to predict species range shifts and ecological responses to 21st century climate change. As high-resolution modern and fossil biodiversity data becomes increasingly available and statistical learning methods become more computationally intensive, choosing the correct computing configuration on which to run these models becomes more important. With a variety of low-cost cloud and desktop computing options available, users of forecasting models must balance performance gains achieved by provisioning more powerful hardware with the cost of using these resources. We present a framework for estimating the optimal computing solution for a given modeling activity. We argue that this framework is capable of identifying the optimal computing solution - the one that maximizes model accuracy while minimizing resource cost and computing time. Our framework is built on constituent models of algorithm execution time, predictive skill, and computing cost. We demonstrate the results of the framework using four leading species distribution models: multivariate adaptive regression splines, generalized additive models, support vector machines, and boosted regression trees. The constituent models themselves are shown to have high predictive accuracy, and can be used independently to estimate the effects of using larger input datasets, such as those that incorporate data from the fossil record. When used together, our framework shows highly significant predictive ability, and is designed to be used by researchers to inform future computing provisioning strategies.
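
    The framework's optimization can be sketched as maximizing a utility that trades predicted accuracy against resource cost and computing time. The candidate configurations and weights below are hypothetical.

```python
# Sketch: score candidate computing configurations by predicted accuracy
# penalized by cost and time. Configurations and weights are hypothetical.

def pick_config(configs, cost_weight=1.0, time_weight=0.1):
    """Return the configuration maximizing accuracy minus weighted cost/time."""
    def utility(c):
        return (c["accuracy"]
                - cost_weight * c["cost_usd"]
                - time_weight * c["hours"])
    return max(configs, key=utility)

configs = [
    {"name": "laptop",    "accuracy": 0.80, "cost_usd": 0.00, "hours": 10.0},
    {"name": "cloud-16",  "accuracy": 0.85, "cost_usd": 0.40, "hours": 1.0},
    {"name": "cloud-gpu", "accuracy": 0.86, "cost_usd": 2.50, "hours": 0.5},
]
best = pick_config(configs)
```

    In the full framework, the accuracy, cost, and time entries would themselves come from the fitted constituent models rather than being fixed numbers.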

  4. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Jerzy Bernholc

    2011-02-03

    Conventional silicon technology will some day reach a miniaturization limit, forcing designers of Si-based electronics to pursue increased performance by other means. Any alternative approach would have the unenviable task of matching the ability of Si technology to pack more than a billion interconnected and addressable devices on a chip the size of a thumbnail. Nevertheless, the prospects of developing alternative approaches to fabricate electronic devices have spurred an ever-increasing pace of fundamental research. One of the promising possibilities is molecular electronics (ME): self-assembled molecular-based electronic systems composed of single-molecule devices in ultra-dense, ultra-fast, molecular-sized components. This project focused on developing accurate, reliable theoretical modeling capabilities for describing molecular electronics devices. The participants in the project are given in Table 1. The primary outcomes of this fundamental computational science grant are publications in the open scientific literature. As listed below, 62 papers have been published from this project. In addition, the research has been the subject of more than 100 invited talks at conferences, including several plenary or keynote lectures. Many of the goals of the original proposal were completed. Specifically, the multi-disciplinary group developed a unique set of capabilities and tools for investigating electron transport in fabricated and self-assembled nanostructures at multiple length and time scales.

  5. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the large-scale transport of air pollutants. Originally, the model was developed at the National Environmental Research Institute of Denmark. The computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, we present comparison results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.). The main idea in the parallel version of DEM is a domain-partitioning approach. We discuss the effective use of the cache and hierarchical memories of modern computers, as well as the performance, speed-ups, and efficiency achieved. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the model output are briefly presented.
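
    The domain-partitioning approach can be sketched by assigning contiguous strips of grid rows to processes, which is the simplest decomposition an MPI code of this kind can use. The grid size below is illustrative.

```python
# Sketch of 1D domain partitioning: split a 2D grid into horizontal
# strips, one per process. Grid and process counts are illustrative.

def partition_rows(n_rows, n_procs):
    """Assign contiguous row ranges [start, stop) to each process."""
    base, extra = divmod(n_rows, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        stop = start + base + (1 if p < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

strips = partition_rows(96, 4)
```

    Each process would then exchange only the boundary rows of its strip with its neighbours, which keeps communication proportional to the strip perimeter rather than its area.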

  6. Computer Center: BASIC String Models of Genetic Information Transfer.

    ERIC Educational Resources Information Center

    Spain, James D., Ed.

    1984-01-01

    Discusses some of the major genetic information processes which may be modeled by computer program string manipulation, focusing on replication and transcription. Also discusses instructional applications of using string models. (JN)

  7. Ocean Modeling and Visualization on Massively Parallel Computer

    NASA Technical Reports Server (NTRS)

    Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.

    1997-01-01

    Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.

  10. Flow Through a Laboratory Sediment Sample by Computer Simulation Modeling

    DTIC Science & Technology

    2006-09-07

    Flow through a laboratory sediment sample by computer simulation modeling. R.B. Pandey, Allen H. Reed, Edward Braithwaite, Ray Seyfarth, J.F...

  11. A Model for Guiding Undergraduates to Success in Computational Science

    ERIC Educational Resources Information Center

    Olagunju, Amos O.; Fisher, Paul; Adeyeye, John

    2007-01-01

    This paper presents a model for guiding undergraduates to success in computational science. A set of integrated, interdisciplinary training and research activities is outlined for use as a vehicle to increase and produce graduates with research experiences in computational and mathematical sciences. The model is responsive to the development of…

  12. Computer model for economic study of unbleached kraft paperboard production

    Treesearch

    Peter J. Ince

    1984-01-01

    Unbleached kraft paperboard is produced from wood fiber in an industrial papermaking process. A highly specific and detailed model of the process is presented. The model is also presented as a working computer program. A user of the computer program will provide data on physical parameters of the process and on prices of material inputs and outputs. The program is then...

  13. Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling

    ERIC Educational Resources Information Center

    Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao

    2013-01-01

    Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…

  14. Using Computational Simulations to Confront Students' Mental Models

    ERIC Educational Resources Information Center

    Rodrigues, R.; Carvalho, P. Simeão

    2014-01-01

    In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…

  16. Overview of ASC Capability Computing System Governance Model

    SciTech Connect

    Doebling, Scott W.

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.
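
    Priority-driven allocation can be sketched as granting node-hours in priority order until the system's capacity is exhausted. The job names and numbers below are hypothetical, not from the ASC governance document.

```python
# Sketch of priority-driven capability allocation: grant node-hours to
# jobs in descending priority until capacity runs out. Data hypothetical.

def allocate(jobs, capacity):
    """jobs: list of (name, priority, node_hours); higher priority first."""
    granted = []
    for name, prio, hours in sorted(jobs, key=lambda j: -j[1]):
        if hours <= capacity:
            granted.append(name)
            capacity -= hours
    return granted

jobs = [("weapons-cert", 10, 600), ("materials", 5, 300),
        ("exploratory", 1, 200)]
granted = allocate(jobs, 1000)
```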

  17. Generate rigorous pyrolysis models for olefins production by computer

    SciTech Connect

    Klein, M.T.; Broadbelt, L.J.; Grittman, D.H.

    1997-04-01

    With recent advances in the automation of the model-building process for large networks of kinetic equations, it may become feasible to generate computer pyrolysis models for naphtha and gas oil feedstocks. The potential benefit of a rigorous mechanistic model for these relatively complex liquid feedstocks is great, due to their diverse characterizations and yield spectra. An ethane pyrolysis example is used to illustrate the computer generation of reaction mechanism models.
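
    Automated mechanism generation can be sketched for the simplest reaction family, C-C bond fission of a linear alkane, with each molecule represented only by its carbon count. The chemistry is deliberately simplified and purely illustrative.

```python
# Sketch of automated reaction enumeration: list the distinct radical
# pairs from breaking each C-C bond of a linear alkane. Simplified chemistry.

def cc_fissions(n_carbons):
    """Return distinct radical pairs (i, n-i) from each C-C scission."""
    pairs = set()
    for i in range(1, n_carbons):
        pairs.add(tuple(sorted((i, n_carbons - i))))
    return sorted(pairs)

# Ethane's one C-C bond gives two methyl radicals; butane gives two
# distinct splits (methyl + propyl, or two ethyls).
ethane = cc_fissions(2)
butane = cc_fissions(4)
```

    A real generator would apply many such reaction families (fission, H-abstraction, beta-scission, recombination) recursively to build the full kinetic network.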

  18. A computational model of the human hand 93-ERI-053

    SciTech Connect

    Hollerbach, K.; Axelrod, T.

    1996-03-01

    The objectives of the Computational Hand Modeling project were to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopaedic modeling.

  19. Ambient temperature modelling with soft computing techniques

    SciTech Connect

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
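
    The BP-seeded GA can be sketched on a toy problem: gradient descent on a quadratic stands in for back-propagation training, and a few refined individuals seed an otherwise random GA population. All functions, parameters, and data below are illustrative.

```python
# Sketch of the hybrid BP+GA idea: pre-train a few individuals by
# gradient descent (standing in for BP), then evolve. All toy values.
import random

def loss(x):
    """Toy objective standing in for network error."""
    return (x - 3.0) ** 2

def pretrain(x, lr=0.1, steps=50):
    """Gradient descent on the toy loss (the 'BP' initialisation)."""
    for _ in range(steps):
        x -= lr * 2 * (x - 3.0)
    return x

def run_ga(pop, generations=30):
    """Elitist GA: keep the best four, mutate them to refill the population."""
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[:4]
        pop = parents + [random.choice(parents) + random.gauss(0, 0.1)
                         for _ in range(len(pop) - 4)]
    return min(pop, key=loss)

random.seed(0)
pop = [pretrain(0.0)] + [random.uniform(-10, 10) for _ in range(9)]
best = run_ga(pop)
```

    Seeding the population with a gradient-trained individual gives the GA a strong starting point, which is the intuition behind initialising some GA individuals from BP-trained networks.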

  20. Bringing computational models of bone regeneration to the clinic.

    PubMed

    Carlier, Aurélie; Geris, Liesbet; Lammens, Johan; Van Oosterwyck, Hans

    2015-01-01

    Although the field of bone regeneration has experienced great advancements in recent decades, integrating all the relevant, patient-specific information into a personalized diagnosis and optimal treatment remains a challenging task due to the large number of variables that affect bone regeneration. Computational models have the potential to cope with this complexity, to improve the fundamental understanding of bone regeneration processes, and to predict and optimize patient-specific treatment strategies. However, the current use of computational models in daily orthopedic practice is very limited or nonexistent. We have identified three key hurdles that limit the translation of computational models of bone regeneration from bench to bedside. First, there is a clear mismatch between the scope of existing models and the clinically required models. Second, most computational models are confronted with limited quantitative information of insufficient quality, hampering the determination of patient-specific parameter values. Third, current computational models are only corroborated with animal models, whereas a thorough (retrospective and prospective) assessment of a computational model will be crucial to convince health care providers of its capabilities. These challenges must be addressed so that computational models of bone regeneration can reach their true potential, resulting in the advancement of individualized care and a reduction of the associated health care costs.

  1. Computer Modeling and Research in the Classroom

    ERIC Educational Resources Information Center

    Ramos, Maria Joao; Fernandes, Pedro Alexandrino

    2005-01-01

    We report on a computational chemistry course for undergraduate students that successfully incorporated a research project on the design of new contrast agents for magnetic resonance imaging and shift reagents for in vivo NMR. Course outcomes were positive: students were quite motivated during the whole year--they learned what was required of…

  3. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics) packages, UG, CATIA, and so on.

  4. Structure, function, and behaviour of computational models in systems biology

    PubMed Central

    2013-01-01

    Background Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and is only described in natural language. Results We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  5. Computer modeling of ORNL storage tank sludge mobilization and mixing

    SciTech Connect

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks.

  6. Computational modeling in melanoma for novel drug discovery.

    PubMed

    Pennisi, Marzio; Russo, Giulia; Di Salvatore, Valentina; Candido, Saverio; Libra, Massimo; Pappalardo, Francesco

    2016-06-01

    There is a growing body of evidence highlighting the applications of computational modeling in the field of biomedicine. It has recently been applied to the in silico analysis of cancer dynamics. In the era of precision medicine, this analysis may allow the discovery of new molecular targets useful for the design of novel therapies and for overcoming resistance to anticancer drugs. According to its molecular behavior, melanoma represents an interesting tumor model in which computational modeling can be applied. Melanoma is an aggressive tumor of the skin with a poor prognosis for patients with advanced disease as it is resistant to current therapeutic approaches. This review discusses the basics of computational modeling in melanoma drug discovery and development. Discussion includes the in silico discovery of novel molecular drug targets, the optimization of immunotherapies and personalized medicine trials. Mathematical and computational models are gradually being used to help understand biomedical data produced by high-throughput analysis. The use of advanced computer models allowing the simulation of complex biological processes provides hypotheses and supports experimental design. The research in fighting aggressive cancers, such as melanoma, is making great strides. Computational models represent the key component to complement these efforts. Due to the combinatorial complexity of new drug discovery, a systematic approach based only on experimentation is not possible. Computational and mathematical models are necessary for bringing cancer drug discovery into the era of omics, big data and personalized medicine.

  7. Limits on the Power of Some Models of Quantum Computation

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo; Somma, Rolando; Barnum, Howard; Knill, Emanuel

    2006-09-01

    We consider quantum computational models defined via a Lie-algebraic theory. In these models, specified initial states are acted on by Lie-algebraic quantum gates and the expectation values of Lie algebra elements are measured at the end. We show that these models can be efficiently simulated on a classical computer in time polynomial in the dimension of the algebra, regardless of the dimension of the Hilbert space where the algebra acts. Similar results hold for the computation of the expectation value of operators implemented by a gate-sequence. We introduce a Lie-algebraic notion of generalized mean-field Hamiltonians and show that they are efficiently (exactly) solvable by means of a Jacobi-like diagonalization method. Our results generalize earlier ones on fermionic linear optics computation and provide insight into the source of the power of the conventional model of quantum computation.
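
    The classical-simulation claim can be illustrated in miniature: for a Hamiltonian in a Lie algebra, the Heisenberg equations for expectation values of algebra elements close on the algebra, so they can be integrated in the adjoint representation at a cost polynomial in the algebra's dimension. Below is a hedged sketch for su(2) with H = (ω/2)σz; the frequency value and basis choice are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

# For H = (w/2) * sigma_z, the Heisenberg equations for <sigma_x>, <sigma_y>,
# <sigma_z> close on the su(2) basis:
#   d<sx>/dt = -w <sy>,   d<sy>/dt = +w <sx>,   d<sz>/dt = 0
w = 2 * np.pi                      # angular frequency (illustrative value)
M = np.array([[0.0,  -w, 0.0],     # adjoint-representation generator
              [  w, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

v0 = np.array([1.0, 0.0, 0.0])     # initial <sx>, <sy>, <sz>
t = 0.25                           # a quarter period for w = 2*pi
v = expm(M * t) @ v0               # cost polynomial in dim(algebra) = 3,
                                   # independent of the 2^n Hilbert space
```

    The 3x3 matrix exponential replaces a simulation in the full Hilbert space, which is the essence of the efficiency argument.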

  9. Implementing and assessing computational modeling in introductory mechanics

    NASA Astrophysics Data System (ADS)

    Caballero, Marcos D.; Kohlmyer, Matthew A.; Schatz, Michael F.

    2012-12-01

    Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term, 1357 students in this course solved a suite of 14 computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated with a proctored assignment involving a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation, and the implications for computational instruction in introductory science, technology, engineering, and mathematics (STEM) courses.
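
    The flavor of such computational homework can be sketched with a central-force integration, written here in plain Python with NumPy rather than VPython; the initial conditions are illustrative, not the evaluation's actual problem:

```python
import numpy as np

# Euler-Cromer integration of motion under an inverse-square central force.
GM = 1.0                          # gravitational parameter (assumed units)
r = np.array([1.0, 0.0])          # initial position
v = np.array([0.0, 1.0])          # initial velocity -> circular orbit here
dt = 1e-4

for _ in range(10_000):           # integrate to t = 1
    a = -GM * r / np.linalg.norm(r) ** 3   # acceleration from the central force
    v = v + a * dt                # Euler-Cromer: update velocity first...
    r = r + v * dt                # ...then position, using the new velocity
```

    For these initial conditions the orbit is circular, so the radius should stay close to 1 throughout the integration, a quick self-check of the kind students can apply to their own programs.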

  10. Computer Aided Modeling and Post Processing with NASTRAN Analysis

    NASA Technical Reports Server (NTRS)

    Boroughs, R. R.

    1984-01-01

    Computer aided engineering systems are invaluable tools in performing NASTRAN finite element analysis. These techniques are implemented in both the pre-processing and post-processing phases of the NASTRAN analysis. The finite element model development, or pre-processing phase, was automated with a computer aided modeling program called Supertab, and the review and interpretation of the results of the NASTRAN analysis, or post-processing phase, was automated with a computer aided plotting program called Output Display. An intermediate program, Nasplot, which was developed in-house, has also helped to cut down on the model checkout time and reduce errors in the model. An interface has been established between the finite element computer aided engineering system and the Learjet computer aided design system whereby data can be transferred back and forth between the two. These systems have significantly improved productivity and the ability to perform NASTRAN analysis in response to product development requests.

  11. Ground Motion Models and Computer Techniques

    DTIC Science & Technology

    1972-04-01

    tectonic stress-strain distributions induced by changing the pore water pressure. A general computer subroutine (TAMEOS) is described which … interactions, material phase changes, and dependence of strength parameters on the thermodynamic state. This report describes improved techniques … It is shown in Ref. 24 that the rate of change of compression …

  12. Computer Modeling for Optical Waveguide Sensors.

    DTIC Science & Technology

    1987-12-15

    Optical waveguide sensors … reflection. The resultant probe beam transmission may be plotted as a function of changes in the refractive index of the surrounding fluid medium. … all angles of incidence about the critical angle θcr. It should be noted that N in equation (3) is a function of θ …

  13. The Effects of Teacher versus Computer Reading Models.

    ERIC Educational Resources Information Center

    Dawson, Leisa; Venn, Martha L.; Gunter, Philip L.

    2000-01-01

    A study of the effects of three conditions (no model, a teacher-presented reading model, and a computer-presented reading model) on the reading of four students with emotional or behavioral disorders found that the teacher model resulted in the greatest number of words read correctly per minute and the greatest percentage of words read correctly. (Contains…

  14. Operation of the computer model for microenvironment atomic oxygen exposure

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.

  15. A stirling engine computer model for performance calculations

    NASA Technical Reports Server (NTRS)

    Tew, R.; Jefferies, K.; Miao, D.

    1978-01-01

    To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.
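
    As a zeroth-order check on such a model, the ideal Stirling cycle with a perfect regenerator can be worked by hand; the sketch below uses invented operating numbers, not the actual engine's parameters:

```python
import math

n_R = 8.314 * 0.01              # moles of working gas x gas constant (J/K), assumed
T_hot, T_cold = 900.0, 300.0    # heater / cooler temperatures (K), illustrative
ratio = 2.0                     # volume ratio V_max / V_min

Q_in = n_R * T_hot * math.log(ratio)              # heat in, isothermal expansion
W_net = n_R * (T_hot - T_cold) * math.log(ratio)  # net work per cycle
eta = W_net / Q_in              # reduces to 1 - T_cold/T_hot (Carnot)
```

    A detailed model like the one described must additionally track non-ideal effects (dead volume, imperfect regeneration, heat-transfer losses) that pull real efficiency well below this bound.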

  16. Rasch Model Analysis with the BICAL Computer Program

    DTIC Science & Technology

    1976-09-01

    Research Note 82-24, by Benjamin D. Wright and Ronald J. Mead, The University of … meet the requirements of objective measurement. The Rasch model is the mathematical formulation of any measurement situation, either physical or …

  17. Patentability aspects of computational cancer models

    NASA Astrophysics Data System (ADS)

    Lishchuk, Iryna

    2017-07-01

    Multiscale cancer models, implemented in silico, simulate tumor progression at various spatial and temporal scales. Given their innovative substance and their potential application as decision support tools in clinical practice, patenting and obtaining patent rights in cancer models seems prima facie possible. This paper inquires into the legal hurdles that cancer models need to overcome in order to be patented.

  18. A Theory-Based Computer Tutorial Model.

    ERIC Educational Resources Information Center

    Dixon, Robert C.; Clapp, Elizabeth J.

    Because of the need for models to illustrate some possible answers to practical courseware development questions, a specific, three-section model incorporating the Corrective Feedback Paradigm (PCP) is advanced for applying theory to courseware. The model is reconstructed feature-by-feature against a framework of a hypothetical, one-to-one,…

  19. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    PubMed

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  20. Computational technology of multiscale modeling the gas flows in microchannels

    NASA Astrophysics Data System (ADS)

    Podryga, V. O.

    2016-11-01

    The work is devoted to modeling gas mixture flows in engineering microchannels under conditions involving many scales of the computational domain. A computational technology using the multiscale approach, combining macro- and microscopic models, is presented. At the macrolevel, the nature of the flow and the external influence on it are considered; the system of quasigasdynamic equations is selected as the model. At the microlevel, the gasdynamic parameters are corrected and the boundary conditions are determined; Newton's equations and the molecular dynamics method are selected as the numerical model. Different algorithm types used to implement the multiscale modeling are considered. Results of the model problems for separate stages are given.

  1. Establishing a Cloud Computing Success Model for Hospitals in Taiwan.

    PubMed

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  2. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescope.

  3. Computational Modeling of NEXT 2000-Hour Wear Test Results

    NASA Technical Reports Server (NTRS)

    Malone, Shane P.

    2004-01-01

    Ion optics computational models are invaluable tools for the design of ion optics systems. In this study, a new computational model developed by an outside vendor for NASA Glenn Research Center (GRC) is presented. This model is a gun code which has been modified to model the plasma sheaths both upstream and downstream of the ion optics. The model handles multiple species (e.g. singly and doubly-charged ions) and includes a charge-exchange model for erosion estimates. The model uses commercially available solid design and meshing software, allowing high flexibility in ion optics geometric configurations. This computational model is compared to experimental results from the NASA Evolutionary Xenon Thruster (NEXT) 2000-hour wear test, including over-focusing along the edge apertures, pit-and-groove erosion due to charge exchange, and beamlet distortion at the edge of the hole pattern.

  4. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    NASA Astrophysics Data System (ADS)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium, and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data, and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models, by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a developed package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The studied area comprises the Federal District of Brazil, with ~6000 km², wavy relief, heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the studied area show the local geoid model computed by the GRAVTool package (Figure), using 1377 terrestrial gravity data, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 randomly spaced points where the geoid was computed by the geometrical leveling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
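
    The remove-compute-restore chain itself is simple bookkeeping around the hard step (the Stokes integration). A schematic with synthetic numbers follows; all values are invented, and a stand-in linear operator takes the place of Stokes's integral:

```python
import numpy as np

g_obs = np.array([25.0, 30.0, 18.0])   # observed gravity anomalies (mGal)
g_ggm = np.array([20.0, 24.0, 15.0])   # long wavelengths from a global model
g_rtm = np.array([3.0, 4.0, 1.0])      # short wavelengths from terrain data

# Remove: keep only the residual (medium-wavelength) signal
g_res = g_obs - g_ggm - g_rtm

# Compute: residual geoid from residual anomalies (stand-in for Stokes)
N_res = 0.01 * g_res

# Restore: add back the model and terrain contributions to the geoid (m)
N_ggm = np.array([15.2, 15.3, 15.1])
N_rtm = np.array([0.05, 0.08, 0.02])
N = N_ggm + N_res + N_rtm
```

    The same three-step structure holds however the residual geoid is actually computed; only the middle step carries the numerical difficulty.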

  5. Computational Electromagnetic Modeling of SansEC(Trade Mark) Sensors

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.; Dudley, Kenneth L.; Szatkowski, George N.

    2011-01-01

    This paper describes the preliminary effort to apply computational design tools to aid in the development of an electromagnetic SansEC resonant sensor composite materials damage detection system. The computational methods and models employed on this research problem will evolve in complexity over time and will lead to the development of new computational methods and experimental sensor systems that demonstrate the capability to detect, diagnose, and monitor the damage of composite materials and structures on aerospace vehicles.

  6. Dynamic Stall Computations Using a Zonal Navier-Stokes Model

    DTIC Science & Technology

    1988-06-01

    … computer and is used to calculate the flow field about a NACA 0012 airfoil oscillating in pitch. Surface pressure distributions and integrated lift, pitching moment, and drag coefficients versus angle of attack are compared to existing experimental data for four cases and existing computational …

  7. Computational Psychometrics for Modeling System Dynamics during Stressful Disasters.

    PubMed

    Cipresso, Pietro; Bessi, Alessandro; Colombo, Desirée; Pedroli, Elisa; Riva, Giuseppe

    2017-01-01

    Disasters can be very stressful events. However, computational models of stress require data that might be very difficult to collect during disasters. Moreover, personal experiences are not repeatable, so it is not possible to collect bottom-up information when building a coherent model. To overcome these problems, we propose the use of computational models and virtual reality integration to recreate disaster situations, while examining possible dynamics in order to understand human behavior and relative consequences. By providing realistic parameters associated with disaster situations, computational scientists can work more closely with emergency responders to improve the quality of interventions in the future.

  8. A qualitative model for computer-assisted instruction in cardiology.

    PubMed Central

    Julen, N.; Siregar, P.; Sinteff, J. P.; Le Beux, P.

    1998-01-01

    CARDIOLAB is an interactive computational framework dedicated to teaching and computer-aided diagnosis in cardiology. The framework embodies models that simulate the heart's electrical activity. They constitute the core of a Computer-Assisted Instruction (CAI) program intended to teach, in a multimedia environment, the concepts underlying rhythmic disorders and cardiac diseases. The framework includes a qualitative model (QM) which is described in this paper. During simulation using QM, dynamic sequences representing impulse formation and conduction processes are produced along with the corresponding qualitative descriptions. The corresponding electrocardiogram (ECG) and ladder diagram are also produced, and thus both qualitative notions and quantitative facts can be taught via the model. We discuss how qualitative models in particular, and computational models in general, can enhance the teaching capability of CAI programs. PMID:9929258

  9. Multiscale Modeling in Computational Biomechanics: Determining Computational Priorities and Addressing Current Challenges

    SciTech Connect

    Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.; Erdemir, Ahmet; Guess, Trent; Reinbolt, Jeff

    2009-05-01

    In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.

  10. Modeling Trait Anxiety: From Computational Processes to Personality

    PubMed Central

    Raymond, James G.; Steele, J. Douglas; Seriès, Peggy

    2017-01-01

    Computational methods are increasingly being applied to the study of psychiatric disorders. Often, this involves fitting models to the behavior of individuals with subclinical character traits that are known vulnerability factors for the development of psychiatric conditions. Anxiety disorders can be examined with reference to the behavior of individuals high in “trait” anxiety, which is a known vulnerability factor for the development of anxiety and mood disorders. However, it is not clear how this self-report measure relates to neural and behavioral processes captured by computational models. This paper reviews emerging computational approaches to the study of trait anxiety, specifying how interacting processes susceptible to analysis using computational models could drive a tendency to experience frequent anxious states and promote vulnerability to the development of clinical disorders. Existing computational studies are described in the light of this perspective and appropriate targets for future studies are discussed. PMID:28167920

  11. Model identification in computational stochastic dynamics using experimental modal data

    NASA Astrophysics Data System (ADS)

    Batou, A.; Soize, C.; Audebert, S.

    2015-01-01

    This paper deals with the identification of a stochastic computational model using experimental eigenfrequencies and mode shapes. In the presence of randomness, it is difficult to construct a one-to-one correspondence between the results provided by the stochastic computational model and the experimental data because of the random modes crossing and veering phenomena that may occur from one realization to another one. In this paper, this correspondence is constructed by introducing an adapted transformation for the computed modal quantities. Then the transformed computed modal quantities can be compared with the experimental data in order to identify the parameters of the stochastic computational model. The methodology is applied to a booster pump of thermal units for which experimental modal data have been measured on several sites.
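
    One standard way to build such a correspondence (not necessarily the adapted transformation this paper introduces) is Modal Assurance Criterion (MAC) pairing, which matches computed to experimental mode shapes even when crossing or veering reorders them. The shapes below are invented two-DOF examples:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two real mode-shape vectors."""
    return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

exp_modes = np.array([[1.0, 0.0], [0.0, 1.0]])     # experimental shapes (rows)
com_modes = np.array([[0.1, 0.99], [0.98, 0.05]])  # computed shapes, swapped order

# Pair each experimental mode with the computed mode of highest MAC
pairing = [int(np.argmax([mac(e, c) for c in com_modes])) for e in exp_modes]
```

    Here the pairing comes out [1, 0]: the computed modes appear in swapped order, exactly the crossing situation that breaks a naive mode-by-mode comparison.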

  12. Transient upset models in computer systems

    NASA Technical Reports Server (NTRS)

    Mason, G. M.

    1983-01-01

    Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of the Z-80 and 8085 based system.
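
    A program-flow monitor of the kind described can be sketched as a table of legal control-flow transitions checked against an observed trace; the block names and graph below are invented for illustration:

```python
# Legal basic-block transitions of the monitored program (hypothetical graph)
LEGAL = {("init", "read"), ("read", "compute"), ("compute", "write"),
         ("write", "read"), ("write", "halt")}

def first_upset(trace):
    """Return the first illegal transition in a trace, or None if clean."""
    for a, b in zip(trace, trace[1:]):
        if (a, b) not in LEGAL:
            return (a, b)
    return None

clean = first_upset(["init", "read", "compute", "write", "halt"])  # no upset
upset = first_upset(["init", "read", "write", "halt"])  # "compute" was skipped
```

    Because the monitor only consults the transition table, it can run transparently alongside the system, in the spirit of the external monitors proposed above.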

  13. Evaluating a Computational Model of Emotion

    DTIC Science & Technology

    2006-01-01

    psychological (although this may change with the rise of neuroscience). Simulation-driven models vary considerably, from simplistic approaches that require … functions. Some have tried to faithfully model what is known about the neuroscience of emotion to give better insight into these processes. For … "Lifelike Pedagogical Agents for Mixed-Initiative Problem Solving in Constructivist Learning Environments," User Modeling and User-Adapted Instruction, vol …

  14. Super-Micro Computer Weather Prediction Model

    DTIC Science & Technology

    1990-06-01

    Contents include: the model equations; grid domain and horizontal nesting; time integration and outer lateral boundary condition; coupling of the model with the …; eddy diffusion sensitivity tests; the domain for prototype testing; and a comparison of the boundary-layer parameterizations with the … Comparisons including radiation calculations, with other boundary-layer work, will be presented in section 5, and the report concludes with section 6.

  15. Computational modeling in cognitive science: a manifesto for change.

    PubMed

    Addyman, Caspar; French, Robert M

    2012-07-01

    Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes models almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models are accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals. Copyright © 2012 Cognitive Science Society, Inc.

  16. Overview of Computer Simulation Modeling Approaches and Methods

    Treesearch

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  17. Computer Simulation (Microcultures): An Effective Model for Multicultural Education.

    ERIC Educational Resources Information Center

    Nelson, Jorge O.

    This paper presents a rationale for using high-fidelity computer simulation in planning for and implementing effective multicultural education strategies. Using computer simulation, educators can begin to understand and plan for the concept of cultural sensitivity in delivering instruction. The model promises to emphasize teachers' understanding…

  18. A model for computing at the SSC (Superconducting Super Collider)

    SciTech Connect

    Baden, D. (Dept. of Physics); Grossman, R. (Lab. for Advanced Computing)

    1990-06-01

    High energy physics experiments at the Superconducting Super Collider (SSC) will show a substantial increase in complexity and cost over existing forefront experiments, and computing needs may no longer be met via simple extrapolations from the previous experiments. We propose a model for computing at the SSC based on technologies common in private industry involving both hardware and software. 11 refs., 1 fig.

  19. Computer Mediated Social Justice: A New Model for Educators.

    ERIC Educational Resources Information Center

    Tettegah, Sharon

    2002-01-01

    Introduces a new model for analyzing teachers' conversations in computer-mediated communication (CMC) based on information from Bakhtin (1981), Freire (1993), social identity theory, psychological capital, cultural consciousness, and CMC theoretical frameworks. Considers CMC and human-computer interaction (HCI) to address cultural differences that…

  20. Computational Morphodynamics: A modeling framework to understand plant growth

    PubMed Central

    Chickarmane, Vijay; Roeder, Adrienne H.K.; Tarr, Paul T.; Cunha, Alexandre; Tobin, Cory; Meyerowitz, Elliot M.

    2014-01-01

    Computational morphodynamics utilizes computer modeling to understand the development of living organisms over space and time. Results from biological experiments are used to construct accurate and predictive models of growth. These models are then used to make novel predictions that provide further insight into the processes in question, and these predictions can be tested experimentally to either confirm or rule out the validity of the computational models. This review highlights two fundamental issues: (1) models should span and integrate single-cell behavior with tissue development, and (2) the feedback between the mechanics of growth and chemical or molecular signaling must be understood. We review different approaches to modeling plant growth and discuss a variety of model types that can be implemented, with the aim of demonstrating how this methodology can be used to explore the morphodynamics of plant development. PMID:20192756

  1. Several Computational Opportunities and Challenges Associated with Climate Change Modeling

    SciTech Connect

    Wang, Dali; Post, Wilfred M; Wilson, Bruce E

    2010-01-01

    One of the key factors in the improved understanding of climate science is the development and improvement of high-fidelity climate models. These models are critical for projections of future climate scenarios, as well as for highlighting the areas where further measurement and experimentation are needed for knowledge improvement. In this paper, we focus on several computing issues associated with climate change modeling. First, we review a fully coupled global simulation and a nested regional climate model to demonstrate key design components, and then explain the underlying restrictions associated with the temporal and spatial scales of climate change modeling. We then discuss the role of high-end computers in climate change science. Finally, we explain the importance of fostering regional, integrated climate impact analysis. Although we discuss the computational challenges associated with climate change modeling, we hope these considerations will also be beneficial to many other modeling research programs involving multiscale system dynamics.

  2. Cancer Evolution: Mathematical Models and Computational Inference

    PubMed Central

    Beerenwinkel, Niko; Schwarz, Roland F.; Gerstung, Moritz; Markowetz, Florian

    2015-01-01

    Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy. PMID:25293804

  3. Computer modeling of tactical high frequency antennas

    NASA Astrophysics Data System (ADS)

    Gregory, Bobby G., Jr.

    1992-06-01

    The purpose of this thesis was to compare the performance of three tactical high frequency antennas to be used as possible replacements for the Tactical Data Communications Central (TDCC) antennas. The antennas were modeled using the Numerical Electromagnetics Code, Version 3 (NEC3), and the Eyring Low Profile and Buried Antenna Modeling Program (PAT7) for several different frequencies and ground conditions. Performance was evaluated by comparing gain at the desired takeoff angles, the voltage standing wave ratio of each antenna, and its omnidirectional capability. The buried antenna models, the ELPA-302 and the horizontal dipole, were most effective when employed over poor ground conditions. The best performance under all conditions tested was demonstrated by the HT-20T. Each of these antennas has tactical advantages and disadvantages and can optimize communications under certain conditions; the selection of the best antenna is situation dependent. An experimental test of these models is recommended to verify the modeling results.
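
    The voltage standing wave ratio used in the comparison above follows directly from an antenna's input impedance. A minimal sketch (not from the thesis itself; the 50-ohm feed line and the impedance values are illustrative assumptions):

```python
# VSWR of an antenna from its complex input impedance, one of the metrics used
# to compare the candidate antennas. Impedance values below are hypothetical.

def vswr(z_antenna: complex, z_line: float = 50.0) -> float:
    """Voltage standing wave ratio on a line of characteristic impedance z_line."""
    gamma = abs((z_antenna - z_line) / (z_antenna + z_line))  # reflection coefficient
    return (1 + gamma) / (1 - gamma)

# A perfectly matched antenna has VSWR = 1; mismatch raises it.
print(vswr(50 + 0j))              # 1.0
print(round(vswr(75 + 25j), 2))
```

    A lower VSWR at the operating frequency means less power reflected back toward the transmitter, which is why it appears alongside gain in the evaluation.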

  4. Computational modeling and engineering in pediatric and congenital heart disease

    PubMed Central

    Marsden, Alison L.; Feinstein, Jeffrey A.

    2015-01-01

    Purpose of review Recent methodological advances in computational simulations are enabling increasingly realistic simulations of hemodynamics and physiology, driving increased clinical utility. We review recent developments in the use of computational simulations in pediatric and congenital heart disease, describe the clinical impact of modeling in single-ventricle patients, and provide an overview of emerging areas. Recent findings Multiscale modeling combining patient-specific hemodynamics with reduced-order (i.e., mathematically and computationally simplified) circulatory models has become the de facto standard for modeling local hemodynamics and “global” circulatory physiology. We review recent advances that have enabled faster solutions, discuss new methods (e.g., fluid-structure interaction and uncertainty quantification) that lend realism both computationally and clinically to results, highlight novel computationally derived surgical methods for single-ventricle patients, and discuss areas in which modeling has begun to exert its influence, including Kawasaki disease, fetal circulation, tetralogy of Fallot (and the pulmonary tree), and circulatory support. Summary Computational modeling is emerging as a crucial tool for clinical decision-making and for the evaluation of novel surgical methods and interventions in pediatric cardiology and beyond. Continued development of modeling methods, with an eye towards clinical needs, will enable clinical adoption in a wide range of pediatric and congenital heart diseases. PMID:26262579

  5. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
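
    One of the two approaches emphasized above, the proper orthogonal decomposition, can be sketched in a few lines. This is a generic illustration on synthetic snapshot data, not the paper's aeroelastic application:

```python
import numpy as np

# Proper orthogonal decomposition (POD) of a snapshot matrix. The snapshots
# here are synthetic: 100 spatial points, 20 snapshots lying (plus tiny noise)
# in a 2-D subspace, so two modes should capture nearly all of the "energy".

rng = np.random.default_rng(0)
basis = rng.standard_normal((100, 2))
coeffs = rng.standard_normal((2, 20))
snapshots = basis @ coeffs + 1e-6 * rng.standard_normal((100, 20))

# POD modes are the left singular vectors; squared singular values rank energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(np.round(energy[:3], 6))

# Reduced-order reconstruction: project onto the leading r modes
r = 2
reconstruction = U[:, :r] @ (U[:, :r].T @ snapshots)
rel_error = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(rel_error)
```

    The computational savings come from evolving only the r modal coefficients instead of the full state, after the one-time expense of building the basis.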

  6. An analysis of symbolic linguistic computing models in decision making

    NASA Astrophysics Data System (ADS)

    Rodríguez, Rosa M.; Martínez, Luis

    2013-01-01

    It is common that experts involved in complex real-world decision problems use natural language to express their knowledge in uncertain frameworks. Language is inherently vague; hence, probabilistic decision models are not very suitable in such cases. Therefore, other tools such as fuzzy logic and fuzzy linguistic approaches have been successfully used to model and manage such vagueness. The use of linguistic information implies operating on that type of information, i.e., processes of computing with words (CWW). Different schemes have been proposed to deal with those processes, and diverse symbolic linguistic computing models have been introduced to accomplish the linguistic computations. In this paper, we overview the relationship between decision making and CWW, and focus on the symbolic linguistic computing models that have been widely used in linguistic decision making, in order to analyse whether all of them can be considered part of the CWW paradigm.
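
    One widely used symbolic linguistic computing model of the kind surveyed here is the 2-tuple linguistic representation, which pairs a linguistic term with a numeric symbolic translation so that aggregation loses no information. A minimal sketch (the term set and ratings are illustrative, and this is not necessarily the exact formulation analysed in the paper):

```python
# Sketch of the 2-tuple linguistic representation model: a value beta on the
# term-index scale is expressed as (closest term, residual translation).

TERMS = ["none", "low", "medium", "high", "perfect"]  # s_0 .. s_4

def to_two_tuple(beta: float) -> tuple[str, float]:
    """Delta: map beta in [0, len(TERMS)-1] to (term, symbolic translation)."""
    i = min(round(beta), len(TERMS) - 1)
    return TERMS[i], round(beta - i, 4)

def aggregate(indices: list[int]) -> tuple[str, float]:
    """Aggregate assessments by averaging term indices, then apply Delta."""
    return to_two_tuple(sum(indices) / len(indices))

# Three experts rate an alternative as low (1), medium (2), high (3)
print(aggregate([1, 2, 3]))    # ('medium', 0.0)
print(aggregate([2, 3, 3]))
```

    The residual (e.g. -0.3333 in the second case) is what keeps the symbolic computation exact instead of rounding away information at each step.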

  7. Computational Modeling, Formal Analysis, and Tools for Systems Biology

    PubMed Central

    Bartocci, Ezio; Lió, Pietro

    2016-01-01

    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of developments in theoretical computer science have enabled modeling methodology to keep pace. The growing interest within systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science. PMID:26795950

  8. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  9. Computer models and output, Spartan REM: Appendix B

    NASA Technical Reports Server (NTRS)

    Marlowe, D. S.; West, E. J.

    1984-01-01

    A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.

  10. COMPUTATION MODELING OF TCDD DISRUPTION OF B CELL TERMINAL DIFFERENTIATION

    EPA Science Inventory

    In this study, we established a computational model describing the molecular circuit underlying B cell terminal differentiation and how TCDD may affect this process by impinging upon various molecular targets.

  12. Enhanced absorption cycle computer model. Final report

    SciTech Connect

    Grossman, G.; Wilk, M.

    1993-09-01

    Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of the cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H{sub 2}O triple-effect cycles, LiCl-H{sub 2}O solar-powered open absorption cycles, and NH{sub 3}-H{sub 2}O single-effect and generator-absorber heat exchange cycles. An appendix contains the User's Manual.
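
    The modular structure described above - unit subroutines assembled into a cycle and iterated to a converged set of state points - can be sketched as follows. The two toy components and all enthalpy values are invented placeholders, not the report's working-fluid property routines:

```python
# Minimal sketch of a modular steady-state cycle solver: each "unit subroutine"
# imposes its component's energy balance on a shared state dictionary, and the
# cycle is closed by fixed-point iteration. Numbers are illustrative only.

def generator(state):   # heat input drives desorption of refrigerant vapor
    return {"q_gen": state["m_ref"] * (state["h_vap_out"] - state["h_sol_in"])}

def evaporator(state):  # useful cooling effect
    return {"q_evap": state["m_ref"] * (state["h_vap_sat"] - state["h_liq_in"])}

def solve_cycle(units, state, tol=1e-9, max_iter=100):
    """Call every unit subroutine repeatedly until the state stops changing."""
    for _ in range(max_iter):
        old = dict(state)
        for unit in units:
            state.update(unit(state))
        if all(abs(state[k] - old.get(k, 0.0)) < tol for k in state):
            return state
    raise RuntimeError("cycle did not converge")

state = {"m_ref": 0.01, "h_vap_out": 2650.0, "h_sol_in": 350.0,
         "h_vap_sat": 2510.0, "h_liq_in": 420.0}  # kJ/kg and kg/s, illustrative
state = solve_cycle([generator, evaporator], state)
cop = state["q_evap"] / state["q_gen"]
print(round(cop, 3))
```

    Adding a component means adding one more unit function to the list, which mirrors the flexibility the report attributes to its unit/property subroutine design.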

  13. Computer modeling of high intensity solar cells

    NASA Astrophysics Data System (ADS)

    Gray, J. L.; Lundstrom, M. S.; Schwartz, R. J.

    1987-01-01

    The purpose of this program is to provide general analytic support to Sandia National Laboratories' effort to develop high efficiency, high concentration solar cells. This report covers work performed between November 5, 1984, and December 31, 1985, and includes reprints of three papers presented at the 18th IEEE Photovoltaic Specialists' Conference. In the first paper, the factors that presently prevent achieving the predicted theoretical efficiencies (in excess of 30% at concentration) are examined. It is demonstrated, by two-dimensional computer simulations, that these efficiencies might be obtained by improved light trapping techniques and by fabrication of low resistance heteroface contacts. The second paper examines the Rose-Weaver lifetime and surface recombination velocity measurement technique. It is shown that very small uncertainties in the measured quantities lead to large uncertainties in the computed lifetime and surface recombination velocity. This leads to radically different interpretations of how the recombination is distributed throughout the device, and therefore limits the usefulness of the measurement technique. Design options and constraints of GaAs concentrator cells are examined in the third paper, and the effectiveness of various design options is assessed. It is shown that although such design options are of little use in increasing the efficiency of heteroface cells, they can improve the efficiency of shallow junction cells so that it is comparable to that of heteroface cells. In addition, documentation describing the use of both the one- and two-dimensional silicon codes, SCAP1D and SCAP2D, as well as the one-dimensional AlGaAs solar cell simulation code, is included.

  14. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  16. A computational model for a regenerator

    NASA Technical Reports Server (NTRS)

    Gary, J.; Daney, D. E.; Radebaugh, R.

    1985-01-01

    This paper concerns a numerical model of a regenerator running at very low temperatures. The model consists of the usual three equations for a compressible fluid with an additional equation for a matrix temperature. The main difficulty with the model is the very low Mach number (approximately 1.E-3). The divergence of the velocity is not small, the pressure divergence is small, and the pressure fluctuation in time is not small. An asymptotic expansion based on the bounded derivative method of Kreiss is used to give a reduced model which eliminates acoustic waves. The velocity is then determined by a two-point boundary value problem which does not contain a time derivative. The solution obtained from the reduced system is compared with the numerical solution of the original system.
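
    The reduced model's key feature - a velocity determined by a spatial boundary-value problem with no time derivative on the velocity itself - can be illustrated with a 1-D continuity integration. This sketch assumes a prescribed density profile and pressurization rate; all numbers are illustrative and unrelated to the paper's regenerator data:

```python
# In the low-Mach limit, continuity d(rho*u)/dx = -d(rho)/dt determines the
# mass flux (and hence u) by integration in space from one boundary, with no
# acoustic time stepping. Profiles below are illustrative.

N = 101
L = 0.1                        # regenerator length, m (illustrative)
dx = L / (N - 1)
rho = [10.0 + 2.0 * i * dx / L for i in range(N)]   # density profile, kg/m^3
drho_dt = [50.0] * N           # prescribed pressurization rate, kg/(m^3 s)
u_inlet = 1.0                  # boundary condition at x = 0, m/s

# integrate the mass flux: (rho*u)(x) = (rho*u)(0) - integral of drho/dt dx
flux = [rho[0] * u_inlet]
for i in range(1, N):
    flux.append(flux[-1] - 0.5 * (drho_dt[i - 1] + drho_dt[i]) * dx)
u = [f / r for f, r in zip(flux, rho)]
print(round(u[-1], 4))
```

    Because the velocity follows from a spatial integration with boundary data, the acoustic time-step restriction of the full compressible system disappears, which is the point of the bounded-derivative reduction.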

  17. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Miichele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.

  18. Supersonic jet and crossflow interaction: Computational modeling

    NASA Astrophysics Data System (ADS)

    Hassan, Ez; Boles, John; Aono, Hikaru; Davis, Douglas; Shyy, Wei

    2013-02-01

    The supersonic jet-in-crossflow problem which involves shocks, turbulent mixing, and large-scale vortical structures, requires special treatment for turbulence to obtain accurate solutions. Different turbulence modeling techniques are reviewed and compared in terms of their performance in predicting results consistent with the experimental data. Reynolds-averaged Navier-Stokes (RANS) models are limited in prediction of fuel structure due to their inability to accurately capture unsteadiness in the flow. Large eddy simulation (LES) is not yet practical due to prohibitively large grid requirement near the wall. Hybrid RANS/LES can offer reasonable compromise between accuracy and efficiency. The hybrid models are based on various approaches such as explicit blending of RANS and LES, detached eddy simulation (DES), and filter-based multi-scale models. In particular, they can be used to evaluate the turbulent Schmidt number modeling techniques used in jet-in-crossflow simulations. Specifically, an adaptive approach can be devised by utilizing the information obtained from the resolved field to help assign the value of turbulent Schmidt number in the sub-filter field. The adaptive approach combined with the multi-scale model improves the results especially when highly refined grids are needed to resolve small structures involved in the mixing process.

  19. Computational modelling of the impact of AIDS on business.

    PubMed

    Matthews, Alan P

    2007-07-01

    This paper gives an overview of computational modelling of the impact of AIDS on business in South Africa, with a detailed description of the AIDS Projection Model (APM) for companies, developed by the author, and suggestions for further work. Computational modelling of the impact of AIDS on business in South Africa requires modelling both the epidemic as a whole and its impact on a company. The paper introduces epidemiological modelling with reference to the Actuarial Society of South Africa (ASSA) model, the most widely used such model for South Africa. The APM produces projections of HIV prevalence, new infections, and AIDS mortality for a company, based on anonymous HIV testing of company employees and on projections from the ASSA model. A smoothed statistical model of the prevalence test data is computed, and the ASSA model projection for each category of employees is then adjusted so that it matches the measured prevalence in the year of testing. Further techniques that could be developed are microsimulation (representing individuals in the computer), scenario planning for testing strategies, and models of the business environment, such as models of entire sectors and the mapping of HIV prevalence in time and space based on workplace and community data.
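
    The APM's calibration step - rescaling the ASSA projection for each employee category so that it matches the prevalence measured in the company's testing year - can be sketched as follows. All figures are invented for illustration and are not from the paper:

```python
# Rescale category-level epidemic projections to match measured prevalence in
# a single testing year, then carry the adjustment forward. Numbers invented.

assa_projection = {            # projected HIV prevalence by category and year
    "skilled":   {2004: 0.10, 2005: 0.11, 2006: 0.12},
    "unskilled": {2004: 0.20, 2005: 0.21, 2006: 0.22},
}
measured_2004 = {"skilled": 0.08, "unskilled": 0.24}   # anonymous testing

def calibrate(projection, measured, test_year):
    """Scale each category's projection to match measured prevalence in test_year."""
    adjusted = {}
    for cat, series in projection.items():
        factor = measured[cat] / series[test_year]
        adjusted[cat] = {yr: p * factor for yr, p in series.items()}
    return adjusted

adjusted = calibrate(assa_projection, measured_2004, 2004)
print(round(adjusted["skilled"][2006], 4))     # 0.096
print(round(adjusted["unskilled"][2006], 4))   # 0.264
```

    A single multiplicative factor per category is the simplest possible anchoring of a population-level projection to workplace data; the paper's smoothed statistical model refines this idea.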

  20. Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis

    ERIC Educational Resources Information Center

    Young, Cristobal; Holsteen, Katherine

    2017-01-01

    Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…

  1. Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.

    ERIC Educational Resources Information Center

    Baker, Richard

    A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…

  2. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communication required for a global summation across the processors (which has sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendable to recurrent networks.
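
    The single communication step in the mapping above, a global summation with O(log P) growth, is a tree reduction. A minimal sketch simulating P processors in plain Python (the gradient partials are placeholders, not the paper's benchmark workload):

```python
# Tree (pairwise) reduction: summing one value per processor in log2(P)
# communication rounds, the only collective step in the SIMD mapping.

def tree_reduce(values):
    """Pairwise summation; returns (total, number of communication rounds)."""
    rounds = 0
    while len(values) > 1:
        paired = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:          # odd processor count: last value carries over
            paired.append(values[-1])
        values = paired
        rounds += 1
    return values[0], rounds

partials = [float(i) for i in range(64)]   # one partial gradient sum per "processor"
total, rounds = tree_reduce(partials)
print(total, rounds)    # 2016.0 6
```

    With 64 simulated processors the sum completes in 6 rounds rather than 63 sequential additions, which is the sub-linear growth the abstract refers to.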

  3. Actors: A Model of Concurrent Computation in Distributed Systems.

    DTIC Science & Technology

    1985-06-01

    Cover-page text, garbled in the original record: AD-A157 917, Technical Report 844, "Actors: A Model of Concurrent Computation in Distributed Systems," Gul A. Agha, MIT Artificial Intelligence Laboratory. This document has been approved for public release and sale; its distribution is unlimited.

  4. Revised OPTSA Model. Volume 2. Computer Program Documentation

    DTIC Science & Technology

    1975-06-01

    Cover-page text, garbled in the original record: report documentation page, Paper P-1111, "Revised OPTSA Model, Volume 2: Computer Program Documentation," Lowell Bruce Anderson, Jerome Bracken, Eleanor L. Schwartz.

  5. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed in order to recommend the most desirable models, or their components, for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  6. Computational model of miniature pulsating heat pipes

    SciTech Connect

    Martinez, Mario J.; Givler, Richard C.

    2013-01-01

    The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground plane (TGP): a planar device that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat-plate pulsating heat pipes (e.g., dynamic bubble nucleation, evaporation, and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio, and orientation.

  7. Computational quantum chemistry and adaptive ligand modeling in mechanistic QSAR.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2010-10-01

    Drugs are adaptive molecules. They realize this peculiarity by generating different ensembles of prototropic forms and conformers that depend on the environment. Among the impressive amount of available computational drug discovery technologies, quantitative structure-activity relationship approaches that rely on computational quantum chemistry descriptors are the most appropriate to model adaptive drugs. Indeed, computational quantum chemistry descriptors are able to account for the variation of the intramolecular interactions of the training compounds, which reflect their adaptive intermolecular interaction propensities. This enables the development of causative, interpretive and reasonably predictive quantitative structure-activity relationship models, and, hence, sound chemical information finalized to drug design and discovery.

  8. Practical Use of Computationally Frugal Model Analysis Methods.

    PubMed

    Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2016-03-01

    Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1,000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and obtain greater scientific insight from ongoing and future modeling efforts.
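
    The "few dozen parallelizable model runs" the abstract mentions is characteristic of one-at-a-time local sensitivity analysis, which needs only n + 1 runs for n parameters. A minimal sketch, using a toy stand-in for the expensive model (the function and its parameter values are hypothetical, not from the paper):

```python
import numpy as np

def model(params):
    """Hypothetical stand-in for an expensive environmental model:
    maps a parameter vector to a scalar prediction."""
    k, s, b = params
    return k * np.exp(-s) + b

def scaled_sensitivities(model, params, rel_step=0.01):
    """One-at-a-time local sensitivity analysis: n + 1 model runs
    for n parameters, in the spirit of frugal methods."""
    params = np.asarray(params, dtype=float)
    base = model(params)
    sens = np.empty(len(params))
    for i, p in enumerate(params):
        perturbed = params.copy()
        step = rel_step * p if p != 0 else rel_step
        perturbed[i] = p + step
        # dimensionless (scaled) sensitivity: (dy/dp) * (p / y)
        sens[i] = (model(perturbed) - base) / step * (p / base)
    return sens

sens = scaled_sensitivities(model, [2.0, 0.5, 1.0])
print(sens)  # largest |value| flags the most influential parameter
```

    Because each perturbed run is independent, the loop parallelizes trivially when the model is genuinely expensive.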

  9. Computational Modeling and Simulation of Developmental ...

    EPA Pesticide Factsheets

    SYNOPSIS: The question of how tissues and organs are shaped during development is crucial for understanding human birth defects. Data from high-throughput screening assays on human stem cells may be utilized to predict developmental toxicity with reasonable accuracy. Other types of models are necessary, however, for mechanism-specific analysis because embryogenesis requires precise timing and control. Agent-based modeling and simulation (ABMS) is an approach to virtually reconstruct these dynamics, cell-by-cell and interaction-by-interaction. Using ABMS, HTS lesions from ToxCast can be integrated with patterning systems heuristically to propagate key events. This presentation to FDA-CFSAN will update progress on the applications of in silico modeling tools and approaches for assessing developmental toxicity.

  10. Computational Models for Mechanics of Morphogenesis

    PubMed Central

    Wyczalkowski, Matthew A.; Chen, Zi; Filas, Benjamen A.; Varner, Victor D.; Taber, Larry A.

    2012-01-01

    In the developing embryo, tissues differentiate, deform, and move in an orchestrated manner to generate various biological shapes driven by the complex interplay between genetic, epigenetic, and environmental factors. Mechanics plays a key role in regulating and controlling morphogenesis, and quantitative models help us understand how various mechanical forces combine to shape the embryo. Models allow for the quantitative, unbiased testing of physical mechanisms, and when used appropriately, can motivate new experimental directions. This knowledge benefits biomedical researchers who aim to prevent and treat congenital malformations, as well as engineers working to create replacement tissues in the laboratory. In this review, we first give an overview of fundamental mechanical theories for morphogenesis, and then focus on models for specific processes, including pattern formation, gastrulation, neurulation, organogenesis, and wound healing. The role of mechanical feedback in development is also discussed. Finally, some perspectives are given on the emerging challenges in morphomechanics and mechanobiology. PMID:22692887

  11. Trusting explanatory and exploratory models in computational geomorphology

    NASA Astrophysics Data System (ADS)

    Van De Wiel, Marco; Desjardins, Eric; Rousseau, Yannick; Martel, Tristan; Ashmore, Peter

    2014-05-01

    Computer simulations have become an increasingly important part of geomorphological investigation in the last decades. Simulations can be used not only to make specific predictions of the evolution of a geomorphic system (predictive modelling), but also to test theories and learn about geomorphic form and process in a timely and non-destructive way (explanatory and exploratory modelling). The latter modes of modelling can be very useful for discovering spatial and temporal patterns, developing insights in the relation between form and process, and for understanding the causal structure of the physical landscape. But before we can have any hope that these types of simulations can effectively accomplish these tasks, simulationists must make the case that their computer modelling goes beyond mere numerical computation of theoretical idealization; that geomorphic investigation through computer modelling can play a role similar to field observation or laboratory experiment. Many of these explanatory and exploratory models are reduced-complexity models which exhibit a high degree of idealization and simplification. Moreover, they are often uncalibrated and untested on real geomorphic systems. Indeed, they are often used on idealized hypothetical landscapes, and sometimes are acknowledged not to be suitable for simulation of real systems. Does it make sense then to conceive of this type of computer modelling as a form of investigation capable of providing reliable knowledge about actual geomorphological phenomena? In this analysis it is argued that the traditional notion of establishing reliability or trustworthiness of models, i.e. a confirmation of predictive ability with regards to observed data, is not applicable to explanatory or exploratory modelling. Instead, trustworthiness of these models is established through broad qualitative conformity with known system dynamics, followed by a posteriori field and laboratory testing of hypotheses generated from the modelling.

  12. Computer models to study uterine activation at labour.

    PubMed

    Sharp, G C; Saunders, P T K; Norman, J E

    2013-11-01

    Improving our understanding of the initiation of labour is a major aim of modern obstetric research, in order to better diagnose and treat pregnant women in which the process occurs abnormally. In particular, increased knowledge will help us identify the mechanisms responsible for preterm labour, the single biggest cause of neonatal morbidity and mortality. Attempts to improve our understanding of the initiation of labour have been restricted by the inaccessibility of gestational tissues to study during pregnancy and at labour, and by the lack of fully informative animal models. However, computer modelling provides an exciting new approach to overcome these restrictions and offers new insights into uterine activation during term and preterm labour. Such models could be used to test hypotheses about drugs to treat or prevent preterm labour. With further development, an effective computer model could be used by healthcare practitioners to develop personalized medicine for patients on a pregnancy-by-pregnancy basis. Very promising work is already underway to build computer models of the physiology of uterine activation and contraction. These models aim to predict changes and patterns in uterine electrical excitation during term labour. There have been far fewer attempts to build computer models of the molecular pathways driving uterine activation and there is certainly scope for further work in this area. The integration of computer models of the physiological and molecular mechanisms that initiate labour will be particularly useful.

  13. Computational Modeling for Language Acquisition: A Tutorial With Syntactic Islands.

    PubMed

    Pearl, Lisa S; Sprouse, Jon

    2015-06-01

    Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. We provide a general overview of why modeling can be a particularly informative tool and some general considerations when creating a computational acquisition model. We then review a concrete example of a computational acquisition model for complex structural knowledge referred to as syntactic islands. This includes an overview of syntactic islands knowledge, a precise definition of the acquisition task being modeled, the modeling results, and how to meaningfully interpret those results in a way that is relevant for questions about knowledge representation and the learning process. Computational modeling is a powerful tool that can be used to understand linguistic development. The general approach presented here can be used to investigate any acquisition task and any learning strategy, provided both are precisely defined.

  14. Computational social network modeling of terrorist recruitment.

    SciTech Connect

    Berry, Nina M.; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-10-01

    The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as a physical or economical institution. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.

  15. Computer modeling of electrical performance of detonators

    SciTech Connect

    Furnberg, C.M.; Peevy, G.R.; Brigham, W.P.; Lyons, G.R.

    1995-05-01

    An empirical model of detonator electrical performance which describes the resistance of the exploding bridgewire (EBW) or exploding foil initiator (EFI or slapper) as a function of energy deposition will be described. This model features many parameters that can be adjusted to obtain a close fit to experimental data. This has been demonstrated using recent experimental data taken with the cable discharge system located at Sandia National Laboratories. This paper will be a continuation of the paper entitled "Cable Discharge System for Fundamental Detonator Studies" presented at the 2nd NASA/DOD/DOE Pyrotechnic Workshop.
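
    The abstract does not give the paper's empirical functional form, so the following is a purely illustrative stand-in: adjustable parameters of an assumed resistance-versus-energy-deposition curve fit to (synthetic) data by least squares, which is the general workflow the abstract describes:

```python
import numpy as np

energy = np.linspace(0.0, 1.0, 20)       # normalized deposited energy (made up)
r0, a, c = 0.05, 0.8, 2.5                # hypothetical "true" parameters
rng = np.random.default_rng(0)
resistance = (r0 + a * energy + c * energy**2
              + rng.normal(0.0, 0.01, energy.size))  # synthetic "experimental" data

# Adjust the model parameters for a close fit to the data
# (np.polyfit returns the highest-degree coefficient first).
c2, c1, c0 = np.polyfit(energy, resistance, deg=2)
print(c0, c1, c2)  # recovers roughly (0.05, 0.8, 2.5)
```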

  16. Probabilistic computer model of optimal runway turnoffs

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.

    1985-01-01

    Landing delays are currently a problem at major air carrier airports and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category is defined. The model includes an algorithm for lateral ride comfort limits.

  17. Computer Models Simulate Fine Particle Dispersion

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  18. Computational Modeling Develops Ultra-Hard Steel

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.

  19. A Computer Model for Direct Carbonate Fuel Cells

    SciTech Connect

    Ding, J.; Patel, P.S.; Farooque, M.; Maru, H.C.

    1997-04-01

    A 3-D computer model, describing fluid flow, heat and mass transfer, and chemical and electrochemical reaction processes, has been developed for guiding the direct carbonate fuel cell (DFC) stack design. This model is able to analyze the direct internal reforming (DIR) as well as the integrated IIR (indirect internal reforming)-DIR designs. Reasonable agreements between computed and fuel cell tested results, such as flow variations, temperature distributions, cell potentials, and exhaust gas compositions as well as methane conversions, were obtained. Details of the model and comparisons of the modeling results with experimental DFC stack data are presented in the paper.

  20. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  1. Computer Model Simulates Air Pollution Over Roads

    ERIC Educational Resources Information Center

    Environmental Science and Technology, 1972

    1972-01-01

    A sophisticated modeling technique which predicts pollutant movement accurately and may aid in the design of new freeways is reported. EXPLOR (Examination of Pollution Levels of Roadways) was developed specifically to predict pollutant concentrations in a milewide corridor traversed by a roadway. (BL)

  2. A Computational Model of Spatial Development

    NASA Astrophysics Data System (ADS)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

    Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands; and a forward model of motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-centered. When given the ability of self-locomotion the robot responds allocentrically.
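
    The forward-model idea can be sketched in a few lines: predict where the object is, egocentrically, after a movement, even when it is out of view. The linear update below is an illustrative assumption, not the robot's learned network:

```python
import numpy as np

def forward_model(obj_pos, motor_cmd):
    """Toy forward model: if the robot displaces itself by motor_cmd,
    the object's egocentric position shifts by the opposite amount."""
    return obj_pos - motor_cmd

# Mental tracking: the object leaves the field of view, but its
# position estimate is still updated from the motor commands alone.
pos = np.array([1.0, 2.0])
for cmd in [np.array([0.5, 0.0]), np.array([0.5, 1.0])]:
    pos = forward_model(pos, cmd)
print(pos)  # egocentric estimate after self-locomotion: [0. 1.]
```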

  3. Enabling Grid Computing resources within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method by which KM3NeT users can utilize the EGI computing resources in a simulation-driven use-case.

  4. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
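
    The tutorial's own examples are in MATLAB and R; the same embarrassingly parallel pattern looks like this in Python, with each replicate an independent toy Monte Carlo run standing in for an expensive risk-model simulation:

```python
from multiprocessing import Pool
import random

def one_replicate(seed):
    """One independent replicate: a toy Monte Carlo estimate of pi,
    standing in for an expensive risk-model simulation run."""
    rng = random.Random(seed)
    n = 10_000
    hits = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n

if __name__ == "__main__":
    # Embarrassingly parallel: replicates share no state, so they map
    # directly onto a pool of worker processes.
    with Pool(processes=2) as pool:
        estimates = pool.map(one_replicate, range(16))
    print(sum(estimates) / len(estimates))  # ~3.14
```

    Seeding each replicate explicitly keeps runs reproducible regardless of which worker executes them, one of the software considerations the tutorial highlights.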

  5. Interactive computational models of particle dynamics using virtual reality

    SciTech Connect

    Canfield, T.; Diachin, D.; Freitag, L.; Heath, D.; Herzog, J.; Michels, W.

    1996-12-31

    An increasing number of industrial applications rely on computational models to reduce costs in product design, development, and testing cycles. Here, the authors discuss an interactive environment for the visualization, analysis, and modification of computational models used in industrial settings. In particular, they focus on interactively placing massless, massed, and evaporating particulate matter in computational fluid dynamics applications. They discuss the numerical model used to compute the particle pathlines in the fluid flow for display and analysis. They briefly describe the toolkits developed for vector and scalar field visualization, interactive particulate source placement, and a three-dimensional GUI. This system is currently used in two industrial applications, and they present the tools in the context of these applications. They summarize the current state of the project and offer directions for future research.

  6. Computational challenges in modeling and simulating living matter

    NASA Astrophysics Data System (ADS)

    Sena, Alexandre C.; Silva, Dilson; Marzulo, Leandro A. J.; de Castro, Maria Clicia Stelling

    2016-12-01

    Computational modeling has been successfully used to help scientists understand physical and biological phenomena. Recent technological advances allow the simulation of larger systems, with greater accuracy. However, devising those systems requires new approaches and novel architectures, such as the use of parallel programming, so that the application can run in the new high performance environments, which are often computer clusters composed of different computation devices, such as traditional CPUs, GPGPUs, Xeon Phis and even FPGAs. It is expected that scientists will take advantage of the increasing computational power to model and simulate more complex structures and even merge different models into larger and more extensive ones. This paper aims at discussing the challenges of using those devices to simulate such complex systems.

  7. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
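
    The time-marching use of Faraday's law can be sketched in a few lines; the constant current, species inventory, and single-control-volume setup below are illustrative assumptions, not the thesis model:

```python
# Faraday's law in a time-marching loop: each step converts the charge
# passed (I * dt) into moles of species consumed. All numbers are made up.
F = 96485.0       # Faraday constant, C/mol
z = 1             # electrons transferred per Na ion
I = 5.0           # assumed constant discharge current, A
dt = 60.0         # time step, s
n_na = 2.0        # mol of sodium in the (single) control volume

t = 0.0
while n_na > 0.0:
    dn = I * dt / (z * F)   # mol Na consumed this step (Faraday's law)
    n_na -= dn
    t += dt

print(f"time to exhaust sodium ≈ {t / 3600:.1f} h")
```

    A full lumped model would update energy and charge balances alongside the species inventory each step, with properties recomputed from the evolving composition.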

  8. Images as drivers of progress in cardiac computational modelling.

    PubMed

    Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A; Bishop, Martin J; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2014-08-01

    Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved.

  9. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Gregory Beylkin

    2012-03-23

    Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

  10. Instability phenomena in plasticity: Modelling and computation

    NASA Astrophysics Data System (ADS)

    Stein, E.; Steinmann, P.; Miehe, C.

    1995-12-01

    We presented aspects and results related to the broad field of strain localization with special focus on large strain elastoplastic response. Therefore, we first re-examined issues related to the classification of discontinuities and the classical description of localization with a particular emphasis on an Eulerian geometric representation. We touched on the problem of mesh objectivity and discussed results of a particular regularization method, namely the micropolar approach. Generally, regularization has to preserve ellipticity and to reflect the underlying physics. For example, ductile materials have to be modelled including viscous effects, whereas geomaterials are adequately described by the micropolar approach. Then we considered localization phenomena within solids undergoing large strain elastoplastic deformations. Here, we documented the influence of isotropic damage on the failure analysis. Next, the interesting influence of an orthotropic yield condition on the spatial orientation of localized zones has been studied. Finally, we investigated the localization condition for an algorithmic model of finite strain single crystal plasticity.

  11. Computer models for amorphous silicon hydrides

    NASA Astrophysics Data System (ADS)

    Mousseau, Normand; Lewis, Laurent J.

    1990-02-01

    A procedure for generating fully coordinated model structures appropriate to hydrogenated amorphous semiconductors is described. The hydrogen is incorporated into an amorphous matrix using a bond-switching process similar to that proposed by Wooten, Winer, and Weaire, which ensures that fourfold coordination is preserved. After each inclusion of hydrogen, the structure is relaxed using a finite-temperature Monte Carlo algorithm. The method is applied to a-Si:H at various hydrogen concentrations. The resulting model structures are found to be in excellent agreement with recent neutron-scattering measurements on a sample with 12 at. % H. Our prescription, which is essentially nonlocal, allows great flexibility and can easily be extended to related systems.
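
    The finite-temperature Monte Carlo relaxation the abstract describes typically rests on the Metropolis acceptance rule; a minimal sketch (the energy model for the amorphous network itself is omitted, and the numbers are illustrative):

```python
import math
import random

def metropolis_accept(delta_e, kT, rng):
    """Accept or reject a proposed move (e.g. a bond switch) with energy
    change delta_e at temperature kT: always keep downhill moves, keep
    uphill moves with Boltzmann probability exp(-delta_e / kT)."""
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / kT)

rng = random.Random(42)
# Uphill moves let the fourfold-coordinated network escape local traps;
# their empirical acceptance rate approaches the Boltzmann factor.
trials = 10_000
rate = sum(metropolis_accept(0.05, 0.025, rng) for _ in range(trials)) / trials
print(rate)  # close to exp(-2) ≈ 0.135
```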

  12. Hearing dummies: individualized computer models of hearing impairment.

    PubMed

    Panda, Manasa R; Lecluyse, Wendy; Tan, Christine M; Jürgens, Tim; Meddis, Ray

    2014-10-01

    Objective: Our aim was to explore the usage of individualized computer models to simulate hearing loss based on detailed psychophysical assessment and to offer hypothetical diagnoses of the underlying pathology. Individualized computer models of normal and impaired hearing were constructed and evaluated using the psychophysical data obtained from human listeners. Computer models of impaired hearing were generated to reflect the hypothesized underlying pathology (e.g. dead regions, outer hair cell dysfunction, or reductions in endocochlear potential). These models were evaluated in terms of their ability to replicate the original patient data. Auditory profiles were measured for two normal and five hearing-impaired listeners using a battery of three psychophysical tests (absolute thresholds, frequency selectivity, and compression). The individualized computer models were found to match the data. Useful fits to the impaired profiles could be obtained by changing only a single parameter in the model of normal hearing. Sometimes, however, it was necessary to include an additional dead region. The creation of individualized computer models of hearing loss can be used to simulate auditory profiles of impaired listeners and suggest hypotheses concerning the underlying peripheral pathology.

  13. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  14. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
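The information-sharing idea in the two records above, producers publishing processed telemetry to whichever consumers have registered interest, can be illustrated with a minimal in-process publish/subscribe hub. All names here are hypothetical; the actual NASA protocol is not described in enough detail in the abstract to reproduce.

```python
class TelemetryBus:
    """Minimal publish/subscribe hub, standing in for the hybrid
    client-server / peer-to-peer information-sharing protocol."""

    def __init__(self):
        self._subs = {}   # parameter name -> list of callbacks

    def subscribe(self, name, callback):
        self._subs.setdefault(name, []).append(callback)

    def publish(self, name, value):
        # Deliver the value to every subscriber of this parameter.
        for cb in self._subs.get(name, []):
            cb(name, value)

bus = TelemetryBus()
received = []
bus.subscribe("cabin_pressure", lambda n, v: received.append((n, v)))
bus.publish("cabin_pressure", 101.3)
bus.publish("unrelated", 0)          # no subscribers; silently dropped
```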

  15. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    PubMed

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2017-08-10

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples to emergency care. Also discussed is an introduction to available software modeling platforms and how to explore their use for research, along with a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
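Of the four approaches named, discrete-event simulation is the one most often applied to patient flow. A minimal single-provider queue, purely illustrative and with arbitrary rates, looks like this:

```python
import random

def simulate_ed(n_patients, arrival_rate, service_rate, seed=0):
    """Toy discrete-event simulation of a single-provider queue:
    exponential interarrival and service times; returns the mean
    patient waiting time before treatment begins."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    free_at = 0.0   # time at which the provider next becomes free
    waits = []
    for a in arrivals:
        start = max(a, free_at)
        waits.append(start - a)
        free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

mean_wait = simulate_ed(1000, arrival_rate=0.8, service_rate=1.0)
```

Real ED models add multiple servers, priority classes, and balking, but they share this event-by-event structure.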

  16. Computational load in model physics of the parallel NCAR community climate model

    SciTech Connect

    Michalakes, J.G.; Nanjundiah, R.S.

    1994-11-01

    Maintaining a balance of computational load over processors is a crucial issue in parallel computing. For efficient parallel implementation, complex codes such as climate models need to be analyzed for load imbalances. In the present study we focus on the load imbalances in the physics portion of the community climate model's (CCM2) distributed-memory parallel implementation on the Intel Touchstone DELTA computer. We note that the major source of load imbalance is the diurnal variation in the computation of solar radiation. Convective weather patterns also cause some load imbalance. Land-ocean contrast is seen to have little effect on computational load in the present version of the model.
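The diurnal imbalance described above (only daytime grid columns compute solar radiation, so per-processor physics times differ) is commonly quantified as the ratio of the busiest processor's time to the average. The metric choice below is a conventional one, not necessarily the paper's:

```python
def load_imbalance(times):
    """Ratio of the busiest processor's time to the mean time;
    1.0 means perfectly balanced, larger values mean idle processors."""
    mean = sum(times) / len(times)
    return max(times) / mean

# Hypothetical per-processor physics times (arbitrary units):
balanced = load_imbalance([4.0, 4.0, 4.0, 4.0])   # -> 1.0
skewed = load_imbalance([2.0, 2.0, 2.0, 6.0])     # -> 2.0
```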

  17. Optical Computing Based on Neuronal Models.

    DTIC Science & Technology

    1987-10-01

    Tikhonov, A.N. and V.Y. Arsenin, "Solutions of Ill-Posed Problems", Winston and Sons, Washington, D.C., 1977. 11. Poggio, T. and C. Koch, "Ill-Posed... describe a solution to this problem and to use the solution as a vehicle for pointing out the distinctive features of the neural net model approach to... ill-posedness [11]. The brain's associative memory capabilities where nearest neighbor searches are performed successfully

  18. Computational Modeling of Chemical Protective Clothing

    DTIC Science & Technology

    2003-11-19

    Integration with FLUENT: Gambit (preprocessor) grid file, CAD geometry file, case file, data file; FLUENT fabric model user interface. Fabric clothing model solves governing mass/momentum/energy equations with a volume-averaged porous media approach, extending the commercial FLUENT® software for vapor/liquid physics in fabric. Copyright © 2003 Creare Incorporated. An unpublished work. All rights reserved.

  19. Computer Modeling of Complete IC Fabrication Process.

    DTIC Science & Technology

    1984-01-01

    Venson Shaw 10. C. S. Chang 11. Elizabeth Batson 12. Richard Pinto 13. Jacques Beauduoin SPEAKERS: 1. Tayo Akinwande 2. Dimitri Antoniadis 3. Walter... "Numerical Model of Polysilicon Emitter Contacts in Bipolar Transistors," to be published, IEEE Trans. Electron Devices. [34] M. R. Pinto, R. W. Dutton... (Received PhD, Spring 1982) Balaji Swaminathan (Received PhD, Spring 1983) Len Mei, Research Associate; Michael Kump, Research Assistant; Mark Pinto, Research

  20. Computational modeling of nuclear thermal rockets

    NASA Technical Reports Server (NTRS)

    Peery, Steven D.

    1993-01-01

    The topics are presented in viewgraph form and include the following: rocket engine transient simulation (ROCETS) system; ROCETS performance simulations composed of integrated component models; ROCETS system architecture significant features; ROCETS engineering nuclear thermal rocket (NTR) modules; ROCETS system easily adapts Fortran engineering modules; ROCETS NTR reactor module; ROCETS NTR turbomachinery module; detailed reactor analysis; predicted reactor power profiles; turbine bypass impact on system; and ROCETS NTR engine simulation summary.

  1. Computational modeling of nonlinear electromagnetic phenomena

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.; Taflove, Allen

    1992-01-01

    A new algorithm has been developed that permits, for the first time, the direct time integration of the full-vector nonlinear Maxwell's equations. This new capability permits the modeling of linear and nonlinear, instantaneous and dispersive effects in the electric polarization material media. Results are presented of first-time calculations in 1D of the propagation and collision of femtosecond electromagnetic solitons that retain the optical carrier.

  2. Computational Modeling of Lipid Metabolism in Yeast

    PubMed Central

    Schützhold, Vera; Hahn, Jens; Tummler, Katja; Klipp, Edda

    2016-01-01

    Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism characterized by many different species with only slight differences and by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. While all lipid species are treated as objects in the model, they can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach allows one to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation, and different headgroups over time, and to analyze the effect of parameter changes, potential mutations in the catalyzing enzymes, or provision of different precursors. Applied to yeast metabolism during one cell cycle period, we could analyze the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach allows one to efficiently treat the complexity of cellular lipid metabolism and to derive conclusions on the time- and location-dependent distributions of lipid species and their properties such as saturation. It is widely applicable, easily extendable and will provide further insights in healthy and diseased states of cell metabolism. PMID:27730126

  3. Computational Modeling and Simulation of Developmental ...

    EPA Pesticide Factsheets

    Developmental and Reproductive Toxicity (DART) testing is important for assessing the potential consequences of drug and chemical exposure on human health and well-being. Complexity of pregnancy and the reproductive cycle makes DART testing challenging and costly for traditional (animal-based) methods. A compendium of in vitro data from ToxCast/Tox21 high-throughput screening (HTS) programs is available for predictive toxicology. ‘Predictive DART’ will require an integrative strategy that mobilizes HTS data into in silico models that capture the relevant embryology. This lecture addresses progress on EPA's 'virtual embryo'. The question of how tissues and organs are shaped during development is crucial for understanding (and predicting) human birth defects. While ToxCast HTS data may predict developmental toxicity with reasonable accuracy, mechanistic models are still necessary to capture the relevant biology. Subtle microscopic changes induced chemically may amplify to an adverse outcome but coarse changes may override lesion propagation in any complex adaptive system. Modeling system dynamics in a developing tissue is a multiscale problem that challenges our ability to predict toxicity from in vitro profiling data (ToxCast/Tox21). (DISCLAIMER: The views expressed in this presentation are those of the presenter and do not necessarily reflect the views or policies of the US EPA). This was an invited seminar presentation to the National Institute for Public H

  4. Computational Modeling of Lipid Metabolism in Yeast.

    PubMed

    Schützhold, Vera; Hahn, Jens; Tummler, Katja; Klipp, Edda

    2016-01-01

    Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism characterized by many different species with only slight differences and by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. While all lipid species are treated as objects in the model, they can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach allows one to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation, and different headgroups over time, and to analyze the effect of parameter changes, potential mutations in the catalyzing enzymes, or provision of different precursors. Applied to yeast metabolism during one cell cycle period, we could analyze the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach allows one to efficiently treat the complexity of cellular lipid metabolism and to derive conclusions on the time- and location-dependent distributions of lipid species and their properties such as saturation. It is widely applicable, easily extendable and will provide further insights in healthy and diseased states of cell metabolism.

  5. Oxygen and seizure dynamics: II. Computational modeling

    PubMed Central

    Wei, Yina; Ullah, Ghanim; Ingram, Justin

    2014-01-01

    Electrophysiological recordings show intense neuronal firing during epileptic seizures leading to enhanced energy consumption. However, the relationship between oxygen metabolism and seizure patterns has not been well studied. Recent studies have developed fast and quantitative techniques to measure oxygen microdomain concentration during seizure events. In this article, we develop a biophysical model that accounts for these experimental observations. The model is an extension of the Hodgkin-Huxley formalism and includes the neuronal microenvironment dynamics of sodium, potassium, and oxygen concentrations. Our model accounts for metabolic energy consumption during and following seizure events. We can further account for the experimental observation that hypoxia can induce seizures, with seizures occurring only within a narrow range of tissue oxygen pressure. We also reproduce the interplay between excitatory and inhibitory neurons seen in experiments, accounting for the different oxygen levels observed during seizures in excitatory vs. inhibitory cell layers. Our findings offer a more comprehensive understanding of the complex interrelationship among seizures, ion dynamics, and energy metabolism. PMID:24671540

  6. A cognitive model for problem solving in computer science

    NASA Astrophysics Data System (ADS)

    Parham, Jennifer R.

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in solving them. Approaching assessment from this perspective would reveal potential errors leading to incorrect solutions. This dissertation proposes a model describing how people solve computational problems by storing, retrieving, and manipulating information and knowledge. It describes how metacognition interacts with schemata representing conceptual and procedural knowledge, as well as with the external sources of information that might be needed to arrive at a solution. Metacognition includes higher-order, executive processes responsible for controlling and monitoring schemata, which in turn represent the algorithmic knowledge needed for organizing and adapting concepts to a specific domain. The model illustrates how metacognitive processes interact with the knowledge represented by schemata as well as the information from external sources. This research investigates the differences in the way computer science novices use their metacognition and schemata to solve a computer programming problem. After J. Parham and L. Gugerty reached an 85% reliability for six metacognitive processes and six domain-specific schemata for writing a computer program, the resulting vocabulary provided the foundation for supporting the existence of and the interaction between metacognition, schemata, and external sources of information in computer programming. Overall, the participants in this research used their schemata 6% more than their metacognition and their metacognitive processes to control and monitor their schemata used to write a computer program. This research has potential implications in computer science education and software

  7. A Computational Model of Reasoning from the Clinical Literature

    PubMed Central

    Rennels, Glenn D.

    1986-01-01

    This paper explores the premise that a formalized representation of empirical studies can play a central role in computer-based decision support. The specific motivations underlying this research include the following propositions: 1. Reasoning from experimental evidence contained in the clinical literature is central to the decisions physicians make in patient care. 2. A computational model, based upon a declarative representation for published reports of clinical studies, can drive a computer program that selectively tailors knowledge of the clinical literature as it is applied to a particular case. 3. The development of such a computational model is an important first step toward filling a void in computer-based decision support systems. Furthermore, the model may help us better understand the general principles of reasoning from experimental evidence both in medicine and other domains. Roundsman is a developmental computer system which draws upon structured representations of the clinical literature in order to critique plans for the management of primary breast cancer. Roundsman is able to produce patient-specific analyses of breast cancer management options based on the 24 clinical studies currently encoded in its knowledge base. The Roundsman system is a first step in exploring how the computer can help to bring a critical analysis of the relevant literature to the physician, structured around a particular patient and treatment decision.

  8. Simulation model of load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network, and the widespread use of software for design and pre-production in mechanical engineering have led to the fact that, at present, both large industrial enterprises and small engineering companies implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficient distribution (balancing) of the computational load and the accommodation of input, intermediate, and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which the user's request is transferred in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
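One of the simplest "predetermined algorithms" of the kind this abstract alludes to is greedy least-loaded assignment: each incoming request goes to whichever node currently has the least work. This sketch is our illustration, not the paper's algorithm:

```python
def pick_node(loads):
    """Index of the least-loaded node; ties go to the lowest index."""
    return min(range(len(loads)), key=lambda i: loads[i])

def dispatch(request_costs, loads):
    """Greedily assign each request (by cost) to the currently
    least-loaded node; returns the assignment and final loads."""
    loads = list(loads)
    assignment = []
    for cost in request_costs:
        i = pick_node(loads)
        loads[i] += cost
        assignment.append(i)
    return assignment, loads

# Four requests of varying cost dispatched over two idle nodes:
assignment, final_loads = dispatch([5, 3, 4, 2], [0, 0])
```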

  9. Computational models for the nonlinear analysis of reinforced concrete plates

    NASA Technical Reports Server (NTRS)

    Hinton, E.; Rahman, H. H. A.; Huq, M. M.

    1980-01-01

    A finite element computational model for the nonlinear analysis of reinforced concrete solid, stiffened and cellular plates is briefly outlined. Typically, Mindlin elements are used to model the plates whereas eccentric Timoshenko elements are adopted to represent the beams. The layering technique, common in the analysis of reinforced concrete flexural systems, is incorporated in the model. The proposed model provides an inexpensive and reasonably accurate approach which can be extended for use with voided plates.

  10. Computer generation of structural models of amorphous Si and Ge

    NASA Astrophysics Data System (ADS)

    Wooten, F.; Winer, K.; Weaire, D.

    1985-04-01

    We have developed and applied a computer algorithm that generates realistic random-network models of a-Si with periodic boundary conditions. These are the first models to have correlation functions that show no serious discrepancy with experiment. The algorithm provides a much-needed systematic approach to model construction that can be used to generate models of a large class of amorphous materials.

  11. Computer model of cardiovascular control system responses to exercise

    NASA Technical Reports Server (NTRS)

    Croston, R. C.; Rummel, J. A.; Kay, F. J.

    1973-01-01

    Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.

  12. Computational needs for modelling accelerator components

    SciTech Connect

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50-megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs.

  13. Paradox of integration-A computational model

    NASA Astrophysics Data System (ADS)

    Krawczyk, Małgorzata J.; Kułakowski, Krzysztof

    2017-02-01

    The paradoxical aspect of integration of a social group has been highlighted by Blau (1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire of approval, a sharp transition breaks all friendly relations. However, as was described by Blau, people with high status are inclined to bother more with acceptance of others; this is achieved by praising others and revealing her/his own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  14. Computer simulation modeling of abnormal behavior: a program approach.

    PubMed

    Reilly, K D; Freese, M R; Rowe, P B

    1984-07-01

    A need for modeling abnormal behavior on a comprehensive, systematic basis exists. Computer modeling and simulation tools offer especially good opportunities to establish such a program of studies. Issues concern deciding which modeling tools to use, how to relate models to behavioral data, what level of modeling to employ, and how to articulate theory to facilitate such modeling. Four levels or types of modeling, two qualitative and two quantitative, are identified. Their properties are examined and interrelated, with illustrative applications to the study of abnormal behavior and an emphasis on schizophrenia.

  15. Revisions to the hydrogen gas generation computer model

    SciTech Connect

    Jerrell, J.W.

    1992-08-31

    Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continues to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by this data is used to improve the accuracy of the model's predictions. Also, several modifications to the model have been made to enlarge the scope of problems which can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.
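The core balance such models solve can be illustrated in zero dimensions: at steady state, the hydrogen generation rate equals the diffusive loss through the filter vent, G = (D·A/L)·C, so C = G·L/(D·A). This is a textbook diffusion balance offered for intuition only; the numbers below are arbitrary and not from TRUGAS or H2GAS.

```python
def steady_state_concentration(gen_rate, diff_coeff, vent_area, vent_length):
    """Steady-state H2 concentration (mol/m^3) in a vented container:
    generation G balanced by Fickian loss (D * A / L) * C through the
    vent, assuming zero ambient hydrogen concentration."""
    return gen_rate * vent_length / (diff_coeff * vent_area)

# Illustrative values: G = 1e-9 mol/s, D = 1e-5 m^2/s,
# vent area 1e-4 m^2, vent path length 1e-2 m.
c_ss = steady_state_concentration(1e-9, 1e-5, 1e-4, 1e-2)
```

The full model tracks multiple nested enclosures and transient behavior, but each enclosure boundary contributes a conductance term of this D·A/L form.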

  16. TsuPy: Computational robustness in Tsunami hazard modelling

    NASA Astrophysics Data System (ADS)

    Schäfer, Andreas M.; Wenzel, Friedemann

    2017-05-01

    Modelling wave propagation is the most essential part in assessing the risk and hazard of tsunami and storm surge events. For the computational assessment of the variability of such events, many simulations are necessary. Even today, most of these simulations are generally run on supercomputers due to the large amount of computation necessary. In this study, a simulation framework, named TsuPy, is introduced to quickly compute tsunami events on a personal computer. It uses the parallelized power of GPUs to accelerate computation. The system is tailored to the application of robust tsunami hazard and risk modelling. It links up to geophysical models to simulate event sources. The system is tested and validated using various benchmarks and real-world case studies. In addition, the robustness criterion is assessed based on a sensitivity study comparing the error impact of various model elements, e.g. topo-bathymetric resolution, knowledge of Manning friction parameters, and knowledge of the tsunami source itself. This sensitivity study is tested on inundation modelling of the 2011 Tohoku tsunami, showing that the major contributor to model uncertainty is in fact the representation of earthquake slip as part of the tsunami source profile. TsuPy provides a fast and reliable tool to quickly assess ocean hazards from tsunamis and thus builds the foundation for a globally uniform hazard and risk assessment for tsunamis.

  17. Optimal allocation of computational resources in hydrogeological models under uncertainty

    NASA Astrophysics Data System (ADS)

    Moslehi, Mahsa; Rajagopal, Ram; de Barros, Felipe P. J.

    2015-09-01

    Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical-numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the
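The trade-off described above, discretization error falling with grid refinement while statistical error falls with the number of Monte Carlo realizations, all under a fixed compute budget, can be illustrated with a simple error model: e(h, N) ≈ C1·h^p + C2/√N with cost ∝ N/h^d. All constants and the brute-force search below are our illustration, not the paper's formulation.

```python
import math

def total_error(h, n, c1=1.0, p=2, c2=1.0):
    """Hypothetical overall error: grid term + Monte Carlo term."""
    return c1 * h**p + c2 / math.sqrt(n)

def cost(h, n, d=2):
    """Compute cost grows with realizations N and refinement 1/h."""
    return n * (1.0 / h) ** d

def best_allocation(budget, h_grid, n_grid):
    """Brute-force search for the (h, N) pair that minimizes the
    total error while staying within the compute budget."""
    best = None
    for h in h_grid:
        for n in n_grid:
            if cost(h, n) <= budget:
                e = total_error(h, n)
                if best is None or e < best[0]:
                    best = (e, h, n)
    return best

hs = [0.5, 0.2, 0.1, 0.05]
ns = [10, 100, 1000, 10000]
err, h_opt, n_opt = best_allocation(1e5, hs, ns)
```

Here neither the finest grid nor the most realizations wins: refining h too far leaves too little budget for N, and vice versa, which is exactly the balance the paper optimizes analytically.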

  18. Computational modeling of P450s for toxicity prediction.

    PubMed

    Mishra, Nitish Kumar

    2011-10-01

    Drug development is a time-consuming and cost-intensive process. On average, it takes around 12-15 years and approximately €800 million to bring a new drug to the market. Despite the introduction of combinatorial chemistry and the establishment of high-throughput screening (HTS), the number of new drug entities is limited. In fact, a number of established drug entities have been withdrawn from the market because of drug-drug interactions (DDIs) and adverse drug reactions (ADRs). This review covers the advancements in cytochrome P450 (CYP450) modeling using different computational/machine learning (ML) tools over the past decade. A computational model for identifying non-toxic drug molecules from a pool of small chemical molecules is always welcome in the drug industry. Any computational tool that identifies toxic molecules at an early stage reduces the economic burden by slashing the number of molecules to be screened. This review covers all issues related to CYP-mediated toxicity such as specificity, inhibition, induction and regioselectivity. Several computational methods for CYP-mediated toxicity are available, which are popular in computer-aided drug designing (CADD). These models may become helpful in toxicity prediction during early stages and can reduce high failure rates in preclinical and clinical trials. There is an urgent need to improve the accuracy, interpretability and confidence of the computational models used in drug discovery pathways.

  19. Representation and Computation in Cognitive Models.

    PubMed

    Forbus, Kenneth D; Liang, Chen; Rabkina, Irina

    2017-07-01

One of the central issues in cognitive science is the nature of human representations. We argue that symbolic representations are essential for capturing human cognitive capabilities. We start by examining some common misconceptions found in discussions of representations and models. Next we examine evidence that symbolic representations are essential for capturing human cognitive capabilities, drawing on the analogy literature. Then we examine fundamental limitations of feature vectors and other distributed representations that, despite their recent successes on various practical problems, suggest that they are insufficient to capture many aspects of human cognition. After that, we describe the implications for cognitive architecture of our view that analogy is central, and we speculate on roles for hybrid approaches. We close with an analogy that might help bridge the gap.

  20. The computation of standard solar models

    NASA Technical Reports Server (NTRS)

    Ulrich, Roger K.; Cox, Arthur N.

    1991-01-01

    Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.

  1. Strategic Implications of Cloud Computing for Modeling and Simulation (Briefing)

    DTIC Science & Technology

    2016-04-01

Slide excerpts from "Strategic Implications of Cloud Computing for Modeling and Simulation" by Amy E. Henninger, Institute for Defense Analyses, presented to the NDIA SE Division M&S Committee, April 19, 2016. Recoverable points include cloud service characteristics (pay per use and new business models) and exercise-specific considerations (set-up costs and ramp-up time).

  2. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  3. A propagation model of computer virus with nonlinear vaccination probability

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi

    2014-01-01

    This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
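A minimal sketch of such a model, assuming a generic SIS-type formulation with a saturating vaccination probability; the paper's actual equations and parameters differ in detail:

```python
# Illustrative compartment model with a nonlinear (saturating) vaccination
# probability v(I). All parameter values are assumed for illustration.
beta, gamma = 0.5, 0.2   # infection and cure rates
v_max, a = 0.3, 0.1      # vaccination probability saturates as infections grow
delta = 0.05             # rate at which vaccinated machines become susceptible

def v(I):                # nonlinear vaccination probability
    return v_max * I / (I + a)

S, I, V = 0.99, 0.01, 0.0    # susceptible, infected, vaccinated fractions
dt = 0.1
for _ in range(5000):        # explicit Euler integration to t = 500
    dS = delta * V + gamma * I - beta * S * I - v(I) * S
    dI = beta * S * I - gamma * I
    dV = v(I) * S - delta * V
    S, I, V = S + dt * dS, I + dt * dI, V + dt * dV

R0 = beta / gamma            # basic reproduction number at the virus-free state
print(f"R0 = {R0:.2f}, long-run infected fraction ~ {I:.3f}")
```

With R0 > 1 the sketch settles to a viral (endemic) equilibrium; reducing beta or raising the vaccination response pushes R0 below 1, where the virus-free equilibrium is the stable one.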

  4. Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces

    NASA Technical Reports Server (NTRS)

    Tallman, Aaron; Wang, Yan; Vargas, Mario

    2017-01-01

To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single-droplet scale was implemented. The nucleation of a single supercooled droplet impacting on a substrate, in rime ice conditions, was simulated. Open-source computational fluid dynamics (CFD) software was used for the simulation. No existing model simulates the simultaneous impact and freezing of a single supercooled water droplet; for the 10-week project, a low-fidelity feasibility study was the goal.

  5. Computational Neuroscience: Modeling the Systems Biology of Synaptic Plasticity

    PubMed Central

    Kotaleski, Jeanette Hellgren; Blackwell, Kim T.

    2016-01-01

Synaptic plasticity is a mechanism proposed to underlie learning and memory. The complexity of the interactions between ion channels, enzymes, and genes involved in synaptic plasticity impedes a deep understanding of this phenomenon. Computer modeling is an approach to investigate the information processing that is performed by signaling pathways underlying synaptic plasticity. In the past few years, new software developments that blend computational neuroscience techniques with systems biology techniques have allowed large-scale, quantitative modeling of synaptic plasticity in neurons. We highlight significant advancements produced by these modeling efforts and introduce promising approaches that utilize advancements in live cell imaging. PMID:20300102

  7. Computational science: shifting the focus from tools to models

    PubMed Central

    Hinsen, Konrad

    2014-01-01

    Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728

  8. Computational modeling of brain tumors: discrete, continuum or hybrid?

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Deisboeck, Thomas S.

    2008-04-01

In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.
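Continuum approaches of the kind surveyed here are often reaction-diffusion equations; the sketch below integrates a 1-D Fisher-type equation for normalized tumor cell density. The equation form is a standard continuum glioma model, but the parameter values are illustrative and not taken from the review:

```python
import numpy as np

# Minimal 1-D reaction-diffusion (Fisher-type) sketch of continuum tumor
# growth: du/dt = D * d2u/dx2 + rho * u * (1 - u), u = normalized density.
D, rho = 0.1, 1.0
dx, dt = 0.5, 0.1                  # dt < dx**2 / (2*D) for explicit stability
x = np.arange(0.0, 50.0, dx)
u = np.where(x < 2.5, 1.0, 0.0)    # initial tumor core at the left edge

for _ in range(500):               # integrate to t = 50
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0         # crude no-flux ends
    u = u + dt * (D * lap + rho * u * (1 - u))

front = x[np.argmax(u < 0.5)]      # position where density falls below 0.5
print(f"invasion front near x = {front:.1f} at t = 50")
```

The traveling invasion front (asymptotic speed 2*sqrt(D*rho) for Fisher dynamics) is the continuum counterpart of the cell-by-cell spread a discrete model would resolve.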

  9. Computational science: shifting the focus from tools to models.

    PubMed

    Hinsen, Konrad

    2014-01-01

    Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed.

  10. Industry-Wide Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir (Compiler)

    1995-01-01

    This publication contains the presentations made at the Industry-Wide Workshop on Computational Turbulence Modeling which took place on October 6-7, 1994. The purpose of the workshop was to initiate the transfer of technology developed at Lewis Research Center to industry and to discuss the current status and the future needs of turbulence models in industrial CFD.

  11. Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models

    ERIC Educational Resources Information Center

    Pallant, Amy; Lee, Hee-Sun

    2015-01-01

    Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation…

  12. Modeling civil violence: An agent-based computational approach

    PubMed Central

    Epstein, Joshua M.

    2002-01-01

    This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
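Epstein's published activation rule can be sketched for a single decision step: grievance G = H(1-L), estimated arrest probability P = 1 - exp(-k*C/A), and an agent rebels when G minus net risk exceeds a threshold. The parameter values and the visible cop/active counts below are assumed for illustration:

```python
import numpy as np

# One decision step of the civil-violence activation rule. Parameter
# values and the counts visible to each agent are assumed.
rng = np.random.default_rng(1)
n_agents = 1000
H = rng.uniform(size=n_agents)   # hardship, heterogeneous across agents
R = rng.uniform(size=n_agents)   # risk aversion
L = 0.8                          # perceived legitimacy of the authority
T = 0.1                          # activation threshold
k = 2.3                          # arrest-probability constant
cops, actives = 10.0, 50.0       # counts visible within an agent's vision

G = H * (1.0 - L)                                 # grievance G = H(1-L)
P = 1.0 - np.exp(-k * cops / max(actives, 1.0))   # estimated arrest probability
N = R * P                                         # net risk
rebel = G - N > T                                 # activation rule
print(f"{rebel.sum()} of {n_agents} agents turn active")
```

In the full model this step repeats each tick, with the cop/active ratio recomputed locally, which is what produces the punctuated outbursts of violence reported in the paper.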

  13. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  14. Operation of the computer model for microenvironment solar exposure

    NASA Technical Reports Server (NTRS)

    Gillis, J. R.; Bourassa, R. J.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironmental solar exposure was developed to predict solar exposure to satellite surfaces which may shadow or reflect on one another. This document describes the technical features of the model as well as instructions for the installation and use of the program.

  16. Computer Modelling of Biological Molecules: Free Resources on the Internet.

    ERIC Educational Resources Information Center

    Millar, Neil

    1996-01-01

    Describes a three-dimensional computer modeling system for biological molecules which is suitable for sixth-form teaching. Consists of the modeling program "RasMol" together with structure files of proteins, DNA, and small biological molecules. Describes how the whole system can be downloaded from various sites on the Internet.…

  17. Bootstrapping the Lexicon: A Computational Model of Infant Speech Segmentation.

    ERIC Educational Resources Information Center

    Batchelder, Eleanor Olds

    2002-01-01

    Details BootLex, a model using distributional cues to build a lexicon and achieving significant segmentation results with English, Japanese, and Spanish; child- and adult-directed speech, and written text; and variations in coding structure. Compares BootLex with three groups of computational models of the infant segmentation process. Discusses…

  18. Comprehensive climate system modeling on massively parallel computers

    SciTech Connect

Wehner, M.F.; Eltgroth, P.G.; Mirin, A.A.; Duffy, P.B.; Caldeira, K.G.; Bolstad, J.H.; Wang, H.; Matarazzo, C.M.; Creach, U.E.

    1996-10-01

A better understanding of both natural and human-induced changes to the Earth's climate is necessary for policy makers to make informed decisions regarding energy usage and other greenhouse gas producing activities. To achieve this, substantial increases in the sophistication of climate models are required. Coupling between the climate subsystems of the atmosphere, oceans, cryosphere and biosphere is only now beginning to be explored in global models. The enormous computational expense of such models is one significant factor limiting progress. A comprehensive climate system model targeted to distributed-memory massively parallel processing (MPP) computers is under development at Lawrence Livermore National Laboratory. This class of computers promises the computational power to permit the timely execution of climate models of substantially more sophistication than current generation models. Our strategy for achieving high performance on large numbers of processors is to exploit the multiple layers of parallelism naturally contained within highly coupled global climate models. The centerpiece of this strategy is the concurrent execution of multiple independently parallelized components of the climate system model. This methodology allows the assignment of an arbitrary number of processors to each of the major climate subsystems. Hence, a higher total number of processors may be efficiently used. Furthermore, load imbalances arising from the coupling of submodels may be minimized by adjusting the distribution of processors among the submodels.
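The processor-assignment idea can be illustrated with a greedy load-balancing sketch: repeatedly give the next processor to whichever submodel is currently slowest. The submodel costs, processor count, and assumption of ideal parallel efficiency are hypothetical:

```python
import heapq

# Sketch of allocating P processors among concurrently executing climate
# submodels to balance load. Costs (work units per simulated step) are
# hypothetical; parallel efficiency is assumed ideal.
costs = {"atmosphere": 600.0, "ocean": 300.0, "sea_ice": 60.0, "land": 40.0}
P = 256

# Start with one processor each, then repeatedly give the next processor
# to the currently slowest submodel (largest cost per processor).
alloc = {name: 1 for name in costs}
heap = [(-cost, name) for name, cost in costs.items()]  # negated: max-heap
heapq.heapify(heap)
for _ in range(P - len(costs)):
    worst, name = heapq.heappop(heap)
    alloc[name] += 1
    heapq.heappush(heap, (-costs[name] / alloc[name], name))

makespan = max(costs[n] / alloc[n] for n in costs)
print(alloc, f"slowest component: {makespan:.2f} work units per step")
```

Compared with splitting processors evenly (which leaves the atmosphere as a bottleneck), the balanced allocation brings the per-step time close to the ideal total-work / total-processors limit.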

  20. Computational Modeling of Laser-Cell Biochemical Interactions

    DTIC Science & Technology

    2010-12-31

Figure residue and a cited reference survive from the briefing charts: the charts (from upper left across to lower right) present the modeled response in the RPE cell of vitamin C (asc/ascH) and reactive oxygen species... Cited: Husinsky, J., Seiser, B., Edthofer, F., Fekete, B., Farmer, L., and Lund, D., "Ex vivo and computer model study on retinal thermal laser-induced damage."

  1. Computer Simulation of Small Group Decisions: Model Three.

    ERIC Educational Resources Information Center

    Hare, A.P.; Scheiblechner, Hartmann

In a test of three computer models to simulate group decisions, data were used from 31 American and Austrian groups on a total of 307 trials. The task for each group was to predict a series of answers of an unknown subject on a value-orientation questionnaire, after being given a sample of his typical responses. The first model used the mean of…

  3. Interrogative Model of Inquiry and Computer-Supported Collaborative Learning.

    ERIC Educational Resources Information Center

    Hakkarainen, Kai; Sintonen, Matti

    2002-01-01

    Examines how the Interrogative Model of Inquiry (I-Model), developed for the purposes of epistemology and philosophy of science, could be applied to analyze elementary school students' process of inquiry in computer-supported learning. Suggests that the interrogative approach to inquiry can be productively applied for conceptualizing inquiry in…

  4. Revisions to the hydrogen gas generation computer model

    SciTech Connect

    Jerrell, J.W.

    1992-08-31

Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by these data is used to improve the accuracy of the model's predictions. Also, several modifications have been made to the model to enlarge the scope of problems that can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.
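The kind of calculation such a model performs can be sketched as a single well-mixed mass balance for one drum. The generation rate and vent conductance below are hypothetical, and the real TRUGAS/H2GAS model resolves multiple enclosures and diffusion through filter vents rather than this one-box approximation:

```python
import math

# Generic well-mixed mass balance for hydrogen in a vented drum (a sketch
# of the kind of calculation TRUGAS/H2GAS performs; values are hypothetical).
G = 1.0e-6        # hydrogen generation rate, mol/s
Q = 5.0e-4        # effective vent conductance, L/s
V = 208.0         # drum free volume, L
n_air = V / 24.0  # rough moles of gas in the drum at ambient conditions

# Steady state: generation balances venting -> G = Q * (n_air / V) * C_eq.
# Transient:    C(t) = C_eq * (1 - exp(-Q * t / V)).
C_eq = G * V / (Q * n_air)    # steady-state H2 mole fraction

def C(t):
    return C_eq * (1.0 - math.exp(-Q * t / V))

one_week = 7 * 24 * 3600.0
print(f"steady-state H2 fraction: {C_eq:.3%}, after one week: {C(one_week):.3%}")
```

With these (invented) numbers the steady-state fraction sits near the 4% lower flammability limit of hydrogen, which is exactly the kind of condition the model is meant to flag.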

  5. Computational fluid dynamics modeling for emergency preparedness & response

    SciTech Connect

    Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.

    1995-07-01

Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly coupled, nonlinear equations whose solution requires a significant level of computational power. Until recently, such computer power could be found only in CRAY-class supercomputers. Recent advances in computer hardware and software have enabled modestly priced, high-performance workstations to exhibit the equivalent computational power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release and Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust, and require minimal time for execution. Such models have been the cornerstones of the ARAC operational system for the past ten years. Kamada (1992) provides a review of diagnostic models and their applications to dispersion problems. However, because these models typically contain little physics beyond mass conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, they can be applied with confidence to only modestly complex flows.

  6. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
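The idea of screening by elementary effects can be sketched in a single pass: perturb each parameter in turn and keep those whose effect on the output exceeds a fraction of the largest effect. The actual sequential method is iterative and more economical; the toy model, step size, and threshold below are illustrative:

```python
import numpy as np

# One-pass parameter screening in the spirit of sequential screening:
# perturb each parameter and measure its elementary effect on the output.
def model(p):
    # Toy model: only p[0], p[2], and p[5] actually influence the output.
    return 3.0 * p[0] + 2.0 * p[2] ** 2 + np.sin(p[5])

n_par = 10
base = np.full(n_par, 0.5)    # nominal parameter vector
delta = 0.05                  # perturbation step (assumed)
f0 = model(base)
effects = np.empty(n_par)
for i in range(n_par):
    x = base.copy()
    x[i] += delta
    effects[i] = abs(model(x) - f0) / delta

# Keep parameters whose effect exceeds 10% of the largest effect (assumed cutoff).
informative = np.where(effects > 0.1 * effects.max())[0]
print("informative parameters:", informative.tolist())
```

Subsequent sensitivity analysis or calibration would then vary only the retained parameters, which is where the reported savings in model evaluations come from.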

  7. Analysis of computational modeling techniques for complete rotorcraft configurations

    NASA Astrophysics Data System (ADS)

    O'Brien, David M., Jr.

Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time-consuming process, where much of the effort is spent generating a high-quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high-fidelity configuration models. The simplest rotor model is the steady-state actuator disk approximation. By transforming the unsteady rotor problem into a steady-state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake among the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with the efficiencies and limitations of each method.
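The steady-state actuator disk rests on momentum theory, which can be sketched directly: the rotor is replaced by a disk carrying a pressure jump, giving a hover induced velocity of sqrt(T / (2*rho*A)). The thrust, density, and radius below are illustrative values, not taken from the thesis:

```python
import math

# Momentum-theory view of the steady actuator disk in hover.
# Numbers are illustrative (roughly utility-helicopter scale).
T = 53000.0            # rotor thrust, N
rho = 1.225            # air density, kg/m^3
R = 8.18               # rotor radius, m
A = math.pi * R**2     # actuator disk area, m^2

v_i = math.sqrt(T / (2.0 * rho * A))   # hover induced velocity, m/s
P_ind = T * v_i                        # ideal induced power, W
print(f"induced velocity {v_i:.1f} m/s, ideal induced power {P_ind/1000:.0f} kW")
```

A CFD actuator-disk model distributes this momentum source over grid cells at the rotor plane, which is why it converges as a steady problem while the actuator-blade and overset models must march in time.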

  8. Optical computing based on neuronal models

    NASA Astrophysics Data System (ADS)

    Farhat, Nabil H.

    1987-10-01

    Ever since the fit between what neural net models can offer (collective, iterative, nonlinear, robust, and fault-tolerant approach to information processing) and the inherent capabilities of optics (parallelism and massive interconnectivity) was first pointed out and the first optical associative memory demonstrated in 1985, work and interest in neuromorphic optical signal processing has been growing steadily. For example, work in optical associative memories is currently being conducted at several academic institutions (e.g., California Institute of Technology, University of Colorado, University of California-San Diego, Stanford University, University of Rochester, and the author's own institution the University of Pennsylvania) and at several industrial and governmental laboratories (e.g., Hughes Research Laboratories - Malibu, the Naval Research Laboratory, and the Jet Propulsion Laboratory). In these efforts, in addition to the vector matrix multiplication with thresholding and feedback scheme utilized in early implementations, an arsenal of sophisticated optical tools such as holographic storage, phase conjugate optics, and wavefront modulation and mixing are being drawn on to realize associative memory functions.

  9. Computational modelling of memory retention from synapse to behaviour

    NASA Astrophysics Data System (ADS)

    van Rossum, Mark C. W.; Shippi, Maria

    2013-03-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.

  10. Identification of Computational and Experimental Reduced-Order Models

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.

    2003-01-01

    The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
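The deconvolution step can be sketched in miniature: for a discrete linear system, differencing a measured step response recovers the impulse response, which then predicts the response to other inputs by convolution. The toy system below is synthetic, not one of the paper's configurations:

```python
import numpy as np

# Step-response deconvolution in miniature: difference the measured step
# response to recover the discrete impulse response, then predict the
# response to an arbitrary input by convolution. The system is synthetic.
h_true = 0.3 * 0.5 ** np.arange(30)     # impulse response of a toy system
step_response = np.cumsum(h_true)       # "measured" response to a unit step

# Identification: impulse response = first difference of the step response.
h_id = np.diff(step_response, prepend=0.0)

# Prediction: response to an arbitrary input via convolution with h_id.
u = np.sin(0.3 * np.arange(100))        # e.g. a pitching oscillation
y = np.convolve(u, h_id)[:len(u)]
print(f"max identification error: {np.abs(h_id - h_true).max():.2e}")
```

The identified impulse responses play the same role as the paper's unsteady-pressure ROMs: once in hand, responses at new frequencies cost only a convolution instead of a new experiment or CFD run.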

  11. Applying Performance Models to Understand Data-Intensive Computing Efficiency

    DTIC Science & Technology

    2010-05-01

Keywords: data-intensive computing, cloud computing, analytical modeling, Hadoop, MapReduce, performance and efficiency. Surviving text excerpts: ...the writing of the output data to disk. In systems that replicate data across multiple nodes, such as the GFS [11] and HDFS [3] distributed file... It is assumed that data are evenly distributed across all participating nodes in the cluster, that nodes are homogeneous, and that each node retrieves its initial input from local…

  12. Special Issue: Big data and predictive computational modeling

    NASA Astrophysics Data System (ADS)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  13. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
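    One widely used, inexpensive estimator of the discretization error the abstract discusses is Richardson extrapolation over systematically refined grids. The sketch below is the textbook procedure, not necessarily the methodology developed in the cited report:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed convergence order p from solutions on three grids
    with a constant refinement ratio r (f_fine is the most refined)."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def richardson_estimate(f_medium, f_fine, r, p):
    """Extrapolated 'grid-converged' value plus the estimated
    discretization error remaining on the fine grid."""
    err = (f_fine - f_medium) / (r**p - 1)
    return f_fine + err, err
```

For a quantity converging as f(h) = f* + C h^p, three grids suffice to recover both p and f*; the error estimate can then feed into a total-prediction-uncertainty budget of the kind the abstract describes.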

  14. Computational fluid dynamics modeling for emergency preparedness and response

    SciTech Connect

    Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.

    1995-02-01

    Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly-coupled equations whose solution requires a significant level of computational power. Recent advances in computer hardware and software have enabled modestly-priced, high-performance workstations to exhibit the equivalent computational power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust, and require minimal time for execution. Because these models typically contain little physics beyond mass conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, they can be applied with confidence to only modestly complex flows. We are now embarking on a development program to incorporate prognostic models to generate, in real time, the meteorological fields for the dispersion models. In contrast to diagnostic models, prognostic models are physically based and are capable of incorporating many physical processes to treat highly complex flow scenarios.
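    The dispersion calculations that such wind-field-driven systems perform can be illustrated by the textbook steady-state Gaussian plume formula; this is a generic sketch, not ARAC's operational model:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) with ground reflection.
    q: emission rate (g/s); u: wind speed (m/s); h: effective release height (m);
    y, z: crosswind and vertical receptor coordinates (m);
    sigma_y, sigma_z: dispersion parameters (m) at the receptor's downwind
    distance (they grow with distance and atmospheric stability class)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image source at -h
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

Note the 1/u dependence: the quality of the wind field supplied by the diagnostic or prognostic model directly controls the predicted concentrations, which is why the abstract stresses sensitivity to input meteorological data.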

  15. Computational fluid dynamics modeling of rice husk combustion

    NASA Astrophysics Data System (ADS)

    Le, Kien Anh

    2017-09-01

    The combustion of rice husk fuel in a fixed bed reactor is a highly complicated process that researchers have studied for many years. Such studies have been performed by both empirical and computational methods. However, due to the rapid development of computer-science-based packages, the Computational Fluid Dynamics (CFD) technique can be applied to simulate and analyse the performance of the combustion reaction. Consequently, this has saved on empirical expenditure and has additionally provided more understanding of the research objective. This paper models bed fuel combustion in a fixed bed reactor using Fluent version 12.0.16. User Defined Functions (UDFs) were created to define the system, boundary conditions and initial conditions. Furthermore, the source terms, heat exchanges and homogeneous reactions were also defined in UDFs. Species transport and volume reactions were used to model the gas phase, and the Eulerian model was employed to solve the problem using two-phase modelling. The k-ε model was employed for turbulence, together with an unsteady solver, as the problem was regarded as unstable. The results obtained from the modelling work give more understanding of bed fuel combustion in a fixed bed reactor.

  16. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  17. Practical Use of Computationally Frugal Model Analysis Methods

    SciTech Connect

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
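    The local methods the abstract advocates can be illustrated with finite-difference dimensionless scaled sensitivities, which cost roughly one model run per parameter plus a base run. The helper below is a generic sketch, not the authors' code:

```python
def scaled_sensitivities(model, params, rel_step=0.01):
    """Local, finite-difference dimensionless scaled sensitivities.
    model: callable mapping a parameter list to one simulated value.
    Cost: one base run plus one run per parameter -- 'frugal' in the
    paper's sense (tens of parallelizable runs, not thousands)."""
    base = model(params)
    out = []
    for i, p in enumerate(params):
        dp = rel_step * p if p != 0 else rel_step
        pert = list(params)
        pert[i] = p + dp
        dydp = (model(pert) - base) / dp   # one-sided finite difference
        out.append(dydp * p)               # scale by the parameter value
    return out
```

Scaling by the parameter value makes sensitivities comparable across parameters with different units, which is what makes them useful as the kind of inexpensive diagnostic the paper describes.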

  18. Using GPUs to Meet Next Generation Weather Model Computational Requirements

    NASA Astrophysics Data System (ADS)

    Govett, M.; Hart, L.; Henderson, T.; Middlecoff, J.; Tierney, C.

    2008-12-01

    Weather prediction goals within the Earth Science Research Laboratory at NOAA require significant increases in model resolution (~1 km) and forecast durations (~60 days) to support expected requirements in 5 years or less. However, meeting these goals will likely require at least 100k dedicated cores. Few systems will exist that could even run such a large problem, much less house a facility that could provide the necessary power and cooling. To meet our goals we are exploring alternative technologies, including Graphics Processing Units (GPUs), that could provide significantly more computational performance with reduced power and cooling requirements, at a lower cost than traditional high-performance computing solutions. Our current global numerical weather prediction model, the Flow-following finite-volume Icosahedral Model (FIM, http://fim.noaa.gov), is still early in its development but is already demonstrating good fidelity and excellent scalability to 1000s of cores. The icosahedral grid has several complexities not present in more traditional Cartesian grids, including polygons with different numbers of sides (five and six) and non-trivial computation of the locations of neighboring grid cells. FIM uses an indirect addressing scheme that yields very compact code despite these complexities. We have extracted computational kernels that encompass the functions likely to take the most time at higher resolutions, including all that have horizontal dependencies. The kernels implement equations for computing anti-diffusive flux-corrected transport across cell edges, calculating forcing terms and time-step differencing, and re-computing time-dependent vertical coordinates. We are extending these kernels to explore the performance of GPU-specific optimizations. We will present initial performance results from the computational kernels of the FIM model, as well as the challenges related to porting code with indirect memory references to NVIDIA GPUs.
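    The indirect addressing scheme described above can be sketched as follows. The neighbor-table layout and the simple diffusive edge flux are illustrative assumptions, not FIM's actual kernels; the point is the pattern of variable neighbor counts (5 for pentagons, 6 for hexagons) and indirect memory references that makes GPU porting challenging:

```python
def edge_flux_update(q, nbr, nnbr, coef, dt):
    """One explicit update of cell values q using indirect neighbor addressing.
    nbr[i] lists the 5 or 6 neighbors of cell i on an icosahedral-style grid;
    nnbr[i] gives the neighbor count; the flux across each shared edge is
    modeled here as simple diffusion with coefficient coef."""
    q_new = list(q)
    for i in range(len(q)):
        flux = 0.0
        for k in range(nnbr[i]):     # 5 for pentagons, 6 for hexagons
            j = nbr[i][k]            # indirect memory reference
            flux += coef * (q[j] - q[i])
        q_new[i] = q[i] + dt * flux
    return q_new
```

Because each edge contributes equal and opposite fluxes to its two cells, the update conserves the total of q, mirroring the conservation property of the flux-corrected transport kernels the abstract mentions.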

  19. The GOURD model of human-computer interaction

    SciTech Connect

    Goldbogen, G.

    1996-12-31

    This paper presents a model, the GOURD model, that can be used to measure the goodness of the "interactivity" of an interface design and to indicate how the design can be improved. The GOURD model describes what happens to the computer and to the human during a human-computer interaction. Since the interaction is generally repeated, repeated traversal of the model is similar to a loop programming structure. Because the model measures interaction over part or all of an application, it can also be used as a classifier of the part or of the whole application. But primarily, the model is used as a design guide and a predictor of effectiveness.

  20. Biological networks 101: computational modeling for molecular biologists.

    PubMed

    Scholma, Jetse; Schivo, Stefano; Urquidi Camacho, Ricardo A; van de Pol, Jaco; Karperien, Marcel; Post, Janine N

    2014-01-01

    Computational modeling of biological networks permits the comprehensive analysis of cells and tissues to define molecular phenotypes and novel hypotheses. Although a large number of software tools have been developed, the versatility of these tools is limited by mathematical complexities that prevent their broad adoption and effective use by molecular biologists. This study clarifies the basic aspects of molecular modeling, how to convert data into useful input, as well as the number of time points and molecular parameters that should be considered for molecular regulatory models with both explanatory and predictive potential. We illustrate the necessary experimental preconditions for converting data into a computational model of network dynamics. This model requires neither a thorough background in mathematics nor precise data on intracellular concentrations, binding affinities or reaction kinetics. Finally, we show how an interactive model of crosstalk between signal transduction pathways in primary human articular chondrocytes allows insight into processes that regulate gene expression.
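    A minimal example of the kind of kinetics-free network model the abstract describes (no concentrations, binding affinities or rate constants required) is a synchronous Boolean network. The three-node wiring below is hypothetical, chosen only to show the mechanics:

```python
def boolean_step(state, rules):
    """Synchronous update of a Boolean regulatory network.
    state: dict node -> bool; rules: dict node -> function of the full state."""
    return {node: rule(state) for node, rule in rules.items()}

def simulate(state, rules, steps):
    """Return the trajectory of states over a number of synchronous updates."""
    trajectory = [dict(state)]
    for _ in range(steps):
        state = boolean_step(state, rules)
        trajectory.append(dict(state))
    return trajectory

# Hypothetical wiring: C activates A; A activates B; B inhibits A's activation of C.
rules = {
    "A": lambda s: s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["A"] and not s["B"],
}
```

Qualitative models like this only require knowing which molecules regulate which, so they match the paper's goal of network modeling without precise intracellular measurements; attractors of the update map correspond to candidate molecular phenotypes.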