Jean-Noel Leboeuf
2004-03-04
We have made significant progress during the past grant period in several key areas of the UCLA and national Fusion Theory Program. This impressive body of work includes both fundamental and applied contributions to MHD and turbulence in DIII-D and Electric Tokamak plasmas, as well as in Z-pinches, particularly with respect to the effect of flows on these phenomena. We have successfully carried out interpretive and predictive global gyrokinetic particle-in-cell calculations of DIII-D discharges. We have cemented our participation in the gyrokinetic PIC effort of the SciDAC Plasma Microturbulence Project through working membership in the Summit Gyrokinetic PIC Team. We have continued to teach advanced courses at UCLA pertaining to computational plasma physics and to foster interaction with students and junior researchers; in fact, we graduated two Ph.D. students during the past grant period. The research carried out during that time has resulted in many publications in the premier plasma physics and fusion energy sciences journals and in several invited oral communications at major conferences, such as Sherwood, the Transport Task Force (TTF), the annual meetings of the Division of Plasma Physics of the American Physical Society and of the European Physical Society, and the 2002 IAEA Fusion Energy Conference (FEC 2002). Many of these have been co-authored with experimentalists at DIII-D.
Pediatric Computational Models
NASA Astrophysics Data System (ADS)
Soni, Bharat K.; Kim, Jong-Eun; Ito, Yasushi; Wagner, Christina D.; Yang, King-Hay
A computational model is a computer program that attempts to simulate the behavior of a complex system by solving mathematical equations associated with principles and laws of physics. Computational models can be used to predict the body's response to injury-producing conditions that cannot be simulated experimentally or measured in surrogate/animal experiments. Computational modeling also provides a means by which valid experimental animal and cadaveric data can be extrapolated to a living person. Widely used computational models for injury biomechanics include multibody dynamics and finite element (FE) models. Both multibody and FE methods have been used extensively to study adult impact biomechanics over the past couple of decades.
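As a minimal illustration of what such a model does, the sketch below integrates a one-degree-of-freedom mass-spring-damper idealization of a blunt impact and reports the peak deceleration; all parameter values are invented placeholders, not validated pediatric data.

```python
import numpy as np

def simulate_impact(m=3.5, k=5.0e5, c=300.0, v0=5.0, dt=1e-5, t_end=0.02):
    """Integrate m*x'' + c*x' + k*x = 0 with semi-implicit Euler.

    m  : effective mass (kg)      -- illustrative, not a validated value
    k  : contact stiffness (N/m)  -- illustrative
    c  : damping (N*s/m)          -- illustrative
    v0 : impact velocity (m/s)
    Returns peak deceleration in g's, a common injury metric.
    """
    x, v, peak_a = 0.0, v0, 0.0
    for _ in range(int(t_end / dt)):
        a = -(c * v + k * x) / m          # Newton's second law for the contact
        v += a * dt
        x += v * dt
        peak_a = max(peak_a, abs(a))
        if x < 0:                         # body has rebounded off the surface
            break
    return peak_a / 9.81

print(f"peak deceleration: {simulate_impact():.0f} g")
```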
Computational Modeling of Tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)
1995-01-01
This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.
NASA Technical Reports Server (NTRS)
2000-01-01
Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.
Computer Modeling and Simulation
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two highly important aspects of scientific practice that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and is based on differences in epistemic strategies and scopes.
NASA Astrophysics Data System (ADS)
Grandi, C.; Bonacorsi, D.; Colling, D.; Fisk, I.; Girone, M.
2014-06-01
The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.
Computationally modeling interpersonal trust
Lee, Jin Joo; Knox, W. Bradley; Wormwood, Jolie B.; Breazeal, Cynthia; DeSteno, David
2013-01-01
We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust. PMID:24363649
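As a rough sketch of the machinery behind such models (not the authors' learned parameters), the code below scores a sequence of discretized nonverbal cues under two hand-specified hidden Markov models and picks the more likely trust level; the cue inventory and all probabilities are invented for illustration.

```python
import numpy as np

CUES = {"face_touch": 0, "arms_crossed": 1, "lean_back": 2, "neutral": 3}  # invented inventory

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the forward algorithm (normalized at each step)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two illustrative 2-state models: the "low trust" model emits distrust cues more often.
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
B_low  = np.array([[0.4, 0.3, 0.2, 0.1], [0.1, 0.1, 0.1, 0.7]])
B_high = np.array([[0.1, 0.1, 0.1, 0.7], [0.05, 0.05, 0.1, 0.8]])

seq = [CUES[c] for c in ["face_touch", "arms_crossed", "neutral", "lean_back"]]
label = "low" if forward_loglik(seq, pi, A, B_low) > forward_loglik(seq, pi, A, B_high) else "high"
print("predicted trust level:", label)
```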
Computational models of planning.
Geffner, Hector
2013-07-01
The selection of the action to do next is one of the central problems faced by autonomous agents. Natural and artificial systems address this problem in various ways: action responses can be hardwired, they can be learned, or they can be computed from a model of the situation, the actions, and the goals. Planning is the model-based approach to action selection and a fundamental ingredient of intelligent behavior in both humans and machines. Planning, however, is computationally hard as the consideration of all possible courses of action is not computationally feasible. The problem has been addressed by research in Artificial Intelligence that in recent years has uncovered simple but powerful computational principles that make planning feasible. The principles take the form of domain-independent methods for computing heuristics or appraisals that enable the effective generation of goal-directed behavior even over huge spaces. In this paper, we look at several planning models, at methods that have been shown to scale up to large problems, and at what these methods may suggest about the human mind. WIREs Cogn Sci 2013, 4:341-356. doi: 10.1002/wcs.1233 The authors have declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304223
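A minimal sketch of the idea of domain-independent heuristic planning, using a simple goal-count heuristic over STRIPS-like states (the article surveys far more sophisticated heuristics; the blocks-world action below is invented for illustration):

```python
import heapq, itertools

def plan(init, goal, actions):
    """A* over STRIPS-like states with the domain-independent goal-count
    heuristic h(s) = number of goal atoms not yet true in s."""
    h = lambda s: len(goal - s)
    tie = itertools.count()                     # tiebreaker for the heap
    frontier = [(h(init), 0, next(tie), frozenset(init), [])]
    seen = set()
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)
        if goal <= state:
            return path
        if state in seen:
            continue
        seen.add(state)
        for name, pre, add, delete in actions:
            if pre <= state:                    # action is applicable
                nxt = (state - delete) | add
                heapq.heappush(frontier,
                               (g + 1 + h(nxt), g + 1, next(tie), nxt, path + [name]))
    return None

# One invented blocks-world action: stack block a onto b from the table.
actions = [("stack(a,b)",
            frozenset({"clear(a)", "clear(b)", "on_table(a)"}),   # preconditions
            frozenset({"on(a,b)"}),                               # add effects
            frozenset({"on_table(a)", "clear(b)"}))]              # delete effects
init = frozenset({"clear(a)", "clear(b)", "on_table(a)", "on_table(b)"})
print(plan(init, frozenset({"on(a,b)"}), actions))   # -> ['stack(a,b)']
```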
Chung, S.; McGill, M.; Preece, D.S.
1994-12-31
Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements that interact with the flow of explosive gases and uses explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties, and both loading geometry and explosives equation-of-state parameters. It gives the user a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, in-flight collisions of blocks are not considered, and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, advances in computer technology have increased the computing power of small workstations as well as personal computers (PCs), permitting a much shorter turn-around time for complex computations. DMC can perform a blast simulation in 0.5 hours on a Sun SPARCstation 10-41, while the new SABREX 3.5 produces results for a cast blast in ten seconds on a 486 PC. Predicted percentages of cast and face velocities from both computer codes compare well with measured results from a full-scale cast blast.
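Neither code is reproduced here; as a generic sketch of the explicit time integration that DMC-style discrete-element models perform, consider particles kicked by a decaying gas-pressure pulse (all values invented):

```python
import numpy as np

def integrate_particles(x, v, dt, steps, gas_accel):
    """Explicit time integration of discrete particles. gas_accel(x, t)
    stands in for the acceleration imparted by expanding explosive gases;
    gravity acts in -y. A crude illustration, not the DMC algorithm itself."""
    g = np.array([0.0, -9.81])
    for n in range(steps):
        a = gas_accel(x, n * dt) + g
        v += a * dt
        x += v * dt
        x[:, 1] = np.maximum(x[:, 1], 0.0)   # crude ground contact
    return x

# Illustrative: 5 particles kicked by a horizontally directed, decaying pulse.
x0 = np.stack([np.zeros(5), np.linspace(1.0, 5.0, 5)], axis=1)
pulse = lambda x, t: np.array([50.0 * np.exp(-t / 0.1), 0.0])
final = integrate_particles(x0, np.zeros((5, 2)), 1e-3, 2000, pulse)
print("final cast distances (m):", np.round(final[:, 0], 1))
```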
Computer Modeling Of Atomization
NASA Technical Reports Server (NTRS)
Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.
1994-01-01
Improved mathematical models based on fundamental principles of conservation of mass, energy, and momentum were developed for use in computer simulation of the atomization of jets of liquid fuel in rocket engines. The models are also used to study atomization in terrestrial applications and prove especially useful in designing improved industrial sprays: humidifier water sprays, chemical process sprays, and sprays of molten metal. Because the present improved mathematical models are based on first principles, they are minimally dependent on empirical correlations and better able to represent the hot-flow conditions that prevail in rocket engines and are too severe to be accessible to detailed experimentation.
Computer modelling of minerals
NASA Astrophysics Data System (ADS)
Catlow, C. R. A.; Parker, S. C.
We review briefly the methodology and achievements of computer simulation techniques in modelling structural and defect properties of inorganic solids. Special attention is paid to the role of interatomic potentials in such studies. We discuss the extension of the techniques to the modelling of minerals, and describe recent results on the study of structural properties of silicates. In a paper of this length, it is not possible to give a comprehensive survey of this field, so we concentrate on the recent work of our own group. The reader should consult Tossell (1977), Gibbs (1982), and Busing (1970) for examples of other computational studies of inorganic solids. The techniques we discuss are all based on the principle of energy minimization. Simpler, "bridge-building" procedures based on known bond lengths, of which distance least squares (DLS) techniques are the best known, are discussed, for example, in Dempsey and Strens (1974).
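As a schematic of the energy-minimization principle underlying these techniques (with a Lennard-Jones-plus-Coulomb stand-in for the fitted interatomic potentials the text discusses; all parameters are placeholders, not a real mineral potential):

```python
import numpy as np
from scipy.optimize import minimize

EPS, SIG, KE = 0.1, 2.5, 14.399          # eV, Angstrom, e^2/(4*pi*eps0) in eV*Angstrom
q = np.array([+1.0, -1.0, +1.0, -1.0])   # alternating ionic charges (illustrative)

def cluster_energy(flat):
    """Sum of pair energies: Lennard-Jones repulsion/dispersion plus Coulomb."""
    r = flat.reshape(-1, 3)
    e = 0.0
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            d = np.linalg.norm(r[i] - r[j])
            e += 4 * EPS * ((SIG / d) ** 12 - (SIG / d) ** 6) + KE * q[i] * q[j] / d
    return e

# Relax a small 4-ion cluster from a random start.
rng = np.random.default_rng(1)
res = minimize(cluster_energy, rng.normal(scale=3.0, size=12), method="BFGS")
print("relaxed cluster energy (eV):", round(res.fun, 3))
```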
Understanding student computational thinking with computational modeling
NASA Astrophysics Data System (ADS)
Aiken, John M.; Caballero, Marcos D.; Douglas, Scott S.; Burk, John B.; Scanlon, Erin M.; Thoms, Brian D.; Schatz, Michael F.
2013-01-01
Recently, the National Research Council's framework for next generation science standards highlighted "computational thinking" as one of its "fundamental practices". Ninth-grade students taking a physics course that employed Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, a written essay, and a series of think-aloud interviews, in which the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than observational) terms tended to have more success in the programming exercise.
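The kind of model the students wrote can be sketched outside VPython as well; here is the same explicit update loop (velocity and position advanced in small time steps) for a baseball with air drag, in plain numpy so it runs without a 3D display. Ball properties are standard textbook values:

```python
import numpy as np

m, Cd, rho, Aball = 0.145, 0.35, 1.2, 0.0042   # kg, -, kg/m^3, m^2
g = np.array([0.0, -9.81, 0.0])

pos = np.array([0.0, 1.0, 0.0])
vel = np.array([30.0, 15.0, 0.0])               # throw: 30 m/s forward, 15 m/s up
dt = 0.001
while pos[1] > 0.0:
    Fdrag = -0.5 * rho * Cd * Aball * np.linalg.norm(vel) * vel
    vel = vel + (g + Fdrag / m) * dt            # force determines the CHANGE in motion
    pos = pos + vel * dt
print(f"range with air drag: {pos[0]:.1f} m")
```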
NASA Technical Reports Server (NTRS)
Green, Terry J.
1988-01-01
A Polymer Molecular Analysis Display System (p-MADS) was developed for computer modeling of polymers. This method of modeling allows for the theoretical calculation of molecular properties such as equilibrium geometries, conformational energies, heats of formation, crystal packing arrangements, and other properties. Furthermore, p-MADS has the following capabilities: constructing molecules from internal coordinates (bond lengths, angles, and dihedral angles), Cartesian coordinates (such as X-ray structures), or stick drawings; manipulating molecules using graphics and making hard-copy representations of the molecules on a graphics printer; and performing geometry optimization calculations on molecules using the methods of molecular mechanics or molecular orbital theory.
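p-MADS itself is not documented here; the sketch below shows the standard geometric step behind constructing molecules from internal coordinates: placing a new atom at a given bond length, bond angle, and dihedral relative to three already-placed atoms (illustrative butane-like values):

```python
import numpy as np

def place_atom(a, b, c, bond, angle, dihedral):
    """Return Cartesian coordinates of atom d from internal coordinates:
    |c-d| = bond, angle(b,c,d) = angle, dihedral(a,b,c,d) = dihedral.
    This is the standard step behind Z-matrix molecule construction."""
    bc = c - b
    bc /= np.linalg.norm(bc)
    n = np.cross(b - a, bc)
    n /= np.linalg.norm(n)
    m = np.cross(n, bc)
    d_local = bond * np.array([-np.cos(angle),
                               np.sin(angle) * np.cos(dihedral),
                               np.sin(angle) * np.sin(dihedral)])
    return c + d_local[0] * bc + d_local[1] * m + d_local[2] * n

# Illustrative: extend a butane-like carbon backbone in the anti conformation.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.54, 0.0, 0.0])
c = b + 1.54 * np.array([np.cos(np.radians(69)), np.sin(np.radians(69)), 0.0])
d = place_atom(a, b, c, 1.54, np.radians(111), np.radians(180.0))
print("new atom position:", np.round(d, 3))
```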
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2010-01-01
A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as functions of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).
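As a schematic of the Monte Carlo idea (not MIRO's actual code), the snippet below estimates an angle-averaged photon escape probability at a point in a spherically symmetric 1/r² coma by random angular sampling; the production rate, outflow speed, and cross-section are invented values:

```python
import numpy as np

def mean_escape_probability(r0, Q=1e28, v=800.0, sigma=1e-19, n_rays=20000, seed=2):
    """Monte Carlo estimate of the angle-averaged photon escape probability
    at radius r0 (m) in a coma with number density n(r) = Q / (4*pi*v*r^2)
    (Haser-like; Q, v, sigma are invented). The random angular sampling is
    the same device used to estimate local mean intensities."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-1.0, 1.0, n_rays)          # cosine of angle to outward radial
    n0 = Q / (4 * np.pi * v)                     # n(r) * r^2 = n0 everywhere
    sin_t = np.sqrt(np.clip(1.0 - mu**2, 1e-12, None))
    # Column density along a straight ray to infinity: N = n0 * t / (r0 * sin t),
    # with t = arccos(mu); for the outward radial ray this reduces to n0 / r0.
    N = n0 * np.arccos(mu) / (r0 * sin_t)
    return np.exp(-sigma * N).mean()

print(f"angle-averaged escape probability at 100 km: {mean_escape_probability(1e5):.3f}")
```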
Computer modeling of photodegradation
NASA Technical Reports Server (NTRS)
Guillet, J.
1986-01-01
A computer program to simulate the photodegradation of materials exposed to terrestrial weathering environments is being developed. Input parameters include the solar spectrum, the daily levels and variations of temperature and relative humidity, and material properties such as those of EVA. A brief description of the program, its operating principles, and how it works is given first. The presentation then focuses on recent work simulating aging in a normal, terrestrial day-night cycle. This is significant, as almost all accelerated aging schemes maintain constant illumination without a dark cycle, and this may be a critical factor missing from accelerated aging schemes. For outdoor aging, the computer model indicates that the nightly dark cycle has a dramatic influence on the chemistry of photothermal degradation, and hints that a dark cycle may be needed in an accelerated aging scheme.
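A toy version of the day-night effect can be sketched as follows; the intermediate-species mechanism and all rate constants are invented, and serve only to show why a fluctuating light history can age a sample differently than constant illumination at the same dose:

```python
# Light builds up a reactive intermediate (hydroperoxide-like) that decays
# thermally and attacks the polymer; the quadratic damage law makes the
# time history of illumination matter, not just the total dose.
def total_damage(days=50, dt=0.01, k_build=10.0, k_decay=10.0, k_dmg=0.05, diurnal=True):
    rooh, damage = 0.0, 0.0
    for step in range(int(days / dt)):
        hour = (step * dt * 24.0) % 24.0
        intensity = (1.0 if 6.0 <= hour < 18.0 else 0.0) if diurnal else 0.5
        rooh += (k_build * intensity - k_decay * rooh) * dt
        damage += k_dmg * rooh ** 2 * dt       # nonlinear attack: cycling matters
    return damage

print("damage with day-night cycle:", round(total_damage(diurnal=True), 2))
print("damage, constant half-sun  :", round(total_damage(diurnal=False), 2))
```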
NASA Astrophysics Data System (ADS)
Kopper, Claudio; Antares Collaboration
2013-10-01
Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.
Computational modelling of atherosclerosis.
Parton, Andrew; McGilligan, Victoria; O'Kane, Maurice; Baldrick, Francina R; Watterson, Steven
2016-07-01
Atherosclerosis is one of the principal pathologies of cardiovascular disease, with blood cholesterol a significant risk factor. The World Health Organization estimates that approximately 2.5 million deaths occur annually because of the risk from elevated cholesterol, with 39% of adults worldwide at future risk. Atherosclerosis emerges from the combination of many dynamical factors, including haemodynamics, endothelial damage, innate immunity and sterol biochemistry. Despite its significance to public health, the dynamics that drive atherosclerosis remain poorly understood. As a disease that depends on multiple factors operating on different length scales, the natural framework to apply to atherosclerosis is mathematical and computational modelling. A computational model provides an integrated description of the disease and serves as an in silico experimental system from which we can learn about the disease and develop therapeutic hypotheses. Although the work completed in this area to date has been limited, there are clear signs that interest is growing and that a nascent field is establishing itself. This article discusses the current state of modelling in this area, bringing together many recent results for the first time. We review the work that has been done, discuss its scope and highlight the gaps in our understanding that could yield future opportunities. PMID:26438419
Chung, S.; McGill, M.; Preece, D.S.
1994-07-01
Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. The more overburden removed by explosives, the less blasted material there is left to be transported with mechanical equipment, such as draglines and trucks. In order to optimize the percentage of rock that is cast, a higher powder factor than normal is required, plus an initiation technique designed to produce a much greater degree of horizontal muck movement. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements that interact with the flow of explosive gases and uses explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties, and both loading geometry and explosives equation-of-state parameters. It gives the user a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, in-flight collisions of blocks are not considered, and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, DMC can perform a blast simulation in 0.5 hours on a Sun SPARCstation 10-41, while the new SABREX 3.5 produces results for a cast blast in ten seconds on a 486 PC. Predicted percentages of cast and face velocities from both computer codes compare well with measured results from a full-scale cast blast.
Computational modelling of polymers
NASA Technical Reports Server (NTRS)
Celarier, Edward A.
1991-01-01
Polymeric materials and polymer/graphite composites show a very diverse range of material properties, many of which make them attractive candidates for a variety of high-performance engineering applications. Their properties are ultimately determined largely by their chemical structure and the conditions under which they are processed. It is the aim of computational chemistry to be able to simulate candidate polymers on a computer and determine what their likely material properties will be. A number of commercially available software packages purport to predict the material properties of samples, given the chemical structures of their constituent molecules. One such system, Cerius, has been in use at LaRC. It is composed of a number of modules, each of which performs a different kind of calculation on a molecule in the program's workspace. Of particular interest is evaluating the suitability of this program to aid in the study of microcrystalline polymeric materials. One of the first model systems examined was benzophenone. The results of this investigation are discussed.
Workshop on Computational Turbulence Modeling
Not Available
1993-01-01
This document contains presentations given at the Workshop on Computational Turbulence Modeling held 15-16 September 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow. Separate abstracts have been prepared for articles from this report.
Workshop on Computational Turbulence Modeling
NASA Technical Reports Server (NTRS)
1993-01-01
This document contains presentations given at the Workshop on Computational Turbulence Modeling held 15-16 September 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.
Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Computer-Aided Geometry Modeling
NASA Technical Reports Server (NTRS)
Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)
1984-01-01
Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.
Sierra Toolkit computational mesh conceptual model.
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-03-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
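A minimal, serial sketch of such a domain model (loosely following the report's vocabulary of entity ranks, relations, parts, and fields; this is an illustration of the concepts, not the Sierra Toolkit API):

```python
from collections import defaultdict

class Mesh:
    """Entities of different ranks (node=0, edge=1, face=2, element=3),
    downward connectivity relations, parts that group entities, and fields
    that attach data to entities."""
    def __init__(self):
        self.entities = {}                       # id -> rank
        self.connectivity = defaultdict(list)    # id -> list of lower-rank ids
        self.parts = defaultdict(set)            # part name -> ids
        self.fields = defaultdict(dict)          # field name -> {id: value}

    def declare_entity(self, eid, rank, part="universal"):
        self.entities[eid] = rank
        self.parts[part].add(eid)

    def declare_relation(self, higher, lower):
        assert self.entities[higher] > self.entities[lower], "relations point downward"
        self.connectivity[higher].append(lower)

# Build one quad element (rank 2) over four nodes (rank 0).
m = Mesh()
for nid in ("n1", "n2", "n3", "n4"):
    m.declare_entity(nid, rank=0)
m.declare_entity("q1", rank=2, part="block_1")
for nid in ("n1", "n2", "n3", "n4"):
    m.declare_relation("q1", nid)
m.fields["temperature"].update({nid: 300.0 for nid in ("n1", "n2", "n3", "n4")})
print("q1 connects to:", m.connectivity["q1"])
```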
NASA Technical Reports Server (NTRS)
Patel, Umesh D.; DellaTorre, Edward; Day, John H. (Technical Monitor)
2001-01-01
A fast differential equation approach for the DOK model has been extended to the CMH model. A cobweb technique for calculating the CMH model is also presented. The two techniques are contrasted from the points of view of flexibility and computation time.
NASA Technical Reports Server (NTRS)
Franz, J. R.
1993-01-01
Numerical calculations of the electronic properties of liquid II-VI semiconductors, particularly CdTe and ZnTe, were performed. The measured conductivity of these liquid alloys was modeled by assuming that the dominant temperature effect is the increase in the number of dangling bonds with increasing temperature. For low to moderate values of electron correlation, the calculated conductivity as a function of dangling bond concentration closely follows the measured conductivity as a function of temperature. Both the temperature dependence of the chemical potential and the thermal smearing in the region of the Fermi surface have a large effect on calculated values of conductivity.
Computational models of epileptiform activity.
Wendling, Fabrice; Benquet, Pascal; Bartolomei, Fabrice; Jirsa, Viktor
2016-02-15
We reviewed computer models that have been developed to reproduce and explain epileptiform activity. Unlike other already-published reviews on computer models of epilepsy, the proposed overview starts from the various types of epileptiform activity encountered during both interictal and ictal periods. Computational models proposed so far in the context of partial and generalized epilepsies are classified according to the following taxonomy: neural mass, neural field, detailed network and formal mathematical models. Insights gained about interictal epileptic spikes and high-frequency oscillations, about fast oscillations at seizure onset, about seizure initiation and propagation, about spike-wave discharges and about status epilepticus are described. This review shows the richness and complementarity of the various modeling approaches as well as the fruitful contribution of the computational neuroscience community in the field of epilepsy research. It shows that models have progressively gained acceptance and are now considered as an efficient way of integrating structural, functional and pathophysiological data about neural systems into "coherent and interpretable views". The advantages, limitations and future of modeling approaches are discussed. Perspectives in epilepsy research and clinical epileptology indicate that very promising directions are foreseen, like model-guided experiments or model-guided therapeutic strategy, among others. PMID:25843066
Computational modeling of properties
NASA Technical Reports Server (NTRS)
Franz, Judy R.
1994-01-01
A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strong-scattering regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as functions of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.
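The Green-function calculation itself is not reproduced here, but the generic linear-response step it feeds can be sketched: given an energy-dependent conductivity σ(E) and a chemical potential μ, the dc conductivity and thermopower follow from Fermi-window integrals (the σ(E) below is an invented defect-band shape):

```python
import numpy as np

kB = 8.617e-5                                   # Boltzmann constant, eV/K

def fermi_window(E, mu, T):
    """-df/dE for the Fermi function: the thermal weighting in the
    Kubo-Greenwood transport integrals."""
    x = (E - mu) / (kB * T)
    return 1.0 / (4 * kB * T * np.cosh(x / 2) ** 2)

def transport(E, sigma_E, mu, T):
    """dc conductivity and thermopower from an energy-dependent sigma(E);
    with E in eV the thermopower comes out in volts per kelvin."""
    w = fermi_window(E, mu, T)
    sigma_dc = np.trapz(sigma_E * w, E)
    S = -np.trapz(sigma_E * (E - mu) * w, E) / (sigma_dc * T)
    return sigma_dc, S

# Invented sigma(E): a smooth defect band centered 0.2 eV above mu.
E = np.linspace(-1.0, 1.0, 4001)
sigma_E = np.exp(-((E - 0.2) / 0.15) ** 2)
for T in (300, 600, 900):
    s, S = transport(E, sigma_E, mu=0.0, T=T)
    print(f"T={T} K: sigma_dc={s:.3e}, S={S:.3e} V/K")
```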
Efficient Computational Model of Hysteresis
NASA Technical Reports Server (NTRS)
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
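A direct sketch of the structure described above (deadband widths and gains invented; in practice they are fit to displacement-versus-voltage data):

```python
import numpy as np

def backlash_model(voltages, widths, gains):
    """Hysteresis as an electrically parallel combination of backlash (play)
    operators: element i has deadband width w_i and output gain g_i, and
    w_0 = 0 gives the linear term. The output is the gain-weighted sum of
    the element states."""
    y = np.zeros(len(widths))                    # per-element internal states
    out = []
    for v in voltages:
        lo, hi = v - widths / 2, v + widths / 2  # deadband limits per element
        y = np.minimum(hi, np.maximum(lo, y))    # play-operator update
        out.append(np.dot(gains, y))
    return np.array(out)

# Illustrative parameters: one linear element plus three hysteretic ones.
widths = np.array([0.0, 2.0, 4.0, 6.0])
gains  = np.array([0.5, 0.2, 0.2, 0.1])
v = np.concatenate([np.linspace(0, 10, 50), np.linspace(10, -10, 100),
                    np.linspace(-10, 10, 100)])
disp = backlash_model(v, widths, gains)
print("displacement range:", round(disp.min(), 2), "to", round(disp.max(), 2))
```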
Ch. 33 Modeling: Computational Thermodynamics
Besmann, Theodore M
2012-01-01
This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.
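As a minimal illustration of the equilibrium calculation such codes perform, the sketch below minimizes the total Gibbs energy of an ideal-gas H2O/H2/O2 mixture subject to element balance; the standard-state Gibbs energies are rough values for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

# Species: H2O, H2, O2 at 3000 K and 1 bar; g0 holds approximate Gibbs
# energies of formation (J/mol) -- illustrative numbers, not database values.
R, T = 8.314, 3000.0
g0 = np.array([-77.0e3, 0.0, 0.0])

def gibbs(n):
    n = np.maximum(n, 1e-12)              # keep the logarithms defined
    return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

# Element balance for a 1 mol H2O feed: H and O atom counts are conserved.
cons = [{"type": "eq", "fun": lambda n: 2 * n[0] + 2 * n[1] - 2.0},   # H
        {"type": "eq", "fun": lambda n: n[0] + 2 * n[2] - 1.0}]       # O

res = minimize(gibbs, x0=np.array([0.9, 0.1, 0.05]),
               bounds=[(1e-10, None)] * 3, constraints=cons, method="SLSQP")
print("equilibrium moles (H2O, H2, O2):", np.round(res.x, 4))
```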
Computational Modeling of Multiphase Reactors.
Joshi, J B; Nandakumar, K
2015-01-01
Multiphase reactors are very common in chemical industry, and numerous review articles exist that are focused on types of reactors, such as bubble columns, trickle beds, fluid catalytic beds, etc. Currently, there is a high degree of empiricism in the design process of such reactors owing to the complexity of coupled flow and reaction mechanisms. Hence, we focus on synthesizing recent advances in computational and experimental techniques that will enable future designs of such reactors in a more rational manner by exploring a large design space with high-fidelity models (computational fluid dynamics and computational chemistry models) that are validated with high-fidelity measurements (tomography and other detailed spatial measurements) to provide a high degree of rigor. Understanding the spatial distributions of dispersed phases and their interaction during scale up are key challenges that were traditionally addressed through pilot scale experiments, but now can be addressed through advanced modeling. PMID:26134737
Computational models of adult neurogenesis
NASA Astrophysics Data System (ADS)
Cecchi, Guillermo A.; Magnasco, Marcelo O.
2005-10-01
Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models in which new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas like the olfactory bulb and the dentate gyrus.
Models for computing combat risk
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2002-07-01
Combat always involves uncertainty, and uncertainty entails risk. To ensure that a combat task is prosecuted with the desired probability of success, the task commander has to devise an appropriate task force and then adjust it continuously in the course of battle. In order to do so, he has to evaluate how the probability of task success is related to the structure, capabilities, and numerical strengths of the combatants. For this purpose, predictive models of combat dynamics, for combats in which the combatants fire asynchronously at random instants, are developed from first principles. Combats involving forces with both unlimited and limited ammunition supply are studied and modeled by stochastic Markov processes. In addition to the Markov models, another class of models, first proposed by Brown, was explored. These models compute directly the probability of win, in which we are primarily interested, without integrating the state probability equations. Experiments confirm that they produce exactly the same results at much lower computational cost.
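A minimal version of such a model (in the spirit of the direct win-probability computation attributed to Brown, with invented kill rates): in state (i, j), the next casualty falls on side B with probability proportional to side A's total firing rate, giving a simple recursion.

```python
from functools import lru_cache

def p_win(m, n, lam_a=1.0, lam_b=0.8):
    """Probability that side A (m shooters, kill rate lam_a each) annihilates
    side B (n shooters, rate lam_b each) when combatants fire asynchronously
    at random exponential instants -- a continuous-time Markov duel."""
    @lru_cache(maxsize=None)
    def p(i, j):
        if j == 0:
            return 1.0
        if i == 0:
            return 0.0
        ra, rb = i * lam_a, j * lam_b    # total firing rates of each side
        # The next casualty is on side B with probability ra / (ra + rb).
        return (ra * p(i, j - 1) + rb * p(i - 1, j)) / (ra + rb)
    return p(m, n)

print("P(A wins), 10 vs 10:", round(p_win(10, 10), 3))
print("P(A wins), 10 vs 12:", round(p_win(10, 12), 3))
```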
Computational Modeling Method for Superalloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John
1997-01-01
Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations of the methods or the computational demands associated with such calculations, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys. Together with experimental results from previous and current research that validate its use for large-scale simulations, the method provides the ideal grounds for a computationally economical and physically sound procedure for supplementing experimental work, at great savings in cost and time.
Climate Modeling Computing Needs Assessment
NASA Astrophysics Data System (ADS)
Petraska, K. E.; McCabe, J. D.
2011-12-01
This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.
Hydronic distribution system computer model
Andrews, J.W.; Strasser, J.J.
1994-10-01
A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.
Computational Modeling for Bedside Application
Kerckhoffs, Roy C.P.; Narayan, Sanjiv M.; Omens, Jeffrey H.; Mulligan, Lawrence J.; McCulloch, Andrew D.
2008-01-01
With growing computer power, novel diagnostic and therapeutic medical technologies, coupled with an increasing knowledge of pathophysiology from gene to organ systems, it is increasingly feasible to apply multi-scale patient-specific modeling based on proven disease mechanisms to guide and predict the response to therapy in many aspects of medicine. This is an exciting and relatively new approach, for which efficient methods and computational tools are of the utmost importance. Already, investigators have designed patient-specific models in almost all areas of human physiology. Not only will these models be useful on a large scale in the clinic to predict and optimize the outcome from surgery and non-interventional therapy, but they will also provide pathophysiologic insights from cell to tissue to organ system, and therefore help to understand why specific interventions succeed or fail. PMID:18598988
Visualizing ultrasound through computational modeling
NASA Technical Reports Server (NTRS)
Guo, Theresa W.
2004-01-01
The Doppler Ultrasound Hematocrit Project (DHP) hopes to find non-invasive methods of determining a person's blood characteristics. Because of the limits of microgravity and the space travel environment, it is important to find non-invasive methods of evaluating the health of persons in space. Presently, there is no well-developed method of determining blood composition non-invasively. This project hopes to use ultrasound and Doppler signals to evaluate hematocrit, the percentage by volume of red blood cells within whole blood. These non-invasive techniques may also be developed for use on Earth for trauma patients, where invasive measures might be detrimental. Computational modeling is a useful tool for collecting preliminary information and predictions for the laboratory research. We hope to find and develop a computer program that will be able to simulate the ultrasound signals the project will work with. Simulated models of test conditions will more easily show what might be expected from laboratory results and thus help the research group make informed decisions before and during experimentation. There are several existing Matlab-based computer programs available, designed to interpret and simulate ultrasound signals. These programs will be evaluated to find which is best suited for the project's needs. The criteria of evaluation are: 1) the program must be able to specify transducer properties and specify transmitted and received signals; 2) the program must be able to simulate ultrasound signals through different attenuating mediums; 3) the program must be able to process moving targets in order to simulate the Doppler effects that are associated with blood flow; 4) the program should be user friendly and adaptable to various models. After a computer program is chosen, two simulation models will be constructed. These models will simulate and interpret an RF data signal and a Doppler signal.
Parallel computing in enterprise modeling.
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Cosmic logic: a computational model
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Workshop on Computational Turbulence Modeling
NASA Technical Reports Server (NTRS)
Shabbir, A. (Compiler); Shih, T.-H. (Compiler); Povinelli, L. A. (Compiler)
1994-01-01
The purpose of this meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Various turbulence models have been developed and applied to different turbulent flows over the past several decades and it is becoming more and more urgent to assess their performance in various complex situations. In order to help users in selecting and implementing appropriate models in their engineering calculations, it is important to identify the capabilities as well as the deficiencies of these models. This also benefits turbulence modelers by permitting them to further improve upon the existing models. This workshop was designed for exchanging ideas and enhancing collaboration between different groups in the Lewis community who are using turbulence models in propulsion related CFD. In this respect this workshop will help the Lewis goal of excelling in propulsion related research. This meeting had seven sessions for presentations and one panel discussion over a period of two days. Each presentation session was assigned to one or two branches (or groups) to present their turbulence related research work. Each group was asked to address at least the following points: current status of turbulence model applications and developments in the research; progress and existing problems; and requests about turbulence modeling. The panel discussion session was designed for organizing committee members to answer management and technical questions from the audience and to make concluding remarks.
MODEL IDENTIFICATION AND COMPUTER ALGEBRA.
Bollen, Kenneth A; Bauldry, Shawn
2010-10-01
Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods. PMID:21769158
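As a small illustration of the CAS strategy (using sympy, with the classic one-factor, three-indicator model rather than any example from the paper): solving the covariance equations explicitly for each loading demonstrates identification.

```python
import sympy as sp

# One-factor model x_i = lambda_i * xi + e_i with unit factor variance:
# the implied covariances are sigma_ij = lambda_i * lambda_j. Obtaining an
# explicit algebraic solution for every loading from the three covariance
# equations demonstrates (global) identification of the loadings.
lam1, lam2, lam3 = sp.symbols("lambda1 lambda2 lambda3", positive=True)
s12, s13, s23 = sp.symbols("sigma12 sigma13 sigma23", positive=True)

eqs = [sp.Eq(lam1 * lam2, s12), sp.Eq(lam1 * lam3, s13), sp.Eq(lam2 * lam3, s23)]
sol = sp.solve(eqs, [lam1, lam2, lam3], dict=True)
print(sol[0][lam1])   # sqrt(sigma12*sigma13/sigma23): a unique, explicit solution
```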
Los Alamos Center for Computer Security formal computer security model
Dreicer, J.S.; Hunteman, W.J.; Markin, J.T.
1989-01-01
This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The need to test and verify DOE computer security policy implementation first motivated this effort. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present formal mathematical models for computer security. The fundamental objective of computer security is to prevent the unauthorized and unaccountable access to a system. The inherent vulnerabilities of computer systems result in various threats from unauthorized access. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The model is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell and LaPadula abstract sets of objects and subjects. 6 refs.
Computer modeling of piezoresistive gauges
Nutt, G. L.; Hallquist, J. O.
1981-08-07
A computer model of a piezoresistive gauge subject to shock loading is developed. The time-dependent, two-dimensional response of the gauge is calculated. The stress and strain components of the gauge are determined assuming elastic-plastic material properties. The model is compared with experiment for four cases: an ytterbium foil gauge in a PMMA medium subjected to a 0.5 GPa plane shock wave, with the gauge presented to the shock with its flat surface both parallel and perpendicular to the front, and a similar pair of comparisons for a manganin foil subjected to a 2.7 GPa shock. The signals are also compared with a calibration equation derived with the gauge and medium properties accounted for, but with the assumption that the gauge is in stress equilibrium with the shocked medium.
A Computationally Efficient Bedrock Model
NASA Astrophysics Data System (ADS)
Fastook, J. L.
2002-05-01
Full treatments of the Earth's crust, mantle, and core for ice sheet modeling are often computationally overwhelming, in that the requirements to calculate a full self-gravitating spherical Earth model for the time-varying load history of an ice sheet are considerably greater than the computational requirements for the ice dynamics and thermodynamics combined. For this reason, we adopt a "reasonable" approximation for the behavior of the deforming bedrock beneath the ice sheet. This simpler model of the Earth treats the crust as an elastic plate supported from below by a hydrostatic fluid. Conservation of linear and angular momentum for an elastic plate leads to the classical Poisson-Kirchhoff fourth-order differential equation in the crustal displacement. By adding a time-dependent term, this treatment allows for an exponentially decaying response of the bed to loading and unloading events. This component of the ice sheet model (along with the ice dynamics and thermodynamics) is solved using the Finite Element Method (FEM). C1 FEMs are difficult to implement in more than one dimension, and as such the engineering community has turned away from classical Poisson-Kirchhoff plate theory to treatments such as Reissner-Mindlin plate theory, which are able to accommodate transverse shear and hence require only C0 continuity of basis functions (only the function, and not the derivative, is required to be continuous at the element boundary) (Hughes 1987). This method reduces the complexity of the C1 formulation by adding additional degrees of freedom (the transverse shear in x and y) at each node. This "reasonable" solution is compared with two self-gravitating spherical Earth models ((1) Ivins et al. (1997) and James and Ivins (1998), and (2) the Tushingham and Peltier (1991) ICE3G model run by Jim Davis and Glenn Milne), as well as with preliminary results of residual rebound rates measured with GPS by the BIFROST project. Modeled responses of a simulated ice sheet experiencing a
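A one-dimensional finite-difference analogue of this treatment (plate flexure plus hydrostatic restoring force, relaxed exponentially toward equilibrium; round illustrative numbers, not the model's calibrated constants) can be sketched as follows:

```python
import numpy as np

n, L = 200, 2.0e6                      # grid points, domain length (m)
dx = L / (n - 1)
D = 1.0e23                             # flexural rigidity (N*m), illustrative
rho_g = 3300.0 * 9.81                  # mantle restoring coefficient (N/m^3)

# Ice load: 2000 m of ice over the central quarter of the domain.
x = np.linspace(0, L, n)
q = np.where(abs(x - L / 2) < L / 8, 917.0 * 9.81 * 2000.0, 0.0)

# Fourth-difference flexure operator, D w'''' + rho*g*w = q, with the
# deflection w clamped to zero at the domain ends.
A = np.zeros((n, n))
for i in range(2, n - 2):
    A[i, i - 2 : i + 3] = D * np.array([1, -4, 6, -4, 1]) / dx**4
    A[i, i] += rho_g
A[0, 0] = A[1, 1] = A[-2, -2] = A[-1, -1] = 1.0
b = q.copy()
b[[0, 1, -2, -1]] = 0.0

w_eq = np.linalg.solve(A, b)           # equilibrium flexural/isostatic dip
tau = 3000.0                           # relaxation time (yr), illustrative
w = np.zeros(n)
for _ in range(100):                   # 100 steps of 100 yr each
    w += (w_eq - w) * (100.0 / tau)    # exponential decay toward equilibrium
print(f"max equilibrium deflection: {w_eq.max():.0f} m; after 10 kyr: {w.max():.0f} m")
```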
A Model Computer Literacy Course.
ERIC Educational Resources Information Center
Orndorff, Joseph
Designed to address the varied computer skill levels of college students, this proposed computer literacy course would be modular in format, with modules tailored to address various levels of expertise and permit individualized instruction. An introductory module would present both the history and future of computers and computing, followed by an…
A Computational Theory of Modelling
NASA Astrophysics Data System (ADS)
Rossberg, Axel G.
2003-04-01
A metatheory is developed which characterizes the relationship between a modelled system, which complies with some "basic theory", and a model, which does not, and yet reproduces important aspects of the modelled system. A model is represented by an (in a certain sense, s.b.) optimal algorithm which generates data that describe the model's state or evolution complying with a "reduced theory". Theories are represented by classes of (in a similar sense, s.b.) optimal algorithms that test if their input data comply with the theory. The metatheory does not prescribe the formalisms (data structure, language) to be used for the description of states or evolutions. Transitions to other formalisms and loss of accuracy, common to theory reduction, are explicitly accounted for. The basic assumption of the theory is that resources such as the code length (~ programming time) and the computation time for modelling and testing are costly, but the relative cost of each resource is unknown. Thus, if there is an algorithm a for which there is no other algorithm b solving the same problem but using less of each resource, then a is considered optimal. For tests (theories), the set X of wrongly admitted inputs is treated as another resource. It is assumed that X1 is cheaper than X2 when X1 ⊂ X2 (X1 ≠ X2). Depending on the problem, the algorithmic complexity of a reduced theory can be smaller or larger than that of the basic theory. The theory might help to distinguish actual properties of complex systems from mere mental constructs. An application to complex spatio-temporal patterns is discussed.
Computational model for chromosomal instability
NASA Astrophysics Data System (ADS)
Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina
2015-03-01
Faithful segregation of genetic material during cell division requires alignment of the chromosomes between the spindle poles and attachment of their kinetochores to each of the poles. Failure of these complex dynamical processes leads to chromosomal instability (CIN), a characteristic feature of several diseases including cancer. While a multitude of biological factors regulating chromosome congression and bi-orientation have been identified, it is still unclear how they are integrated into a coherent picture. Here we address this issue by a three dimensional computational model of motor-driven chromosome congression and bi-orientation. Our model reveals that successful cell division requires control of the total number of microtubules: if this number is too small bi-orientation fails, while if it is too large not all the chromosomes are able to congress. The optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles. Our results shed new light on the origin of several pathological conditions related to chromosomal instability.
Computational modeling of membrane proteins
Leman, Julia Koehler; Ulmschneider, Martin B.; Gray, Jeffrey J.
2014-01-01
The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the protein databank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, only highlighting how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefitted from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade. PMID:25355688
Cupola Furnace Computer Process Model
Seymour Katz
2004-12-31
The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in an optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time, and it has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).
Predictive models and computational toxicology.
Knudsen, Thomas; Martin, Matthew; Chandler, Kelly; Kleinstreuer, Nicole; Judson, Richard; Sipes, Nisha
2013-01-01
Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was launched by EPA in 2007 and is part of the federal Tox21 consortium to develop a cost-effective approach for efficiently prioritizing the toxicity testing of thousands of chemicals and the application of this information to assessing human toxicology. ToxCast addresses this problem through an integrated workflow using high-throughput screening (HTS) of chemical libraries across more than 650 in vitro assays including biochemical assays, human cells and cell lines, and alternative models such as mouse embryonic stem cells and zebrafish embryo development. The initial phase of ToxCast profiled a library of 309 environmental chemicals, mostly pesticidal actives having rich in vivo data from guideline studies that include chronic/cancer bioassays in mice and rats, multigenerational reproductive studies in rats, and prenatal developmental toxicity endpoints in rats and rabbits. The first phase of ToxCast was used to build models that aim to determine how well in vivo animal effects can be predicted solely from the in vitro data. Phase I is now complete and both the in vitro data (ToxCast) and anchoring in vivo database (ToxRefDB) have been made available to the public (http://actor.epa.gov/). As Phase II of ToxCast is now underway, the purpose of this chapter is to review progress to date with ToxCast predictive modeling, using specific examples on developmental and reproductive effects in rats and rabbits with lessons learned during Phase I. PMID:23138916
Disciplines, models, and computers: the path to computational quantum chemistry.
Lenhard, Johannes
2014-12-01
Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990. PMID:25571750
The Fermilab Central Computing Facility architectural model
Nicholls, J.
1989-05-01
The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.
Fast Computation of the Inverse CMH Model
NASA Technical Reports Server (NTRS)
Patel, Umesh D.; Torre, Edward Della; Day, John H. (Technical Monitor)
2001-01-01
A fast computational method based on a differential equation approach for the inverse DOK model has been extended to the inverse CMH model. A cobweb technique for calculating the inverse CMH model is also presented. The two techniques differ in flexibility and computation time.
Predictive Models and Computational Toxicology
Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was l...
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
Computational modeling of vascular anastomoses.
Migliavacca, Francesco; Dubini, Gabriele
2005-06-01
Recent development of computational technology allows a level of knowledge of biomechanical factors in the healthy or pathological cardiovascular system that was unthinkable a few years ago. In particular, computational fluid dynamics (CFD) and computational structural (CS) analyses have been used to evaluate specific quantities, such as fluid and wall stresses and strains, which are very difficult to measure in vivo. Indeed, CFD and CS offer much more variability and resolution than in vitro and in vivo methods, yet computations must be validated by careful comparison with experimental and clinical data. The enormous parallel development of clinical imaging such as magnetic resonance or computed tomography opens a new way toward a detailed patient-specific description of the actual hemodynamics and structural behavior of living tissues. Coupling of CFD/CS and clinical images is becoming a standard evaluation that is expected to become part of the clinical practice in the diagnosis and in the surgical planning in advanced medical centers. This review focuses on computational studies of fluid and structural dynamics of a number of vascular anastomoses: the coronary bypass graft anastomoses, the arterial peripheral anastomoses, the arterio-venous graft anastomoses and the vascular anastomoses performed in the correction of congenital heart diseases. PMID:15772842
Ellis, D; Allaire, J C
1999-09-01
We proposed a mediation model to examine the effects of age, education, computer knowledge, and computer anxiety on computer interest in older adults. We hypothesized that computer knowledge and computer anxiety would fully mediate the effects of age and education on computer interest. A sample of 330 older adults from local senior-citizen apartment buildings completed a survey that included an assessment of the constructs included in the model. Using structural equation modeling, we found that the results supported the hypothesized mediation model. In particular, the effect of computer knowledge operated on computer interest through computer anxiety. The effect of age was not fully mitigated by the other model variables, indicating the need for future research that identifies and models other correlates of age and computer interest. The most immediate application of this research is the finding that a simple 3-item instrument can be used to assess computer interest in older populations. This will help professionals plan and implement computer services in public-access settings for older adults. An additional application of this research is the information it provides for training program designers. PMID:10665203
Climate Modeling using High-Performance Computing
Mirin, A A
2007-02-05
The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.
"Computational Modeling of Actinide Complexes"
Balasubramanian, K
2007-03-07
We will present our recent studies on the computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to the environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl, and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the results for free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal
COLD-SAT Dynamic Model Computer Code
NASA Technical Reports Server (NTRS)
Bollenbacher, G.; Adams, N. S.
1995-01-01
COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
Applications of computer modeling to fusion research
Dawson, J.M.
1989-01-01
Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.
Leverage points in a computer model
NASA Astrophysics Data System (ADS)
Janošek, Michal
2016-06-01
This article focuses on the analysis of leverage points (developed by D. Meadows) in a computer model. The goal is to determine whether such leverage points can be identified in a computer model (using a predator-prey model as an example) and how the particular parameters, their ranges and the monitored variables of the model are associated with the concept of leverage points.
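A leverage-point probe of the kind described can be sketched in a few lines: perturb one parameter of a predator-prey (Lotka-Volterra) model at a time and record the response of a monitored variable. All numbers are illustrative, not taken from the article:

    from scipy.integrate import solve_ivp

    def lotka_volterra(t, y, a, b, c, d):
        prey, pred = y
        return [a * prey - b * prey * pred, c * prey * pred - d * pred]

    def final_prey(params):
        # monitored variable: prey population at the end of the run
        sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0], args=params)
        return sol.y[0, -1]

    base = (1.0, 0.1, 0.02, 0.5)  # (a, b, c, d)
    for i, name in enumerate("abcd"):
        bumped = list(base)
        bumped[i] *= 1.05  # perturb one parameter by 5%
        change = (final_prey(tuple(bumped)) - final_prey(base)) / final_prey(base)
        print(f"parameter {name}: relative response {change:+.3f}")

Parameters whose small perturbations produce disproportionately large responses in the monitored variable are candidates for leverage points in Meadows' sense.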
Model Railroading and Computer Fundamentals
ERIC Educational Resources Information Center
McCormick, John W.
2007-01-01
Less than one half of one percent of all processors manufactured today end up in computers. The rest are embedded in other devices such as automobiles, airplanes, trains, satellites, and nearly every modern electronic device. Developing software for embedded systems requires a greater knowledge of hardware than developing for a typical desktop…
Computational modeling of peripheral pain: a commentary.
Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S
2015-01-01
This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications. PMID:26062616
Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...
Introducing Seismic Tomography with Computational Modeling
NASA Astrophysics Data System (ADS)
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students the possibility to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students the possibility to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
Ranked retrieval of Computational Biology models
2010-01-01
Background The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. First, among a set of candidate models it is difficult to decide which model best suits one's needs. Second, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Results Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. Conclusions The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models. PMID:20701772
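The ranking idea is standard Information Retrieval; the sketch below uses generic TF-IDF weighting with cosine similarity over toy model metadata and is not the BioModels Database implementation (identifiers and annotation strings are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # toy "models": name, annotations and meta-information joined into one text field
    models = {
        "model-A": "calcium oscillation hepatocyte kinetic ODE",
        "model-B": "glycolysis yeast kinetic ODE",
        "model-C": "cell cycle cyclin CDK oscillation",
    }
    ids, docs = zip(*models.items())
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)

    query = "oscillation cell cycle"
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors).ravel()
    for model_id, score in sorted(zip(ids, scores), key=lambda t: -t[1]):
        print(f"{model_id}: relevance {score:.3f}")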
Computational Medicine: Translating Models to Clinical Care
Winslow, Raimond L.; Trayanova, Natalia; Geman, Donald; Miller, Michael I.
2013-01-01
Because of the inherent complexity of coupled nonlinear biological systems, the development of computational models is necessary for achieving a quantitative understanding of their structure and function in health and disease. Statistical learning is applied to high-dimensional biomolecular data to create models that describe relationships between molecules and networks. Multiscale modeling links networks to cells, organs, and organ systems. Computational approaches are used to characterize anatomic shape and its variations in health and disease. In each case, the purposes of modeling are to capture all that we know about disease and to develop improved therapies tailored to the needs of individuals. We discuss advances in computational medicine, with specific examples in the fields of cancer, diabetes, cardiology, and neurology. Advances in translating these computational methods to the clinic are described, as well as challenges in applying models for improving patient health. PMID:23115356
Comprehensive computational model for thermal plasma processing
NASA Astrophysics Data System (ADS)
Chang, C. H.
A new numerical model is described for simulating thermal plasmas containing entrained particles, with emphasis on plasma spraying applications. The plasma is represented as a continuum multicomponent chemically reacting ideal gas, while the particles are tracked as discrete Lagrangian entities coupled to the plasma. The overall computational model is embodied in a new computer code called LAVA. Computational results are presented from a transient simulation of alumina spraying in a turbulent argon-helium plasma jet in air environment, including torch geometry, substrate, and multiple species with chemical reactions. Plasma-particle interactions including turbulent dispersion have been modeled in a fully self-consistent manner.
Modeling communication in cluster computing
Stoica, I.; Sultan, F.; Keyes, D.
1995-12-01
We introduce a model for communication costs in parallel processing environments, called the "hyperbolic model," that generalizes two-parameter dedicated-link models in an analytically simple way. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes and internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. A CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining a good approximation for the service time. In [4] we demonstrate a tight fit of the estimates of the cost of communication based on our model with actual measurements of the communication and synchronization time between end processes. We also show the compatibility of our model (to within a factor of 3/4) with the recently proposed LogP model.
Teaching Environmental Systems Modelling Using Computer Simulation.
ERIC Educational Resources Information Center
Moffatt, Ian
1986-01-01
A computer modeling course in environmental systems and dynamics is presented. The course teaches senior undergraduates to analyze a system of interest, construct a system flow chart, and write computer programs to simulate real world environmental processes. An example is presented along with a course evaluation, figures, tables, and references.…
Computer modeling of human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included as well as models which include motivation. Both models which have associated computer programs, and those that do not, are considered. Since flow diagrams, that assist in constructing computer simulation of such models, were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in construction of more realistic future simulations of human decision making.
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Computer Model Locates Environmental Hazards
NASA Technical Reports Server (NTRS)
2008-01-01
Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
Enhanced absorption cycle computer model
NASA Astrophysics Data System (ADS)
Grossman, G.; Wilk, M.
1993-09-01
Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
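The modular structure described above (unit subroutines contributing governing equations over shared state points, solved simultaneously) can be sketched as follows; the "governing equations" here are crude placeholders, not real absorption-cycle thermodynamics:

    from scipy.optimize import fsolve

    def generator(x):                    # placeholder unit subroutine
        T1, T2, m = x
        return [T2 - (T1 + 40.0),        # fixed temperature lift (placeholder)
                m - 0.05 * (T2 - T1)]    # flow-temperature relation (placeholder)

    def absorber(x):                     # placeholder unit subroutine
        T1, _, _ = x
        return [T1 - 300.0]              # fixed absorber temperature (placeholder)

    def cycle(x):
        # the user's "image" of the cycle: all unit equations over the state points
        return generator(x) + absorber(x)

    T1, T2, m = fsolve(cycle, x0=[290.0, 350.0, 1.0])
    print(f"T1 = {T1:.1f} K, T2 = {T2:.1f} K, flow = {m:.2f} kg/s")

Swapping unit functions in and out of cycle() mirrors the way such a code lets users investigate different configurations and working fluids.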
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
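A minimal sketch of the uniform-spall idea (parabolic scale growth while hot, a fixed fraction of the retained oxide lost on each cooldown); the constants are illustrative, not the model's calibrated values:

    kp = 0.01   # parabolic rate constant, (mg/cm^2)^2 per hour (illustrative)
    dt = 1.0    # hot time per cycle, hours
    q  = 0.02   # fraction of retained scale spalled on each cooldown

    scale = 0.0            # retained oxide mass per unit area
    metal_consumed = 0.0   # cumulative metal converted to oxide
    for cycle in range(1000):
        grown = (scale**2 + kp * dt) ** 0.5 - scale  # parabolic growth increment
        scale += grown
        metal_consumed += grown                      # ignoring stoichiometric factors
        scale *= 1.0 - q                             # uniform spallation on cooling
    print(f"retained scale {scale:.2f}, metal consumed {metal_consumed:.2f} mg/cm^2")

Weight-change kinetics then follow from the balance between the retained scale and the metal lost to spalled oxide.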
A new epidemic model of computer viruses
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Yang, Xiaofan
2014-06-01
This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincare-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to prohibit the virus prevalence.
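The qualitative claim, no virus-free equilibrium and a globally stable endemic one, can be illustrated with a much simpler stand-in than the paper's system: an SIS-type model with a constant infection pressure from removable media (parameter values invented):

    from scipy.integrate import solve_ivp

    beta, gamma, eps = 0.3, 0.1, 0.01   # infection, cure, removable-media rates

    def sis_media(t, y):
        i = y[0]              # infected fraction of computers
        s = 1.0 - i
        return [beta * s * i + eps * s - gamma * i]

    sol = solve_ivp(sis_media, (0, 200), [0.0])
    print(f"endemic infected fraction ~ {sol.y[0, -1]:.3f}")

Because eps > 0 keeps di/dt positive at i = 0, the infected fraction grows from zero and settles at a positive endemic level.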
Computer Model Buildings Contaminated with Radioactive Material
1998-05-19
The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.
Applications of Computational Modeling in Cardiac Surgery
Lee, Lik Chuan; Genet, Martin; Dang, Alan B.; Ge, Liang; Guccione, Julius M.; Ratcliffe, Mark B.
2014-01-01
Although computational modeling is common in many areas of science and engineering, only recently have advances in experimental techniques and medical imaging allowed this tool to be applied in cardiac surgery. Despite its infancy in cardiac surgery, computational modeling has been useful in calculating the effects of clinical devices and surgical procedures. In this review, we present several examples that demonstrate the capabilities of computational cardiac modeling in cardiac surgery. Specifically, we demonstrate its ability to simulate surgery, predict myofiber stress and pump function, and quantify changes to regional myocardial material properties. In addition, issues that would need to be resolved in order for computational modeling to play a greater role in cardiac surgery are discussed. PMID:24708036
Computational modeling and multilevel cancer control interventions.
Morrissey, Joseph P; Lich, Kristen Hassmiller; Price, Rebecca Anhang; Mandelblatt, Jeanne
2012-05-01
This chapter presents an overview of computational modeling as a tool for multilevel cancer care and intervention research. Model-based analyses have been conducted at various "beneath the skin" or biological scales as well as at various "above the skin" or socioecological levels of cancer care delivery. We review the basic elements of computational modeling and illustrate its applications in four cancer control intervention areas: tobacco use, colorectal cancer screening, cervical cancer screening, and racial disparities in access to breast cancer care. Most of these models have examined cancer processes and outcomes at only one or two levels. We suggest ways these models can be expanded to consider interactions involving three or more levels. Looking forward, a number of methodological, structural, and communication barriers must be overcome to create useful computational models of multilevel cancer interventions and population health. PMID:22623597
Economic Analysis. Computer Simulation Models.
ERIC Educational Resources Information Center
Sterling Inst., Washington, DC. Educational Technology Center.
A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This volume of the text discusses the simulation of behavioral relationships among variable elements in an economy and presents…
Computational study of lattice models
NASA Astrophysics Data System (ADS)
Zujev, Aleksander
This dissertation is composed of descriptions of a few projects undertaken to complete my doctorate at the University of California, Davis. Different as they are, their common feature is that they all deal with simulations of lattice models and the physics which results from interparticle interactions. As an example, both the Feynman-Kikuchi model (Chapter 3) and the Bose-Fermi mixture (Chapter 4) deal with the conditions under which superfluid transitions occur. The dissertation is divided into two parts. Part I (Chapters 1-2) is theoretical. It describes the systems we study, superfluidity and particularly superfluid helium, and optical lattices, and the numerical methods for working with them. The use of Monte Carlo methods is another unifying theme of the different projects in this thesis. Part II (Chapters 3-6) deals with applications and consists of four chapters describing different projects. Two of them, the Feynman-Kikuchi model and the Bose-Fermi mixture, are finished and published. The work done on the t - J model, described in Chapter 5, is more preliminary, and the project is far from complete. A preliminary report on it was given at the 2009 APS March meeting. The Isentropic project, described in the last chapter, is finished. A report on it was given at the 2010 APS March meeting, and a paper is in preparation. The quantum simulation program used for the Bose-Fermi mixture project was written by our collaborators Valery Rousseau and Peter Denteneer. I wrote my own code for the other projects.
Computational Methods to Model Persistence.
Vandervelde, Alexandra; Loris, Remy; Danckaert, Jan; Gelens, Lendert
2016-01-01
Bacterial persister cells are dormant cells, tolerant to multiple antibiotics, that are involved in several chronic infections. Toxin-antitoxin modules play a significant role in the generation of such persister cells. Toxin-antitoxin modules are small genetic elements, omnipresent in the genomes of bacteria, which code for an intracellular toxin and its neutralizing antitoxin. In the past decade, mathematical modeling has become an important tool to study the regulation of toxin-antitoxin modules and their relation to the emergence of persister cells. Here, we provide an overview of several numerical methods to simulate toxin-antitoxin modules. We cover both deterministic modeling using ordinary differential equations and stochastic modeling using stochastic differential equations and the Gillespie method. Several characteristics of toxin-antitoxin modules such as protein production and degradation, negative autoregulation through DNA binding, toxin-antitoxin complex formation and conditional cooperativity are gradually integrated in these models. Finally, by including growth rate modulation, we link toxin-antitoxin module expression to the generation of persister cells. PMID:26468111
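Of the methods listed, the Gillespie method is the easiest to show compactly. The sketch below simulates only the first ingredient named above, protein production and degradation, not the full toxin-antitoxin network (rates are illustrative):

    import random

    k_prod, k_deg = 5.0, 0.1   # production rate; per-molecule degradation rate
    t, t_end, n = 0.0, 100.0, 0
    while t < t_end:
        rates = [k_prod, k_deg * n]
        total = sum(rates)
        t += random.expovariate(total)            # waiting time to the next event
        if random.uniform(0.0, total) < rates[0]:
            n += 1                                # production event
        else:
            n -= 1                                # degradation event
    print(f"molecules at t = {t_end}: {n} (mean ~ {k_prod / k_deg:.0f})")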
Parallel computing in atmospheric chemistry models
Rotman, D.
1996-02-01
Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.
A Computational Framework for Realistic Retina Modeling.
Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco
2016-11-01
Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas. PMID:27354192
Computer Modeling of Direct Metal Laser Sintering
NASA Technical Reports Server (NTRS)
Cross, Matthew
2014-01-01
A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
Computing a Comprehensible Model for Spam Filtering
NASA Astrophysics Data System (ADS)
Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael
In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
A computational model of the cerebellum
Travis, B.J.
1990-01-01
The need for realistic computational models of neural microarchitecture is growing increasingly apparent. While traditional neural networks have made inroads on understanding cognitive functions, more realism (in the form of structural and connectivity constraints) is required to explain processes such as vision or motor control. A highly detailed computational model of mammalian cerebellum has been developed. It is being compared to physiological recordings for validation purposes. The model is also being used to study the relative contributions of each component to cerebellar processing. 28 refs., 4 figs.
Mechanistic models in computational social science
NASA Astrophysics Data System (ADS)
Holme, Petter; Liljeros, Fredrik
2015-09-01
Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have a history of over 60 years. They have been used for many different purposes: to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.
Teaching Forest Planning with Computer Models
ERIC Educational Resources Information Center
Howard, Richard A.; Magid, David
1977-01-01
This paper describes a series of FORTRAN IV computer models which are the content-oriented subject matter for a college course in forest planning. The course objectives, the planning problem, and the ten planning aid models are listed. Student comments and evaluation of the program are included. (BT)
Parallel computing for automated model calibration
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.; Vail, Lance W.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
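The workload described, many independent model runs that return only summary statistics, is embarrassingly parallel. A minimal sketch of the pattern (run_model is a hypothetical stand-in for an actual natural resources model, not part of the tool described):

    import random
    from multiprocessing import Pool

    def run_model(params):
        # hypothetical placeholder: one model run, returning only a summary
        # objective (e.g., combined peak-magnitude and timing error)
        peak_scale, timing_shift = params
        error = abs(peak_scale - 1.3) + abs(timing_shift - 0.7)
        return params, error

    if __name__ == "__main__":
        trials = [(random.uniform(0.5, 2.0), random.uniform(0.0, 1.0))
                  for _ in range(10_000)]
        with Pool() as pool:
            results = pool.map(run_model, trials)
        best_params, best_error = min(results, key=lambda r: r[1])
        print(f"best parameters {best_params}, objective {best_error:.4f}")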
Computer Modeling and Visualization in Design Technology: An Instructional Model.
ERIC Educational Resources Information Center
Guidera, Stan
2002-01-01
Design visualization can increase awareness of issues related to perceptual and psychological aspects of design that computer-assisted design and computer modeling may not allow. A pilot university course developed core skills in modeling and simulation using visualization. Students were consistently able to meet course objectives. (Contains 16…
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.
Concepts to accelerate water balance model computation
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Casper, Markus; Gemmar, Peter
2010-05-01
Computation time of water balance models has decreased with the increasing performance of CPUs within the last decades. Often, these advantages have been used to enhance the models, e. g. by enlarging spatial resolution or by using smaller simulation time steps. During the last few years, CPU development tended to focus on strong multi-core concepts rather than 'simply being generally faster'. Additionally, computer clusters or even computer clouds have become much more commonly available. All these facts again extend our degrees of freedom in simulating water balance models - if the models are able to efficiently use the computer infrastructure. In the following, we present concepts to optimize especially repeated runs, and we discuss concepts of parallel computing opportunities in general.

Surveyed model: In our examinations, we focused on the water balance model LARSIM. In this model, the catchment is subdivided into elements, each of which represents a certain section of a river and its contributory area. Each element is again subdivided into single compartments of homogeneous land use. During the simulation, the relevant hydrological processes are simulated individually for each compartment. The simulated runoff of all compartments leads into the river channel of the corresponding element. Finally, channel routing is simulated for all elements.

Optimizing repeated runs: During a typical simulation, several input files have to be read before simulation starts: the model structure, the initial model state and meteorological input files. Furthermore, some calculations have to be solved, like interpolating meteorological values. Thus, e. g. the application of Monte Carlo methods will typically use the following algorithm: 1) choose parameters, 2) set parameters in control files, 3) run model, 4) save result, 5) repeat from step 1. Obviously, the third step always includes the previously mentioned steps of reading and preprocessing. Consequently, the model can be
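The optimization implied above is to hoist the parameter-independent reading and preprocessing out of the repetition loop. A minimal sketch of the restructured algorithm (all function names are hypothetical placeholders, not LARSIM's actual interface):

    import random

    def load_and_preprocess():
        # read model structure, initial state and meteorology;
        # interpolate meteorological values once (placeholder)
        return {"structure": None, "state0": None, "meteo": None}

    def run_simulation(prepared, params):
        # only the parameter-dependent part of a model run (placeholder)
        return {"params": params, "objective": random.random()}

    prepared = load_and_preprocess()   # shared by all runs: done once
    results = [run_simulation(prepared, {"k": random.uniform(0.1, 10.0)})
               for _ in range(10_000)]
    print(min(results, key=lambda r: r["objective"])["params"])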
CDF computing and event data models
Snider, F.D.; /Fermilab
2005-12-01
The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.
Light reflection models for computer graphics.
Greenberg, D P
1989-04-14
During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future. PMID:17835348
Computational modelling of schizophrenic symptoms: basic issues.
Tretter, F; an der Heiden, U; Rujescu, D; Pogarell, O
2012-05-01
Emerging "(computational) systems medicine" challenges neuropsychiatry regarding the development of heuristic computational brain models which help to explore symptoms and syndromes of mental disorders. This methodology of exploratory modelling of mental functions and processes and of their pathology requires a clear and operational definition of the target variable (explanandum). In the case of schizophrenia, a complex and heterogeneous disorder, single psychopathological key symptoms such as working memory deficiency, hallucination or delusion need to be defined first. Thereafter, measures of brain structures can be used in a multilevel view as biological correlates of these symptoms. Then, in order to formally "explain" the symptoms, a qualitative model can be constructed. In another step, numerical values have to be integrated into the model and exploratory computer simulations can be performed. Normal and pathological functioning is to be tested in computer experiments allowing the formulation of new hypotheses and questions for empirical research. However, the crucial challenge is to point out the appropriate degree of complexity (or simplicity) of these models, which is required in order to achieve an epistemic value that might lead to new hypothetical explanatory models and could stimulate new empirical and theoretical research. Some outlines of these methodological issues are discussed here, regarding the fact that measurements alone are not sufficient to build models. PMID:22565230
Do's and Don'ts of Computer Models for Planning
ERIC Educational Resources Information Center
Hammond, John S., III
1974-01-01
Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)
Solving stochastic epidemiological models using computer algebra
NASA Astrophysics Data System (ADS)
Hincapie, Doracelly; Ospina, Juan
2011-06-01
Mathematical modeling in epidemiology is an important tool to understand the ways in which diseases are transmitted and controlled. Mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, while stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous and automatic analysis of stability, both local and global, from which it is possible to derive algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat, and the analysis of their properties requires complicated considerations in statistical mathematics. In this work we propose to use computer algebra software to solve epidemic stochastic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in updated books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, which shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results and obtain estimates for basic parameters such as the basic reproductive rate and the stochastic epidemic threshold. We claim that our algorithms and results are important tools for controlling diseases in a globalized world.
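The deterministic side of this workflow is easy to reproduce in any computer algebra system. A sketch with sympy standing in for Maple, using the textbook SIR infective equation (not necessarily the paper's exact system):

    import sympy as sp

    S, I, beta, gamma, N = sp.symbols("S I beta gamma N", positive=True)
    dI = beta * S * I / N - gamma * I    # infective equation of the SIR model

    # near the disease-free state S = N, infection grows when dI/dt > 0
    growth_rate = sp.simplify(dI.subs(S, N) / I)   # beta - gamma
    print(growth_rate)
    print(sp.solve(sp.Eq(growth_rate, 0), beta))   # threshold beta = gamma, i.e. R0 = beta/gamma = 1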
Computational modeling of laser-tissue interaction
London, R.A.; Amendt, P.; Bailey, D.S.; Eder, D.C.; Maitland, D.J.; Glinsky, M.E.; Strauss, M.; Zimmerman, G.B.
1996-05-01
Computational modeling can play an important role both in designing laser-tissue interaction experiments and in understanding the underlying mechanisms. This can lead to more rapid and less expensive development of new procedures and instruments, and to a better understanding of their operation. We have recently directed computer programs and associated expertise, developed over many years to model high intensity laser-matter interactions for fusion research, toward laser-tissue interaction problems. A program called LATIS is being developed to specifically treat laser-tissue interaction phenomena, such as highly scattering light transport, thermal coagulation, and hydrodynamic motion.
Computational algebraic geometry of epidemic models
NASA Astrophysics Data System (ADS)
Rodríguez Vega, Martín.
2014-06-01
Computational algebraic geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimensions, and Hilbert polynomials; these computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, in the Groebner bases, in the Hilbert dimensions, and in the Hilbert polynomials. It is hoped that the results obtained in this paper will prove important for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
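To make the Groebner-basis step concrete, here is a toy stand-in, in SymPy rather than the Maple used above, applied to a generic SIR equilibrium system rather than the paper's Schistosomiasis/Dengue models:

    # Groebner-basis analysis of equilibrium equations: a toy sketch.
    import sympy as sp

    S, I, beta, gamma, mu = sp.symbols('S I beta gamma mu', positive=True)

    eqs = [mu - beta * S * I - mu * S,          # dS/dt = 0 (normalized N = 1)
           beta * S * I - (gamma + mu) * I]     # dI/dt = 0

    G = sp.groebner(eqs, S, I, order='lex')
    print(G)   # the basis separates I = 0 (disease-free) from the endemic branch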
Computer modeling of commercial refrigerated warehouse facilities
Nicoulin, C.V.; Jacobs, P.C.; Tory, S.
1997-07-01
The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools represent equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to modeling refrigerated warehouse performance and analyzing energy-saving options is therefore limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented.
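The loading-door infiltration component, for instance, reduces to bookkeeping of the kind sketched below. This is a simplified, hypothetical calculation (a fixed exchange flow and illustrative numbers), not the custom TRNSYS component itself:

    # Rough door-infiltration load estimate for a refrigerated space
    # held near 5 C; every number here is an assumption for illustration.
    rho_out = 1.15      # kg/m^3, warm dock air (assumed)
    h_out = 55e3        # J/kg, enthalpy of dock air, ~25 C and humid (assumed)
    h_in = 5e3          # J/kg, enthalpy of refrigerated air, ~5 C (assumed)
    vdot = 0.8          # m^3/s, air exchange through an open door (assumed)
    open_frac = 0.10    # fraction of each hour the door stands open (assumed)

    q_infiltration = open_frac * vdot * rho_out * (h_out - h_in)  # W, average
    print(f"average infiltration load ~ {q_infiltration/1000:.1f} kW")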
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational process and material modeling of powder-bed additive manufacturing of IN 718 aims to optimize material build parameters with reduced time and cost through modeling, to increase understanding of build properties, to increase the reliability of builds, to decrease the time to adoption of the process for critical hardware, and potentially to decrease post-build heat treatments. The approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM Meltpool data; refine the Applied Optimization (AO) powder-bed AM process model using the data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; and validate the modeling with an additional build. Findings to date: photodiode intensity measurements are highly linear with power input; melt-pool intensity is highly correlated with melt-pool size; and melt-pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.
Computational Spectrum of Agent Model Simulation
Perumalla, Kalyan S
2010-01-01
The study of human social behavioral systems is finding renewed interest in military, homeland security and other applications. Simulation is the most generally applied approach to studying complex scenarios in such systems. Here, we outline some of the important considerations that underlie the computational aspects of simulation-based study of human social systems. The fundamental imprecision underlying questions and answers in social science makes it necessary to carefully distinguish among different simulation problem classes and to identify the most pertinent set of computational dimensions associated with those classes. We identify a few such classes and present their computational implications. The focus is then shifted to the most challenging combinations in the computational spectrum, namely, large-scale entity counts at moderate to high levels of fidelity. Recent developments in furthering the state-of-the-art in these challenging cases are outlined. A case study of large-scale agent simulation is provided in simulating large numbers (millions) of social entities at real-time speeds on inexpensive hardware. Recent computational results are identified that highlight the potential of modern high-end computing platforms to push the envelope with respect to speed, scale and fidelity of social system simulations. Finally, the problem of shielding the modeler or domain expert from the complex computational aspects is discussed and a few potential solution approaches are identified.
Global detailed geoid computation and model analysis
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Vincent, S.
1974-01-01
Comparisons and analyses were carried out through the use of detailed gravimetric geoids which we have computed by combining models with a set of 26,000 1 deg x 1 deg mean free air gravity anomalies. The accuracy of the detailed gravimetric geoid computed using the most recent Goddard earth model (GEM-6) in conjunction with the set of 1 deg x 1 deg mean free air gravity anomalies is assessed at + or - 2 meters on the continents of North America, Europe, and Australia, 2 to 5 meters in the Northeast Pacific and North Atlantic areas, and 5 to 10 meters in other areas where surface gravity data are sparse. The R.M.S. differences between this detailed geoid and the detailed geoids computed using the other satellite gravity fields in conjunction with the same set of surface data range from 3 to 7 meters.
Integrating interactive computational modeling in biology curricula.
Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A
2015-03-01
While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology. PMID:25790483
Computational modeling of dilute biomass slurries
NASA Astrophysics Data System (ADS)
Sprague, Michael; Stickel, Jonathan; Fischer, Paul; Lischeske, James
2012-11-01
The biochemical conversion of lignocellulosic biomass to liquid transportation fuels involves a multitude of physical and chemical transformations that occur in several distinct processing steps (e.g., pretreatment, enzymatic hydrolysis, and fermentation). In this work we focus on development of a computational fluid dynamics model of a dilute biomass slurry, which is a highly viscous particle-laden fluid that can exhibit yield-stress behavior. Here, we model the biomass slurry as a generalized Newtonian fluid that accommodates biomass transport due to settling and biomass-concentration-dependent viscosity. Within a typical mixing vessel, viscosity can vary over several orders of magnitude. We solve the model with the Nek5000 spectral-finite-element solver in a simple vane mixer, and validate against experimental results. This work is directed towards our goal of a fully coupled computational model of fluid dynamics and reaction kinetics for the enzymatic hydrolysis of lignocellulosic biomass.
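A sketch of the generalized-Newtonian closure described here, using a regularized yield-stress (Herschel-Bulkley-type) form with concentration-dependent coefficients; all constants are illustrative, not the authors' calibrated rheology:

    # Concentration-dependent, shear-thinning apparent viscosity: a sketch
    # of the kind of closure a generalized-Newtonian slurry model needs.
    import numpy as np

    def viscosity(gamma_dot, c, tau0_coef=2.0, K=0.5, n=0.45, eps=1e-3):
        """Apparent viscosity [Pa.s] vs shear rate [1/s] and solids mass
        fraction c; yield stress and consistency grow with concentration."""
        tau0 = tau0_coef * c**3          # concentration-dependent yield stress
        k = K * np.exp(6.0 * c)          # concentration-dependent consistency
        g = np.maximum(gamma_dot, eps)   # regularization avoids divide-by-zero
        return tau0 / g + k * g**(n - 1.0)

    print(viscosity(np.array([0.1, 1.0, 10.0]), c=0.08))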
Computing Linear Mathematical Models Of Aircraft
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1991-01-01
The Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides the user with a powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aircraft aerodynamics. It is intended as a software tool for the linear analysis of stability and the design of control laws for aircraft. LINEAR is capable both of extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and of including these effects in the linear model of the system. It is designed to provide easy selection of the state, control, and observation variables used in a particular model, and it also provides the flexibility of allowing alternate formulations of both the state and observation equations. Written in FORTRAN.
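The core operation such a tool automates is extraction of the Jacobians A = df/dx and B = df/du at a trim point. A generic central-difference sketch follows; the toy dynamics are purely illustrative and are not LINEAR's aerodynamic model:

    # Numerical linearization of a nonlinear state model about a trim point.
    import numpy as np

    def f(x, u):
        """Toy longitudinal dynamics: x = [V, alpha], u = [throttle]."""
        V, alpha = x
        return np.array([0.1 * u[0] - 0.02 * V**2 - 9.81 * np.sin(alpha),
                         -0.5 * alpha + 0.001 * V])

    def linearize(f, x0, u0, h=1e-6):
        """Return A = df/dx and B = df/du at (x0, u0) by central differences."""
        n, m = len(x0), len(u0)
        A = np.zeros((n, n)); B = np.zeros((n, m))
        for j in range(n):
            dx = np.zeros(n); dx[j] = h
            A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * h)
        for j in range(m):
            du = np.zeros(m); du[j] = h
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * h)
        return A, B

    A, B = linearize(f, np.array([100.0, 0.02]), np.array([0.5]))
    print(A); print(B)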
Computer Modelling of Photochemical Smog Formation
ERIC Educational Resources Information Center
Huebert, Barry J.
1974-01-01
Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)
Applications of computational modeling in ballistics
NASA Technical Reports Server (NTRS)
Sturek, Walter B.
1987-01-01
The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas was traditionally highly dependent on experimental testing. Considerable emphasis has been placed on the development of computational modeling to augment experimental testing in the development cycle; however, the impact of computational modeling to date has been modest. With the supercomputer resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, together with the modeling approaches involved. An attempt is made to give some indication of the degree of success achieved and of the areas of greatest need.
Images as a basis for computer modelling
NASA Astrophysics Data System (ADS)
Beaufils, D.; LeTouzé, J.-C.; Blondel, F.-M.
1994-03-01
New computer technologies, such as the graphics data tablet, video digitization, and numerical methods, can be used for measurement and mathematical modelling in physics. Two programs dealing with Newtonian mechanics and some related scientific activities for A-level students are described.
Informing Mechanistic Toxicology with Computational Molecular Models
Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular level information that augments, enriches, and complements in vitro and in vivo effo...
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
Automating sensitivity analysis of computer models using computer calculus
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure to numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
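The "computer calculus" idea is easiest to see in forward mode. Below is a minimal dual-number sketch in Python; GRESS itself instruments FORTRAN source at compile time, so this is an analogy for exposition, not its mechanism:

    # Forward-mode automatic differentiation with dual numbers: each value
    # carries its derivative, and arithmetic propagates both together.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def model(x):
        return 3 * x * x + 2 * x + 1   # stand-in for a code's computation

    x = Dual(2.0, 1.0)                 # seed dx/dx = 1
    y = model(x)
    print(y.val, y.der)                # 17.0 and d/dx(3x^2+2x+1)|_2 = 14.0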
Computational Modeling: From Remote Sensing to Quantum Computing
NASA Astrophysics Data System (ADS)
Healy, Dennis
2001-03-01
Recent DARPA investments have contributed to significant advances in numerically sound and computationally efficient physics-based modeling, enabling a wide variety of applications of critical interest to the DoD and Industry. Specific examples may be found in a wide variety of applications ranging from the design and operation of advanced synthetic aperture radar systems to the virtual integrated prototyping of reactors and control loops for the manufacture of thin-film functional material systems. This talk will survey the development and application of well-conditioned fast operators for particular physical problems and their critical contributions to various real world problems. We'll conclude with an indication of how these methods may contribute to exploring the revolutionary potential of quantum information theory.
Computer Model Of Fragmentation Of Atomic Nuclei
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.
1995-01-01
The High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program was developed to be a computationally efficient, user-friendly, physics-based program for generating databases on the fragmentation of atomic nuclei. The databases generated are used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. The program provides cross sections for the production of individual elements and isotopes in breakups of high-energy heavy ions by the combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.
Computer Model Predicts the Movement of Dust
NASA Technical Reports Server (NTRS)
2002-01-01
A new computer model of the atmosphere can now actually pinpoint where global dust events come from, and can project where they're going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short term day-to-day variations and long term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by the computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center
Computational models of natural language processing
Bara, B.G.; Guida, G.
1984-01-01
The main concern in this work is the illustration of models for natural language processing, and the discussion of their role in the development of computational studies of language. Topics covered include the following: competence and performance in the design of natural language systems; planning and understanding speech acts by interpersonal games; a framework for integrating syntax and semantics; knowledge representation and natural language: extending the expressive power of proposition nodes; viewing parsing as word sense discrimination: a connectionist approach; a propositional language for text representation; from topic and focus of a sentence to linking in a text; language generation by computer; understanding the Chinese language; semantic primitives or meaning postulates: mental models of propositional representations; narrative complexity based on summarization algorithms; using focus to constrain language generation; and towards an integral model of language competence.
Queuing theory models for computer networks
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
A set of simple queuing theory models, which can model the average response of a network of computers to a given traffic load, has been implemented using a spreadsheet. Because the models omit fine detail about network traffic rates, traffic patterns, and the hardware used to implement the networks, they allow quick assessment of the impact of variations in traffic patterns and intensities, channel capacities, and message protocols. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
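Such spreadsheet-level models rest on closed-form queueing arithmetic; a minimal M/M/1 sketch of the kind of cell formulas involved (channel capacity, message size, and traffic rate are illustrative assumptions, not the Ames network's parameters):

    # Average-response estimate for one channel treated as an M/M/1 queue.
    msg_bits = 8000.0          # average message length in bits (assumed)
    capacity_bps = 1.0e7       # channel capacity, 10 Mb/s (assumed)
    arrivals_per_s = 900.0     # offered traffic, messages/s (assumed)

    service_rate = capacity_bps / msg_bits                  # mu, messages/s
    rho = arrivals_per_s / service_rate                     # utilization, < 1
    mean_response = 1.0 / (service_rate - arrivals_per_s)   # M/M/1 sojourn time
    print(f"utilization {rho:.0%}, mean response {mean_response*1e3:.2f} ms")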
Concepts to accelerate water balance model computation
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Casper, Markus; Gemmar, Peter
2010-05-01
Computation time of water balance models has decreased with the increasing performance of CPUs over the last decades. Often, these gains have been used to enhance the models, e.g., by enlarging spatial resolution or by using smaller simulation time steps. During the last few years, CPU development has tended to focus on strong multi-core concepts rather than 'simply being generally faster'. Additionally, computer clusters and even computer clouds have become much more commonly available. All these facts again extend our degrees of freedom in simulating water balance models, provided the models are able to use the computer infrastructure efficiently. In the following, we present concepts to optimize repeated runs in particular, and we discuss concepts of parallel computing opportunities in general.

Surveyed model: In our examinations, we focused on the water balance model LARSIM. In this model, the catchment is subdivided into elements, each of which represents a certain section of a river and its contributory area. Each element is again subdivided into single compartments of homogeneous land use. During the simulation, the relevant hydrological processes are simulated individually for each compartment. The simulated runoff of all compartments leads into the river channel of the corresponding element. Finally, channel routing is simulated for all elements.

Optimizing repeated runs: During a typical simulation, several input files have to be read before the simulation starts: the model structure, the initial model state, and meteorological input files. Furthermore, some calculations have to be performed, such as interpolating meteorological values. Thus, e.g., the application of Monte Carlo methods will typically use the following algorithm: 1) choose parameters, 2) set parameters in control files, 3) run model, 4) save result, 5) repeat from step 1. Obviously, the third step always includes the previously mentioned steps of reading and preprocessing. Consequently, the model can be
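The restructuring implied by the algorithm above is simply to hoist the steps that do not depend on the sampled parameters out of the loop. A schematic sketch (read_inputs, interpolate_forcing, run_model, and sample_parameters are placeholders, not LARSIM's actual interface):

    # Monte-Carlo driver with input reading and preprocessing hoisted out
    # of the repeated-run loop, so each run pays only for the simulation.
    def monte_carlo(n_runs, sample_parameters, read_inputs,
                    interpolate_forcing, run_model):
        state = read_inputs()                 # model structure, initial state
        forcing = interpolate_forcing(state)  # meteorological preprocessing
        results = []
        for _ in range(n_runs):
            params = sample_parameters()      # step 1: choose parameters
            # steps 2-4: run on the cached inputs instead of re-reading files
            results.append(run_model(state, forcing, params))
        return results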
Computer modeling of heat pipe performance
NASA Technical Reports Server (NTRS)
Peterson, G. P.
1983-01-01
A parametric study of the defining equations which govern the steady state operational characteristics of the Grumman monogroove dual passage heat pipe is presented. These defining equations are combined to develop a mathematical model which describes and predicts the operational and performance capabilities of a specific heat pipe given the necessary physical characteristics and working fluid. Included is a brief review of the current literature, a discussion of the governing equations, and a description of both the mathematical and computer model. Final results of preliminary test runs of the model are presented and compared with experimental tests on actual prototypes.
Molecular Sieve Bench Testing and Computer Modeling
NASA Technical Reports Server (NTRS)
Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.
1995-01-01
The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to changes in the parameters which influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain the test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed plug flow models, with a linear driving force for the 5X sorbent and pore diffusion for the silica gel, are then applied to test data. A more complex, non-Darcian (two-dimensional) model has also been developed for simulation of the test data; this model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
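The linear-driving-force closure named above is compact enough to sketch for a single well-mixed node; the mass-transfer coefficient, linear isotherm slope, and gas concentration below are illustrative values, not the calibrated 5X parameters:

    # Linear-driving-force (LDF) sorbent uptake: dq/dt = k_LDF * (q* - q),
    # integrated explicitly for one node at fixed gas-phase concentration.
    k_ldf = 2.0e-3      # 1/s, lumped mass-transfer coefficient (assumed)
    K_iso = 50.0        # linear isotherm slope, q* = K_iso * c (assumed)
    c_gas = 4.0e-4      # kg CO2 per m^3 gas, held constant here (assumed)

    dt, t_end = 1.0, 3600.0
    q = 0.0             # sorbent loading
    for _ in range(int(t_end / dt)):
        q_star = K_iso * c_gas            # equilibrium loading at local c
        q += dt * k_ldf * (q_star - q)    # LDF update
    print(f"loading after 1 h: {q:.4f} (equilibrium {K_iso*c_gas:.4f})")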
GPU Computing in Space Weather Modeling
NASA Astrophysics Data System (ADS)
Feng, X.; Zhong, D.; Xiang, C.; Zhang, Y.
2013-04-01
Space weather refers to conditions on the Sun and in the solar wind, magnetosphere, ionosphere, and thermosphere that can influence the performance and reliability of space-borne and ground-based technological systems and that can affect human life or health. In order to make real-time or faster-than-real-time numerical predictions of adverse space weather events and their influence on the geospace environment, high-performance computational models are required. The main objective of this article is to explore the application of programmable graphics processing units (GPUs) to numerical space weather modeling, specifically to the study of the background solar wind, which is a crucial part of numerical space weather modeling. GPU programming is realized for our Solar-Interplanetary-CESE MHD model (SIP-CESE MHD model) by numerically studying the solar corona and interplanetary solar wind. The global solar wind structure is obtained by the established GPU model with magnetic field synoptic data as input. The simulated global structure for Carrington rotation 2060 agrees well with solar observations and with solar wind measurements from spacecraft near the Earth. The model's implementation of adaptive mesh refinement (AMR) and the message passing interface (MPI) enables full exploitation of the computing power in a heterogeneous CPU/GPU cluster and significantly improves the overall performance. Our initial tests with available hardware show speedups of roughly 5x compared to the traditional software implementation. This work presents a novel application of GPUs to space weather studies.
Computational Modeling of Pollution Transmission in Rivers
NASA Astrophysics Data System (ADS)
Parsaie, Abbas; Haghiabi, Amir Hamzeh
2015-08-01
Modeling of river pollution contributes to better management of water quality, and this will lead to improvements in human health. The advection-dispersion equation (ADE) is the governing equation for pollutant transmission in rivers. Modeling pollution transmission involves the numerical solution of the ADE and the estimation of the longitudinal dispersion coefficient (LDC). In this paper, a novel approach is proposed for numerical modeling of pollution transmission in rivers, combining the finite volume method as the numerical method with an artificial neural network (ANN) as a soft computing technique. In this approach, the ANN's prediction of the LDC is used as an input parameter for the numerical solution of the ADE. To validate the model's performance on real engineering problems, pollutant transmission in the Severn River was simulated. Comparison of the final model results with measured data from the Severn River showed that the model performs well. Predicting the LDC with the ANN model significantly improved the accuracy of the computer simulation of pollution transmission in the river.
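A minimal sketch of the numerical half of this approach, an explicit finite-volume step for the 1-D ADE; in the paper the dispersion coefficient D would come from the trained ANN rather than the constant assumed here, and all numbers are illustrative:

    # Explicit conservative update for dc/dt + d(u*c)/dx = d(D*dc/dx)/dx.
    import numpy as np

    L, nx = 1000.0, 200          # reach length [m], number of cells
    dx = L / nx
    u, D = 0.5, 5.0              # velocity [m/s], LDC [m^2/s] (assumed)
    dt = 0.4 * min(dx / u, dx * dx / (2 * D))   # stable time step

    c = np.zeros(nx); c[0:5] = 1.0              # initial pollutant slug
    for _ in range(500):
        flux = u * c - D * np.gradient(c, dx)   # advective + dispersive flux
        c -= dt * np.gradient(flux, dx)         # conservative update
        c[0] = c[-1] = 0.0                      # open boundaries (simplified)
    print(f"peak concentration {c.max():.3f} at x = {c.argmax()*dx:.0f} m")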
Validation of Computational Models in Biomechanics
Henninger, Heath B.; Reese, Shawn P.; Anderson, Andrew E.; Weiss, Jeffrey A.
2010-01-01
The topics of verification and validation (V&V) have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. V&V are evolving techniques that, if used improperly, can lead to false conclusions about a system under study. In basic science these erroneous conclusions may lead to failure of a subsequent hypothesis, but they can have more profound effects if the model is designed to predict patient outcomes. While several authors have reviewed V&V as they pertain to traditional solid and fluid mechanics, it is the intent of this manuscript to present them in the context of computational biomechanics. Specifically, the task of model validation will be discussed with a focus on current techniques. It is hoped that this review will encourage investigators to engage and adopt the V&V process in an effort to increase peer acceptance of computational biomechanics models. PMID:20839648
Modelling amorphous computations with transcription networks
Simpson, Zack Booth; Tsai, Timothy L.; Nguyen, Nam; Chen, Xi; Ellington, Andrew D.
2009-01-01
The power of electronic computation is due in part to the development of modular gate structures that can be coupled to carry out sophisticated logical operations and whose performance can be readily modelled. However, the equivalences between electronic and biochemical operations are far from obvious. In order to help cross between these disciplines, we develop an analogy between complementary metal oxide semiconductor and transcriptional logic gates. We surmise that these transcriptional logic gates might prove to be useful in amorphous computations and model the abilities of immobilized gates to form patterns. Finally, to begin to implement these computations, we design unique hairpin transcriptional gates and then characterize these gates in a binary latch similar to that already demonstrated by Kim et al. (Kim, White & Winfree 2006 Mol. Syst. Biol. 2, 68 (doi:10.1038/msb4100099)). The hairpin transcriptional gates are uniquely suited to the design of a complementary NAND gate that can serve as an underlying basis of molecular computing that can output matter rather than electronic information. PMID:19474083
Computational Modeling of Mammalian Signaling Networks
Hughey, Jacob J; Lee, Timothy K; Covert, Markus W
2011-01-01
One of the most exciting developments in signal transduction research has been the proliferation of studies in which a biological discovery was initiated by computational modeling. Here we review the major efforts that enable such studies. First, we describe the experimental technologies that are generally used to identify the molecular components and interactions in, and dynamic behavior exhibited by, a network of interest. Next, we review the mathematical approaches that are used to model signaling network behavior. Finally, we focus on three specific instances of “model-driven discovery”: cases in which computational modeling of a signaling network has led to new insights which have been verified experimentally. Signal transduction networks are the bridge between the extraordinarily complex extracellular environment and a carefully orchestrated cellular response. These networks are largely composed of proteins which can interact, move to specific cellular locations, or be modified or degraded. The integration of these events often leads to the activation or inactivation of transcription factors, which then induce or repress the expression of thousands of genes. Because of this critical role in translating environmental cues to cellular behaviors, malfunctioning signaling networks can lead to a variety of pathologies. One example is cancer, in which many of the key genes found to be involved in cancer onset and development are components of signaling pathways [1, 2]. A detailed understanding of the cellular signaling networks underlying such diseases would likely be extremely useful in developing new treatments. However, the complexity of signaling networks is such that their integrated functions cannot be determined without computational simulation. In recent years, mathematical modeling of signal transduction has led to some exciting new findings and biological discoveries. Here, we review the work that has enabled computational modeling of mammalian
Multiscale Computational Models of Complex Biological Systems
Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.
2014-01-01
Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.
1993-01-01
Over the past several years, it has been the primary goal of this grant to design and implement software to be used in the conceptual design of aerospace vehicles. The work carried out under this grant was performed jointly with members of the Vehicle Analysis Branch (VAB) of NASA LaRC, Computer Sciences Corp., and Vigyan Corp. This has resulted in the development of several packages and design studies. Primary among these are the interactive geometric modeling tool, the Solid Modeling Aerospace Research Tool (smart), and the integration and execution tools provided by the Environment for Application Software Integration and Execution (EASIE). In addition, it is the purpose of the personnel of this grant to provide consultation in the areas of structural design, algorithm development, and software development and implementation, particularly in the areas of computer aided design, geometric surface representation, and parallel algorithms.
Computer modeling of a compact isochronous cyclotron
NASA Astrophysics Data System (ADS)
Smirnov, V. L.
2015-11-01
Computer modeling methods for a compact isochronous cyclotron are described, and the main stages in the analysis of accelerator systems are considered. The methods are based on the theoretical fundamentals of cyclotron physics and cover the highlights of creating the physical design of a compact cyclotron. The main attention is paid to the analysis of beam dynamics, the formation of the magnetic field, the stability of the motion, and a realistic assessment of the intensity of the generated particle bunch. The article describes the stages of development of the accelerator computer model, analytical methods for assessing accelerator parameters, and the basic technique for the numerical analysis of particle dynamics.
Computational fluid dynamics modelling in cardiovascular medicine
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
Wild Fire Computer Model Helps Firefighters
Canfield, Jesse
2012-09-04
A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.
Computed structures of polyimide model compounds
NASA Technical Reports Server (NTRS)
Tai, H.; Phillips, D. H.
1990-01-01
Using a semi-empirical approach, a computer study was made of 8 model compounds of polyimides. The compounds represent subunits from which NASA Langley Research Center has successfully synthesized polymers for high-performance aerospace material applications, including one of the most promising, the LARC-TPI polymer. Three-dimensional graphic displays, as well as important molecular structure data pertaining to these 8 compounds, are obtained.
Computational models of human vision with applications
NASA Technical Reports Server (NTRS)
Wandell, B. A.
1985-01-01
Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool of image processing, and it is suggested that it could be used in signal processing applications such as image processing.
COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS
Ibrahim, Essam A
2013-01-09
Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, an exit, connecting elbows, and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe heights to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are employed in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of the solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady-state and transient solutions for flow modulations in the riser. The physical dimensions, the types and numbers of computational meshes, and the solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions, are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.
Mathematical and computational models of plasma flows
NASA Astrophysics Data System (ADS)
Brushlinsky, K. V.
Investigations of plasma flows are of interest, first, due to numerous applications and, second, because of their general principles, which form a special branch of physics: plasma dynamics. Numerical simulation and computation, together with theoretical and experimental methods, play an important part in these investigations. The flows considered involve a relatively dense plasma, so the mathematical models belong to fluid mechanics, i.e., they are based on the magnetohydrodynamic description of plasma. Time-dependent, two-dimensional models of plasma flows of two widespread types are considered: flows across the magnetic field and flows in the plane of the magnetic field.
ADGEN: ADjoint GENerator for computer models
Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.
1989-05-01
This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
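The economy of the adjoint method is easiest to see in the linear case: one adjoint solve yields the derivatives of a response with respect to all inputs at once. A toy numpy sketch (the 3x3 system is illustrative, not PRESTO-II):

    # Adjoint sensitivities for J = c.T @ x with A x = b:
    # solving A.T lam = c once gives dJ/db_i = lam_i for every i.
    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])
    c = np.array([0.0, 0.0, 1.0])     # response picks out x[2]

    x = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, c)     # the single adjoint solve
    print("J =", c @ x, " dJ/db =", lam)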
Computational fire modeling for aircraft fire research
Nicolette, V.F.
1996-11-01
This report summarizes work performed by Sandia National Laboratories for the Federal Aviation Administration. The technical issues involved in fire modeling for aircraft fire research are identified, as well as the computational fire tools for addressing those issues and the research which is needed to advance those tools in order to address long-range needs. Fire field models are briefly reviewed, and the VULCAN model is selected for further evaluation. Calculations are performed with VULCAN to demonstrate its applicability to aircraft fire problems and to gain insight into the complex problem of fires involving aircraft. Simulations are conducted to investigate the influence of fire on an aircraft in a cross-wind. The interaction of the fuselage, wind, fire, and ground plane is investigated. Calculations are also performed utilizing a large eddy simulation (LES) capability to describe the large-scale turbulence instead of the more common k-{epsilon} turbulence model. Additional simulations are performed to investigate the static pressure and velocity distributions around a fuselage in a cross-wind, with and without fire. The results of these simulations provide qualitative insight into the complex interaction of a fuselage, fire, wind, and ground plane. Reasonable quantitative agreement is obtained in the few cases for which data or other modeling results exist. Finally, VULCAN is used to quantify the impact of simplifying assumptions inherent in a risk-assessment-compatible fire model developed for open pool fire environments. The assumptions are seen to be of minor importance for the particular problem analyzed. This work demonstrates the utility of using a fire field model for assessing the limitations of simplified fire models. In conclusion, the application of computational fire modeling tools herein provides both qualitative and quantitative insights into the complex problem of aircraft in fires.
Computational acoustic modeling of cetacean vocalizations
NASA Astrophysics Data System (ADS)
Gurevich, Michael Dixon
A framework for computational acoustic modeling of hypothetical vocal production mechanisms in cetaceans is presented. As a specific example, a model of a proposed source in the larynx of odontocetes is developed. Whales and dolphins generate a broad range of vocal sounds, but the exact mechanisms they use are not conclusively understood. In the fifty years since it has become widely accepted that whales can and do make sound, how they do so has remained particularly confounding. Cetaceans' highly divergent respiratory anatomy, along with the difficulty of internal observation during vocalization have contributed to this uncertainty. A variety of acoustical, morphological, ethological and physiological evidence has led to conflicting and often disputed theories of the locations and mechanisms of cetaceans' sound sources. Computational acoustic modeling has been used to create real-time parametric models of musical instruments and the human voice. These techniques can be applied to cetacean vocalizations to help better understand the nature and function of these sounds. Extensive studies of odontocete laryngeal morphology have revealed vocal folds that are consistently similar to a known but poorly understood acoustic source, the ribbon reed. A parametric computational model of the ribbon reed is developed, based on simplified geometrical, mechanical and fluid models drawn from the human voice literature. The physical parameters of the ribbon reed model are then adapted to those of the odontocete larynx. With reasonable estimates of real physical parameters, both the ribbon reed and odontocete larynx models produce sounds that are perceptually similar to their real-world counterparts, and both respond realistically under varying control conditions. Comparisons of acoustic features of the real-world and synthetic systems show a number of consistencies. While this does not on its own prove that either model is conclusively an accurate description of the source, it
Computational Modeling in Structural Materials Processing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in the aerospace, automotive, and machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, deposition inside cavities and on complex objects, and infiltration within preforms, called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, our ability to optimize processes, and scale-up for large scale manufacturing are limited. In this regard, computational modeling of the processes is valuable, since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.
Ferrofluids: Modeling, numerical analysis, and scientific computation
NASA Astrophysics Data System (ADS)
Tomas, Ignacio
This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the micropolar model proposed by R.E. Rosensweig. The micropolar Navier-Stokes equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
Computational Statistical Methods for Social Network Models
Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael
2013-01-01
We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720
A Neural Computational Model of Incentive Salience
Zhang, Jun; Berridge, Kent C.; Tindell, Amy J.; Smith, Kyle S.; Aldridge, J. Wayne
2009-01-01
Incentive salience is a motivational property with ‘magnet-like’ qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of ‘wanting’ and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered ‘wanting’ only by
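A deliberately minimal sketch of the paper's central idea, a cached learned value re-weighted on the fly by a physiological gain factor (here called kappa), follows; the function and the numbers are illustrative stand-ins, not the authors' published equations.

```python
def incentive_salience(cached_value, kappa):
    """Toy modulation of a learned (cached) cue value by a
    physiological gain factor kappa (kappa > 1: relevant appetite
    elevated, e.g. salt depletion; kappa < 1: devalued state).

    A simplified sketch of dynamic revaluation, not the
    published model equations.
    """
    return kappa * cached_value

# A cue learned under normal physiology keeps its cached TD value,
# but its momentary 'wanting' is recomputed at re-encounter:
cached = 1.0  # value acquired during training
for state, kappa in [("normal", 1.0), ("salt appetite", 4.0), ("sated", 0.5)]:
    print(f"{state:13s} -> salience = {incentive_salience(cached, kappa):.2f}")
```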
Computational Model of Fluorine-20 Experiment
NASA Astrophysics Data System (ADS)
Chuna, Thomas; Voytas, Paul; George, Elizabeth; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Weisshaar, Dirk; Paulauskas, Stanley; Ban, Gilles; Flechard, Xavier; Lienard, Etienne
2015-10-01
The Conserved Vector Current (CVC) hypothesis of the standard model of the electroweak interaction predicts there is a contribution to the shape of the spectrum in the beta-minus decay of 20F related to a property of the analogous gamma decay of excited 20Ne. To provide a strong test of the CVC hypothesis, a precise measurement of the 20F beta decay spectrum will be taken at the National Superconducting Cyclotron Laboratory. This measurement uses an unconventional technique in which 20F is implanted directly into a scintillator. As the emitted electrons interact with the detector material, bremsstrahlung interactions occur, and the escape of the resultant photons distorts the measured spectrum. Thus, a Monte Carlo simulation has been constructed using the EGSnrc radiation transport software. This computational model's intended use is to quantify and correct for distortion in the observed beta spectrum due, primarily, to the aforementioned bremsstrahlung. The focus of this presentation is twofold: the analysis of the computational model itself and the results produced by the model.
Computational fluid dynamic modelling of cavitation
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.
1993-01-01
Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The approach also allows thermodynamic effects of cryogenic fluids to be incorporated into the analysis. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.
Modeling Reality - How Computers Mirror Life
NASA Astrophysics Data System (ADS)
Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona
2005-01-01
The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems, but who does not possess a specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well-known simple concepts: cellular automata via the Game of Life, Shannon's formula via the game of twenty questions, game theory via a television quiz, etc. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning even the more complex topics a pleasure.
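As a taste of the book's staple examples, here is a minimal Conway's Game of Life update (our own sketch; the CD programs accompanying the book are not reproduced here):

```python
import numpy as np

def life_step(grid):
    """One update of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbors of every cell via wrap-around shifts.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A cell lives if it has 3 neighbors, or 2 neighbors and is alive.
    return (n == 3) | ((n == 2) & (grid == 1))

glider = np.zeros((8, 8), dtype=int)
glider[[0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1  # the classic glider
for _ in range(4):  # after 4 steps the glider has moved one cell diagonally
    glider = life_step(glider).astype(int)
print(glider)
```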
Computer Model Used to Help Customize Medicine
NASA Technical Reports Server (NTRS)
Stauber, Laurel J.; Veris, Jenise
2001-01-01
Dr. Radhakrishnan, a researcher at the NASA Glenn Research Center, in collaboration with biomedical researchers at the Case Western Reserve University School of Medicine and Rainbow Babies and Children's Hospital, is developing computational models of human physiology that quantitate metabolism and its regulation, in both healthy and pathological states. These models can help predict the effects of stresses or interventions, such as drug therapies, and contribute to the development of customized medicine. Customized medical treatment protocols can give more comprehensive evaluations and lead to more specific and effective treatments for patients, reducing treatment time and cost. Commercial applications of this research may help the pharmaceutical industry identify therapeutic needs and predict drug-drug interactions. Researchers will be able to study human metabolic reactions to particular treatments while in different environments as well as establish more definite blood metabolite concentration ranges in normal and pathological states. These computational models may help NASA provide the background for developing strategies to monitor and safeguard the health of astronauts and civilians in space stations and colonies. They may also help to develop countermeasures that ameliorate the effects of both acute and chronic space exposure.
Databases for Computational Thermodynamics and Diffusion Modeling
NASA Astrophysics Data System (ADS)
Kattner, U. R.; Boettinger, W. J.; Morral, J. E.
2002-11-01
Databases for computational thermodynamics and diffusion modeling can be applied to predict phase diagrams for alloy design and alloy behavior during processing and service. Databases that are currently available to scientists, engineers and students need to be expanded and improved. The approach of the workshop was to first identify the database and information delivery tool needs of industry and education. The workshop format was a series of invited talks given to the group as a whole followed by general discussions of needs and benefits to provide a roadmap of future activities.
Some queuing network models of computer systems
NASA Technical Reports Server (NTRS)
Herndon, E. S.
1980-01-01
Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
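The SR-52 programs themselves are not reproduced in the abstract; as a modern stand-in, exact mean value analysis for a closed, single-class queueing network, the standard textbook algorithm for models of this kind, fits in a few lines (device counts and service times below are made up for illustration):

```python
def mva(service, visits, n_jobs):
    """Exact mean value analysis for a closed, single-class,
    product-form queueing network of queueing centers.

    service[k]: mean service time at device k
    visits[k]:  visit ratio to device k
    Returns system throughput and per-device mean queue lengths.
    """
    K = len(service)
    q = [0.0] * K  # mean queue lengths with population 0
    for n in range(1, n_jobs + 1):
        # Residence time at each device, given the queues seen by an
        # arriving job (arrival theorem: queues at population n - 1).
        r = [visits[k] * service[k] * (1.0 + q[k]) for k in range(K)]
        x = n / sum(r)                  # throughput via Little's law
        q = [x * r[k] for k in range(K)]
    return x, q

x, q = mva(service=[0.02, 0.05, 0.08], visits=[1.0, 0.6, 0.4], n_jobs=6)
print(f"throughput = {x:.2f} jobs/s, queues = {[round(v, 2) for v in q]}")
```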
A Computational Model of Multidimensional Shape
Liu, Xiuwen; Shi, Yonggang; Dinov, Ivo
2010-01-01
We develop a computational model of shape that extends existing Riemannian models of curves to multidimensional objects of general topological type. We construct shape spaces equipped with geodesic metrics that measure how costly it is to interpolate two shapes through elastic deformations. The model employs a representation of shape based on the discrete exterior derivative of parametrizations over a finite simplicial complex. We develop algorithms to calculate geodesics and geodesic distances, as well as tools to quantify local shape similarities and contrasts, thus obtaining a formulation that accounts for regional differences and integrates them into a global measure of dissimilarity. The Riemannian shape spaces provide a common framework to treat numerous problems such as the statistical modeling of shapes, the comparison of shapes associated with different individuals or groups, and modeling and simulation of shape dynamics. We give multiple examples of geodesic interpolations and illustrations of the use of the models in brain mapping, particularly, the analysis of anatomical variation based on neuroimaging data. PMID:21057668
Computational social dynamic modeling of group recruitment.
Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.
2004-01-01
The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (a gang) or institutional concepts (a school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, the abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.
Modelling the penumbra in Computed Tomography
Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.
2016-01-01
BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer's law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
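To see how such a spot model blurs an edge, one can convolve an ideal step edge with a Gaussian-plus-uniform spot profile; the mixture weight and widths below are arbitrary illustrative values, not the paper's fitted parameters:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 2001)
dx = x[1] - x[0]
edge = (x > 0).astype(float)  # ideal sharp edge (step function)

# Spot model: narrow Gaussian plus a wider uniform pedestal.
# The mixture weight w = 0.2 is an arbitrary illustrative choice.
sigma, half_width, w = 0.05, 0.5, 0.2
gauss = np.exp(-x**2 / (2 * sigma**2))
uniform = (np.abs(x) <= half_width).astype(float)
spot = (1 - w) * gauss / gauss.sum() + w * uniform / uniform.sum()

penumbra = np.convolve(edge, spot, mode="same")  # blurred edge profile
# Width over which the profile rises from 10% to 90%:
i10 = np.argmax(penumbra >= 0.1)
i90 = np.argmax(penumbra >= 0.9)
print(f"10-90% penumbra width ~ {(i90 - i10) * dx:.3f} (same units as x)")
```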
Computational modeling approaches in gonadotropin signaling.
Ayoub, Mohammed Akli; Yvinec, Romain; Crépieux, Pascale; Poupon, Anne
2016-07-01
Follicle-stimulating hormone and LH play essential roles in animal reproduction. They exert their function through binding to their cognate receptors, which belong to the large family of G protein-coupled receptors. This recognition at the plasma membrane triggers a plethora of cellular events, whose processing and integration ultimately lead to an adapted biological response. Understanding the nature and the kinetics of these events is essential for innovative approaches in drug discovery. The study and manipulation of such complex systems requires the use of computational modeling approaches combined with robust in vitro functional assays for calibration and validation. Modeling brings a detailed understanding of the system and can also be used to understand why existing drugs do not work as well as expected, and how to design more efficient ones. PMID:27165991
Computational models of intergroup competition and warfare.
Letendre, Kenneth; Abbott, Robert G.
2011-11-01
This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.
A COMPUTATIONAL MODEL OF MOTOR NEURON DEGENERATION
Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.
2014-01-01
To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365
Computer modeling of thermoelectric generator performance
NASA Technical Reports Server (NTRS)
Chmielewski, A. B.; Shields, V.
1982-01-01
Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data within 0.3% of test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
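The DEGRA 2 inputs are not reproduced in the abstract, but the couple-level electrical model is standard circuit physics; a toy calculation with made-up material values (not DEGRA 2 data) illustrates the load matching involved:

```python
def teg_power(seebeck, r_internal, r_load, delta_t):
    """Electrical output of an idealized thermoelectric couple.

    Open-circuit voltage V = S * dT; the load current follows from
    the series circuit. All values are illustrative placeholders.
    """
    v_oc = seebeck * delta_t                # volts
    current = v_oc / (r_internal + r_load)  # amps
    return current**2 * r_load              # watts delivered to the load

# Peak power transfer occurs when r_load == r_internal:
for r_load in (0.5, 1.0, 2.0):
    p = teg_power(seebeck=2e-4, r_internal=1.0, r_load=r_load, delta_t=500.0)
    print(f"R_load = {r_load:.1f} ohm -> P = {p * 1000:.2f} mW")
```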
Direct modeling for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun
2015-06-01
All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct
Electrode Models for Electric Current Computed Tomography
CHENG, KUO-SHENG; ISAACSON, DAVID; NEWELL, J. C.; GISSER, DAVID G.
2016-01-01
This paper develops a mathematical model for the physical properties of electrodes suitable for use in electric current computed tomography (ECCT). The model includes the effects of discretization, shunt, and contact impedance. The complete model was validated by experiment. Bath resistivities of 284.0, 139.7, 62.3, and 29.5 Ω · cm were studied. Values of “effective” contact impedance z used in the numerical approximations were 58.0, 35.0, 15.0, and 7.5 Ω · cm², respectively. Agreement between the calculated and experimentally measured values was excellent throughout the range of bath conductivities studied. It is desirable in electrical impedance imaging systems to model the observed voltages to the same precision as they are measured in order to be able to make the highest resolution reconstructions of the internal conductivity that the measurement precision allows. The complete electrode model, which includes the effects of discretization of the current pattern, the shunt effect due to the highly conductive electrode material, and the effect of an “effective” contact impedance, allows calculation of the voltages due to any current pattern applied to a homogeneous resistivity field. PMID:2777280
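In the notation this model is now usually written in, the complete electrode model solves

```latex
\begin{aligned}
\nabla\cdot(\sigma\nabla u) &= 0 && \text{in } \Omega, \\
u + z_\ell\,\sigma\frac{\partial u}{\partial n} &= U_\ell && \text{on electrode } e_\ell, \\
\int_{e_\ell}\sigma\frac{\partial u}{\partial n}\,dS &= I_\ell && \text{(current injected through } e_\ell), \\
\sigma\frac{\partial u}{\partial n} &= 0 && \text{on the boundary between electrodes,}
\end{aligned}
```

subject to charge conservation, Σ_ℓ I_ℓ = 0, where z_ℓ is the effective contact impedance of electrode e_ℓ and U_ℓ its measured voltage.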
Computer Modeling of Non-Isothermal Crystallization
NASA Technical Reports Server (NTRS)
Kelton, K. F.; Narayan, K. Lakshmi; Levine, L. E.; Cull, T. C.; Ray, C. S.
1996-01-01
A realistic computer model for simulating isothermal and non-isothermal phase transformations proceeding by homogeneous and heterogeneous nucleation and interface-limited growth is presented. A new treatment for particle size effects on the crystallization kinetics is developed and is incorporated into the numerical model. Time-dependent nucleation rates, size-dependent growth rates, and surface crystallization are also included. Model predictions are compared with experimental measurements of DSC/DTA peak parameters for the crystallization of lithium disilicate glass as a function of particle size, Pt doping levels, and water content. The quantitative agreement that is demonstrated indicates that the numerical model can be used to extract key kinetic data from easily obtained calorimetric data. The model can also be used to probe nucleation and growth behavior in regimes that are otherwise inaccessible. Based on a fit to data, an earlier prediction that the time-dependent nucleation rate in a DSC/DTA scan can rise above the steady-state value at a temperature higher than the peak in the steady-state rate is demonstrated.
Multiscale computational modelling of the heart
NASA Astrophysics Data System (ADS)
Smith, N. P.; Nickerson, D. P.; Crampin, E. J.; Hunter, P. J.
A computational framework is presented for integrating the electrical, mechanical and biochemical functions of the heart. Finite element techniques are used to solve the large-deformation soft tissue mechanics using orthotropic constitutive laws based on the measured fibre-sheet structure of myocardial (heart muscle) tissue. The reaction-diffusion equations governing electrical current flow in the heart are solved on a grid of deforming material points which access systems of ODEs representing the cellular processes underlying the cardiac action potential. Navier-Stokes equations are solved for coronary blood flow in a system of branching blood vessels embedded in the deforming myocardium, and the delivery of oxygen and metabolites is coupled to the energy-dependent cellular processes. The framework presented here for modelling coupled physical conservation laws at the tissue and organ levels is also appropriate for other organ systems in the body, and we briefly discuss applications to the lungs and the musculo-skeletal system. The computational framework is also designed to reach down to subcellular processes, including signal transduction cascades and metabolic pathways as well as ion channel electrophysiology, and we discuss the development of ontologies and markup language standards that will help link the tissue and organ level models to the vast array of gene and protein data that are now available in web-accessible databases.
Efficient gradient computation for dynamical models
Sengupta, B.; Friston, K.J.; Penny, W.D.
2014-01-01
Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters. PMID:24769182
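A self-contained comparison on a toy scalar ODE (our own illustration; the model, data, and step sizes are arbitrary) shows the adjoint recipe described above: solve the state forward, solve the adjoint backward, then accumulate the parameter gradient.

```python
import numpy as np

# Toy model: dx/dt = -theta * x, cost J = 0.5 * integral (x - d)^2 dt.
T, nt = 5.0, 5000
dt = T / nt
x0, theta, d = 2.0, 0.7, 1.0

def forward(th):
    x = np.empty(nt + 1)
    x[0] = x0
    for i in range(nt):                       # explicit Euler
        x[i + 1] = x[i] + dt * (-th * x[i])
    return x

def cost(x):
    return 0.5 * np.sum((x - d) ** 2) * dt

# Adjoint: integrate lam' = theta*lam - (x - d) backward with lam(T) = 0;
# then dJ/dtheta = integral lam * (-x) dt, since df/dtheta = -x.
x = forward(theta)
lam = np.zeros(nt + 1)
for i in range(nt, 0, -1):
    lam[i - 1] = lam[i] - dt * (theta * lam[i] - (x[i] - d))
grad_adjoint = np.sum(lam * (-x)) * dt

eps = 1e-6                                    # central finite difference
grad_fd = (cost(forward(theta + eps)) - cost(forward(theta - eps))) / (2 * eps)
print(f"adjoint: {grad_adjoint:.6f}   finite difference: {grad_fd:.6f}")
```

The two estimates agree to discretization accuracy, and the adjoint requires only one extra (backward) solve regardless of how many parameters the model has, which is the efficiency argument the paper makes.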
Computational modeling of intraocular gas dynamics.
Noohi, P; Abdekhodaie, M J; Cheng, Y L
2015-01-01
The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted the intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, either pure or diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size. More bubble expansion and a smaller retinal tear gave a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle using pure SF6 is 1.4 times that using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency. PMID:26682529
Preliminary Phase Field Computational Model Development
Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep
2014-12-15
This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses the inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and the effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
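The governing equation named above, the Landau-Lifshitz-Gilbert equation, reads

```latex
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s}\,\mathbf{M}\times\frac{\partial \mathbf{M}}{\partial t},
```

where γ is the gyromagnetic ratio, α the Gilbert damping parameter, M_s the saturation magnetization, and H_eff the effective field obtained from the variational derivative of the free energy.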
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
ERIC Educational Resources Information Center
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. Studying computer modeling is necessary because current trends toward strengthening the general-education and worldview functions of computer science call for additional research of the…
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
A computational model for dynamic vision
NASA Technical Reports Server (NTRS)
Moezzi, Saied; Weymouth, Terry E.
1990-01-01
This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore, the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed, object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents are encoded in image-registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.
Comprehensive silicon solar cell computer modeling
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1984-01-01
The development of an efficient, comprehensive Si solar cell modeling program that has the capability of simulation accuracy of 5 percent or less is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) analytical method used to represent the physical system; (2) phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) modularized simulation program with respect to structures that may be analyzed, addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.
Modeling groundwater flow on massively parallel computers
Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.
1994-12-31
The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They will also demonstrate the code's scalability.
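Of the preconditioners listed, diagonal (Jacobi) scaling is the simplest to sketch; below is a generic preconditioned conjugate gradient loop applied to a stand-in 1D Poisson system (our own illustration, unrelated to the LLNL code):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients with Jacobi scaling,
    i.e. M^{-1} = diag(A)^{-1}. A must be symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1D Poisson test matrix as a small stand-in for the 3D flow system:
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print(f"converged in {iters} iterations, "
      f"residual = {np.linalg.norm(b - A @ x):.2e}")
```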
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.
1992-01-01
The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand alone analysis codes that result in the streamlining of the exchange of data between programs, reducing errors and improving efficiency.
Computational model of heterogeneous heating in melanin
NASA Astrophysics Data System (ADS)
Kellicker, Jason; DiMarzio, Charles A.; Kowalski, Gregory J.
2015-03-01
Melanin particles often present as aggregates of smaller melanin pigment granules and have a heterogeneous surface morphology. When irradiated with light within the absorption spectrum of melanin, these heterogeneities produce measurable concentrations of the electric field that result in temperature gradients from thermal effects that are not seen with spherical or ellipsoidal models of melanin. Modeling melanin without taking the heterogeneous surface morphology into consideration yields results that underestimate the strongest signals or overestimate their spatial extent. We present a new technique to image phase changes induced by heating using a computational model of melanin that exhibits these surface heterogeneities. From this analysis, we demonstrate the heterogeneous energy absorption and resulting heating that occurs at the surface of the melanin granule, consistent with three-photon absorption. Using the three-photon fluorescence as a beacon, we propose a method for detecting the extent of the melanin granule using photothermal microscopy to measure the phase changes resulting from the heating of the melanin.
Computational modelling of microfluidic capillary breakup phenomena
NASA Astrophysics Data System (ADS)
Li, Yuan; Sprittles, James; Oliver, Jim
2013-11-01
Capillary breakup phenomena occur in microfluidic flows when liquid volumes divide. The fundamental process of breakup is a key factor in the functioning of a number of microfluidic devices such as 3D printers or Lab-on-Chip biomedical technologies. It is well known that the conventional model of breakup is singular as pinch-off is approached, but, despite this, theoretical predictions of the global flow on the millimetre scale appear to agree well with experimental data, at least until the topological change. However, as one approaches smaller scales, where interfacial effects become more dominant, it is likely that such unphysical singularities will influence the global dynamics of the drop formation process. In this talk we develop a computational framework based on the finite element method capable of resolving diverse spatio-temporal scales for the axisymmetric breakup of a liquid jet, so that the pinch-off dynamics can be accurately captured. As well as the conventional model, we discuss the application of the interface formation model to this problem, which allows the pinch-off to be resolved singularity-free, and has already been shown to produce improved flow predictions for related "singular" capillary flows.
Computational Process Modeling for Additive Manufacturing (OSU)
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2015-01-01
Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
Computational Aspects of N-Mixture Models
Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S
2015-01-01
The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
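A minimal version of the truncated N-mixture likelihood, making explicit the role of the bound K that the paper warns about, is sketched below; the parameter values and data are simulated for illustration, and the paper's automatic choice of K is not reproduced here.

```python
import numpy as np
from scipy.stats import binom, poisson

def nmix_loglik(lam, p, counts, K):
    """Log-likelihood of an N-mixture model (Poisson abundance,
    binomial detection), truncating the infinite sum over N at K.
    counts: array of shape (n_sites, n_visits)."""
    ll = 0.0
    for y in counts:                    # loop over sites
        N = np.arange(y.max(), K + 1)   # feasible latent abundances
        # P(y | N, p) across visits, weighted by the Poisson prior on N:
        site_lik = poisson.pmf(N, lam) * np.prod(
            binom.pmf(y[:, None], N[None, :], p), axis=0)
        ll += np.log(site_lik.sum())
    return ll

rng = np.random.default_rng(1)
N_true = rng.poisson(5.0, size=50)                    # latent abundances
counts = rng.binomial(N_true[:, None], 0.3, (50, 4))  # 4 visits per site
for K in (10, 20, 50, 200):             # sensitivity to the truncation bound
    print(f"K = {K:3d} -> logL = {nmix_loglik(5.0, 0.3, counts, K):.3f}")
```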
Computational model for protein unfolding simulation
NASA Astrophysics Data System (ADS)
Tian, Xu-Hong; Zheng, Ye-Han; Jiao, Xiong; Liu, Cai-Xing; Chang, Shan
2011-06-01
The protein folding problem is one of the fundamental and important questions in molecular biology. However, all-atom molecular dynamics studies of protein folding and unfolding are still computationally expensive and severely limited by the time scale of simulation. In this paper, a simple and fast protein unfolding method is proposed based on conformational stability analyses and structure modeling. In this method, two structure-based conditions are considered to identify the unstable regions of proteins during the unfolding processes. The protein unfolding trajectories are mimicked through iterative structure modeling according to conformational stability analyses. Two proteins, chymotrypsin inhibitor 2 (CI2) and the α-spectrin SH3 domain (SH3), were simulated by this method. Their unfolding pathways are consistent with previous molecular dynamics simulations. Furthermore, the transition states of the two proteins were identified in the unfolding processes, and the theoretical Φ values of these transition states showed significant correlations with the experimental data (correlation coefficients >0.8). The results indicate that this method is effective in studying protein unfolding. Moreover, we analyzed and discussed the influence of parameters on the unfolding simulation. This simple coarse-grained model may provide a general and fast approach for mechanism studies of protein folding.
Gravothermal Star Clusters - Theory and Computer Modelling
NASA Astrophysics Data System (ADS)
Spurzem, Rainer
2010-11-01
In the George Darwin Lecture, delivered to the Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote of the evolution of stellar systems that it can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk will show how this physical concept has inspired theoretical modeling of star clusters in the following decades, up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable state of equilibrium of a gravitating star cluster. The trend toward local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was discovered later. Here the state of the art in modeling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (the Fokker-Planck approximation) will be reviewed. Recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also been used very successfully to study dense star clusters around massive black holes in galactic nuclei and even (in a few cases) relativistic supermassive dense objects in the centres of galaxies (here again briefly touching one of the many research fields of V.A. Ambartsumian). For the modern era of high-speed supercomputing, where we are tackling direct N-body simulations of star clusters, we will show that such direct modeling supports and proves the concept of the statistical models based on Fokker-Planck theory, and that both theoretical concepts and direct computer simulations are necessary to support each other and make scientific progress in the study of star cluster evolution.
Cummings, P. T.
2010-02-08
This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.
Getting Mental Models and Computer Models to Cooperate
NASA Technical Reports Server (NTRS)
Sheridan, T. B.; Roseborough, J.; Charney, L.; Mendel, M.
1984-01-01
A qualitative theory of supervisory control is outlined wherein the mental models of one or more human operators are related to the knowledge representations within automatic controllers (observers, estimators) and operator decision aids (expert systems, advice-givers). Methods of quantifying knowledge and the calibration of one knowledge representation to another (human, computer, or objective truth) are discussed. Ongoing experiments in the use of decision aids for exploring one's own objective function or exploring system constraints and control strategies are described.
Computational modeling of acute myocardial infarction.
Sáez, P; Kuhl, E
2016-01-01
Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocyte death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step toward simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size. PMID:26583449
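The kinematic backbone of the continuum theory of multiplicative growth invoked here is the split of the deformation gradient into an elastic part and a growth part,

```latex
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{g},
\qquad
J^{g} = \det \mathbf{F}^{g},
```

so that, for example, J^g < 1 across the wall encodes infarction-induced thinning, while stress is generated only by the elastic part F^e. (The specific evolution laws for F^g are the paper's contribution and are not reproduced here.)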
Random matrix model of adiabatic quantum computing
Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.
2005-05-15
We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
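The Brody parameter β mentioned above indexes the standard one-parameter family of nearest-neighbor spacing distributions,

```latex
P_\beta(s) = (\beta+1)\, b\, s^{\beta} \exp\!\left(-b\, s^{\beta+1}\right),
\qquad
b = \left[\Gamma\!\left(\tfrac{\beta+2}{\beta+1}\right)\right]^{\beta+1},
```

which recovers the Poisson distribution at β = 0 (the regular, easy-instance regime) and the Wigner surmise at β = 1 (the chaotic, hard-instance regime).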
Computational and Modeling Strategies for Cell Motility
NASA Astrophysics Data System (ADS)
Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory
A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback
Computational modeling of composite material fires.
Brown, Alexander L.; Erickson, Kenneth L.; Hubbard, Joshua Allen; Dodd, Amanda B.
2010-10-01
condition is examined to study the propagation of decomposition fronts of the epoxy and carbon fiber and their dependence on the ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small scaled experimental efforts to attain adequate data used to validate model predictions is ongoing. The goal of this paper is to demonstrate the progress of the capability for a typical composite material and emphasize the path forward.
AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS
Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang
2010-08-01
The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data are a very high priority. Following a loss of coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate, with additional heat generated by the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.
COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS
Mathur, M.P.; Freeman, Mark; Gera, Dinesh
2001-11-06
In the current fiscal year, FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NOx inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors, and to explore the possibility of using biomass as a reburning fuel to reduce NOx. Some additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O2/CO2 environments. Most of the CFD models available in the literature treat particles as point masses with uniform temperature inside the particles. This isothermal condition may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict the carbon burnout of larger biomass particles. Anisotropy in heat transfer will be explored using different conductivities in the radial and axial directions. The above models will be validated/tested on various full-scale industrial boilers. The current NOx modules will be modified to account for local CH, CH2, and CH3 radical chemistry; currently they are based on global chemistry. It may also be worth exploring the effect of an enriched O2/CO2 environment on carbon burnout and NOx concentration. The research objective of this study is to develop a three-dimensional combustor model for biomass co-firing and reburning applications using the Fluent computational fluid dynamics code.
Computational modeling of solid oxide fuel cell
NASA Astrophysics Data System (ADS)
Penmetsa, Satish Kumar
In the ongoing search for alternative and environmentally friendly power generation facilities, the solid oxide fuel cell (SOFC) is considered one of the prime candidates for the next generation of energy conversion devices due to its capability to provide environmentally friendly and highly efficient power generation. Moreover, SOFCs are less sensitive to the composition of the fuel than other types of fuel cells, and internal reforming of hydrocarbon fuels can be performed because of the higher operating temperature range of 700°C to 1000°C. This allows different types of hydrocarbon fuels to be used in SOFCs. The objective of this study is to develop a three-dimensional computational model for the simulation of a solid oxide fuel cell unit, to analyze the complex internal transport mechanisms and the sensitivity of the cell to different operating conditions, and to develop SOFCs with higher operating current densities, more uniform gas distributions in the electrodes, and lower ohmic losses. This model includes mass transfer processes due to convection and diffusion in the gas flow channels based on the Navier-Stokes equations, combined diffusion and advection in the electrodes using Brinkman's hydrodynamic equation, and the associated electrochemical reactions in the trilayer of the SOFC. Gas transport characteristics, in terms of three-dimensional spatial distributions of the reactant gases and their effects on the electrochemical reactions at the electrode-electrolyte interface and on the resulting polarizations, are evaluated for varying pressure conditions. Results show the significance of Brinkman's hydrodynamic model in the electrodes for achieving more uniform gas concentration distributions while using a higher operating pressure and over a higher range of operating current densities.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
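The "fill in pre-defined model routines" pattern can be pictured with a toy sketch. The routine names and the cell-sorting rule below are hypothetical (the actual Biocellion API is C++ and differs); the point is only that the modeler supplies local rules while the framework owns the sweep over agents and its parallelization.

    import random

    class CellSortingModel:
        def init_agents(self, n):
            # two adhesive cell types scattered on a 1-D ring of sites
            return [random.choice("AB") for _ in range(n)]

        def update_agent(self, agents, i):
            # toy local rule: swap cell i with its right neighbor when the
            # swap increases the number of like-type neighbor pairs
            n = len(agents)
            j = (i + 1) % n
            def like_pairs():
                return sum(agents[k] == agents[(k + 1) % n] for k in range(n))
            before = like_pairs()
            agents[i], agents[j] = agents[j], agents[i]
            if like_pairs() < before:
                agents[i], agents[j] = agents[j], agents[i]  # revert bad swaps

    model = CellSortingModel()
    cells = model.init_agents(40)
    for step in range(2000):      # the framework would parallelize this sweep
        model.update_agent(cells, random.randrange(len(cells)))
    print("".join(cells))         # like types cluster: a 1-D caricature of cell sorting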
Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models
The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...
Predictive Capability Maturity Model for computational modeling and simulation.
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
Precise orbit computation and sea surface modeling
NASA Technical Reports Server (NTRS)
Wakker, Karel F.; Ambrosius, B. A. C.; Rummel, R.; Vermaat, E.; Deruijter, W. P. M.; Vandermade, J. W.; Zimmerman, J. T. F.
1991-01-01
The research project described below is part of a long-term program at Delft University of Technology aiming at the application of European Remote Sensing satellite (ERS-1) and TOPEX/POSEIDON altimeter measurements for geophysical purposes. This program started in 1980 with the processing of Seasat laser range and altimeter height measurements and concentrates today on the analysis of Geosat altimeter data. The objectives of the TOPEX/POSEIDON research project are the tracking of the satellite by the Dutch mobile laser tracking system MTLRS-2, the computation of precise TOPEX/POSEIDON orbits, the analysis of the spatial and temporal distribution of the orbit errors, the improvement of ERS-1 orbits through the information obtained from the altimeter crossover difference residuals for crossing ERS-1 and TOPEX/POSEIDON tracks, the combination of ERS-1 and TOPEX/POSEIDON altimeter data into a single high-precision data set, and the application of this data set to model the sea surface. The latter application will focus on the determination of detailed regional mean sea surfaces, sea surface variability, ocean topography, and ocean currents in the North Atlantic, the North Sea, the seas around Indonesia, the West Pacific, and the oceans around South Africa.
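In simplified form (our notation), the crossover technique rests on the fact that at a point where an ascending and a descending ground track intersect, the two altimeter height measurements refer to the same sea surface, so the crossover difference

    d = h_{ERS\text{-}1} - h_{T/P} \approx \Delta r_{ERS\text{-}1} - \Delta r_{T/P} + \epsilon,

where Delta r denotes the radial orbit error of each satellite and epsilon collects time-variable sea surface signal and measurement noise. Because the TOPEX/POSEIDON radial orbit error is expected to be much smaller, the residuals d are dominated by the ERS-1 orbit error and can be used to improve the ERS-1 orbits, as described above.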
Structural computational modeling of RNA aptamers.
Xu, Xiaojun; Dickey, David D; Chen, Shi-Jie; Giangrande, Paloma H
2016-07-01
RNA aptamers represent an emerging class of biologics that can be easily adapted for personalized and precision medicine. Several therapeutic aptamers with desirable binding and functional properties have been developed and evaluated in preclinical studies over the past 25 years. However, for the majority of these aptamers, their clinical potential has yet to be realized. A significant hurdle to the clinical adoption of this novel class of biologics is the limited information on their secondary and tertiary structure. Knowledge of the RNA's structure would greatly facilitate and expedite the post-selection optimization steps required for translation, including truncation (to reduce costs of manufacturing), chemical modification (to enhance stability and improve safety) and chemical conjugation (to improve drug properties for combinatorial therapy). Here we describe a structural computational modeling methodology that, when coupled to a standard functional assay, can be used to determine key sequence and structural motifs of an RNA aptamer. We applied this methodology to enable the truncation of an aptamer to prostate-specific membrane antigen (PSMA) with great potential for targeted therapy that had failed previous truncation attempts. This methodology can be easily applied to optimize other aptamers with therapeutic potential. PMID:26972787
Computational modeling of epidural cortical stimulation
NASA Astrophysics Data System (ADS)
Wongsarnpigoon, Amorn; Grill, Warren M.
2008-12-01
Epidural cortical stimulation (ECS) is a developing therapy to treat neurological disorders. However, it is not clear how the cortical anatomy or the polarity and position of the electrode affects current flow and neural activation in the cortex. We developed a 3D computational model simulating ECS over the precentral gyrus. With the electrode placed directly above the gyrus, about half of the stimulus current flowed through the crown of the gyrus while current density was low along the banks deep in the sulci. Beneath the electrode, neurons oriented perpendicular to the cortical surface were depolarized by anodic stimulation, and neurons oriented parallel to the boundary were depolarized by cathodic stimulation. Activation was localized to the crown of the gyrus, and neurons on the banks deep in the sulci were not polarized. During regulated voltage stimulation, the magnitude of the activating function was inversely proportional to the thickness of the CSF and dura. During regulated current stimulation, the activating function was not sensitive to the thickness of the dura but was slightly more sensitive than during regulated voltage stimulation to the thickness of the CSF. Varying the width of the gyrus and the position of the electrode altered the distribution of the activating function due to changes in the orientation of the neurons beneath the electrode. Bipolar stimulation, although often used in clinical practice, reduced spatial selectivity as well as selectivity for neuron orientation.
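The activating function referred to above is, in its standard form for a neural process aligned with a local coordinate x in an extracellular potential V_e (our notation, following the usual definition rather than the paper's specific implementation),

    f(x) = \frac{\partial^{2} V_{e}}{\partial x^{2}},

with depolarization predicted where f > 0 and hyperpolarization where f < 0. This is why electrode polarity and neuron orientation jointly determine which populations are activated, and why the field's sensitivity to CSF and dura thickness differs between regulated-voltage and regulated-current stimulation.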
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.; Olariu, Stephen
1995-01-01
The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes. This results in a streamlined exchange of data between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.
ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING
The overall goal of the EPA-ORD NERL research program on Computational Toxicology (CompTox) is to provide the Agency with the tools of modern chemistry, biology, and computing to improve quantitative risk assessments and reduce uncertainties in the source-to-adverse outcome conti...
Learning Anatomy: Do New Computer Models Improve Spatial Understanding?
ERIC Educational Resources Information Center
Garg, Amit; Norman, Geoff; Spero, Lawrence; Taylor, Ian
1999-01-01
Assesses desktop-computer models that rotate in virtual three-dimensional space. Compares spatial learning with a computer carpal-bone model horizontally rotating at 10-degree views with the same model rotating at 90-degree views. (Author/CCM)
Computational Modeling of Magnetically Actuated Propellant Orientation
NASA Technical Reports Server (NTRS)
Hochstein, John I.
1996-01-01
sufficient performance to support cryogenic propellant management tasks. In late 1992, NASA MSFC began a new investigation in this technology commencing with the design of the Magnetically-Actuated Propellant Orientation (MAPO) experiment. A mixture of ferrofluid and water is used to simulate the paramagnetic properties of LOX and the experiment is being flown on the KC-135 aircraft to provide a reduced gravity environment. The influence of a 0.4 Tesla ring magnet on flow into and out of a subscale Plexiglas tank is being recorded on video tape. The most efficient approach to evaluating the feasibility of MAPO is to complement the experimental program with development of a computational tool to model the process of interest. The goal of the present research is to develop such a tool. Once confidence in its fidelity is established by comparison to data from the MAPO experiment, it can be used to assist in the design of future experiments and to study the parameter space of the process. Ultimately, it is hoped that the computational model can serve as a design tool for full-scale spacecraft applications.
Idealized computational models for auditory receptive fields.
Lindeberg, Tony; Friberg, Anders
2015-01-01
We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
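The Gammatone filters mentioned above have a simple closed form; a minimal sketch for generating one channel's impulse response follows (the standard textbook form with illustrative parameters, not the paper's generalized family):

    import numpy as np

    def gammatone_ir(f_c, n=4, b=125.0, fs=16000, dur=0.05):
        # Gammatone impulse response: g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f_c*t)
        # f_c: center frequency (Hz), n: filter order, b: bandwidth parameter (Hz)
        t = np.arange(0.0, dur, 1.0 / fs)
        g = t**(n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f_c * t)
        return g / np.max(np.abs(g))   # peak-normalized

    ir = gammatone_ir(1000.0)  # one auditory channel centered at 1 kHz

The generalized family derived in the paper adds further degrees of freedom to this form to trade spectral selectivity against temporal delay.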
Computer modeling of a convective steam superheater
NASA Astrophysics Data System (ADS)
Trojan, Marcin
2015-03-01
A superheater generates superheated steam from the saturated steam leaving the evaporator. In pulverized coal fired boilers, even a relatively small amount of ash causes problems with fouling of the heating surfaces, including the superheaters. In the convection pass of the boiler, the flue gas temperature is lower and ash deposits can be loose or sintered. Ash fouling not only reduces heat transfer from the flue gas to the steam, but also causes a higher pressure drop along the flue gas flow path; when the pressure drop is greater, the power consumed by the fan increases. If the superheater surfaces are covered with ash, the steam temperature at the outlet of the superheater stages falls and the flow rate of water injected into the attemperator must be reduced. There is also an increase in flue gas temperature after the individual superheater stages. Consequently, this leads to a reduction in boiler efficiency. The paper presents the results of computational fluid dynamics simulations of the first-stage superheater of the OP-210M boiler using commercial software. The temperature distributions of the steam and flue gas along their flow paths are determined, together with the temperatures of the tube walls and of the ash deposits. The calculated steam temperature is compared with measurement results. Knowledge of these temperatures is of great practical importance because it allows the grade of steel for a given superheater stage to be chosen. Using the developed model of the superheater to determine its degree of ash fouling in on-line mode, one can control the activation frequency of the steam sootblowers.
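The effect of the deposits can be summarized with a one-dimensional series-resistance estimate of the local heat flux from flue gas to steam (a simplification of the CFD treatment; notation ours):

    q = \frac{T_g - T_s}{\,1/h_g + \delta_a/\lambda_a + \delta_w/\lambda_w + 1/h_s\,},

where h_g and h_s are the gas-side and steam-side heat transfer coefficients, delta_w/lambda_w is the tube-wall conduction resistance, and delta_a/lambda_a is the ash-layer resistance. As the deposit thickness delta_a grows, q falls and the flue gas leaves the stage hotter, which is exactly the behavior described above.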
Ablative Rocket Deflector Testing and Computational Modeling
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Lott, Jeffrey W.; Raines, Nickey
2010-01-01
A deflector risk mitigation program was recently conducted at the NASA Stennis Space Center. The primary objective was to develop a database that characterizes the behavior of industry-grade refractory materials subjected to rocket plume impingement conditions commonly experienced on static test stands. The program consisted of short and long duration engine tests where the supersonic exhaust flow from the engine impinged on an ablative panel. Quasi time-dependent erosion depths and patterns generated by the plume impingement were recorded for a variety of different ablative materials. The erosion behavior was found to be highly dependent on the material's composition and corresponding thermal properties. For example, in the case of the HP CAST 93Z ablative material, the erosion rate actually decreased under continued thermal heating conditions due to the formation of a low thermal conductivity "crystallization" layer. The "crystallization" layer produced near the surface of the material provided an effective insulation from the hot rocket exhaust plume. To gain further insight into the complex interaction of the plume with the ablative deflector, computational fluid dynamic modeling was performed in parallel to the ablative panel testing. The results from the current study demonstrated that locally high heating occurred due to shock reflections. These localized regions of shock-induced heat flux resulted in non-uniform erosion of the ablative panels. In turn, it was observed that the non-uniform erosion exacerbated the localized shock heating causing eventual plume separation and reversed flow for long duration tests under certain conditions. Overall, the flow simulations compared very well with the available experimental data obtained during this project.
Performance Models for Split-execution Computing Systems
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan; Seddiqi, Hadayat; Britt, Keith A; Imam, Neena
2016-01-01
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
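Schematically, such a behavioral timing model decomposes the wall-clock cost of one split-execution solve as (our simplified notation)

    T_{total} = T_{translate} + T_{transfer} + T_{QPU} + T_{readout},

and the paper's conclusion corresponds to the interface terms T_translate + T_transfer + T_readout dominating T_QPU, so that the total is largely independent of quantum processor behavior.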
An Integrative Model of Factors Related to Computing Course Performance.
ERIC Educational Resources Information Center
Charlton, John P.; Birkett, Paul E.
1999-01-01
A path-modeling approach is adopted to examine interrelationships between factors influencing computing behavior and computer course performance. Factors considered are gender, personality, intellect and computer attitudes, ownership, and experience. Intrinsic motivation is suggested as a major factor which can explain many variables' relationship…
Los Alamos CCS (Center for Computer Security) formal computer security model
Dreicer, J.S.; Hunteman, W.J.
1989-01-01
This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The initial motivation for this effort was the need to provide a method by which DOE computer security policy implementation could be tested and verified. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present models. Formal mathematical models for computer security have been designed and developed in conjunction with attempts to build secure computer systems since the early 1970s. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The mathematical basis appears to be justified and is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell-LaPadula abstract sets of objects and subjects. 5 refs.
Integrated Multiscale Modeling of Molecular Computing Devices
Jerzy Bernholc
2011-02-03
Conventional silicon technology will some day reach a miniaturization limit, forcing designers of Si-based electronics to pursue increased performance by other means. Any other alternative approach would have the unenviable task of matching the ability of Si technology to pack more than a billion interconnected and addressable devices on a chip the size of a thumbnail. Nevertheless, the prospects of developing alternative approaches to fabricate electronic devices have spurred an ever-increasing pace of fundamental research. One of the promising possibilities is molecular electronics (ME): self-assembled molecular-based electronic systems composed of single-molecule devices in ultra-dense, ultra-fast, molecular-sized components. This project focused on developing accurate, reliable theoretical modeling capabilities for describing molecular electronics devices. The participants in the project are given in Table 1. The primary outcomes of this fundamental computational science grant are publications in the open scientific literature. As listed below, 62 papers have been published from this project. In addition, the research has also been the subject of more than 100 invited talks at conferences, including several plenary or keynote lectures. Many of the goals of the original proposal were completed. Specifically, the multi-disciplinary group developed a unique set of capabilities and tools for investigating electron transport in fabricated and self-assembled nanostructures at multiple length and time scales.
Simulating complex intracellular processes using object-oriented computational modelling.
Johnson, Colin G; Goldman, Jacki P; Gullick, William J
2004-11-01
The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation. PMID:15302205
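A toy sketch of the object-oriented style described here: molecules are objects, interaction rules are methods, and population-level behavior emerges from many local interactions. The classes, binding rule, and rate value are invented for illustration, not taken from the paper.

    import random

    class Ligand:
        pass

    class Receptor:
        def __init__(self):
            self.bound = None

        def try_bind(self, ligand, p_bind=0.1):
            # stochastic binding rule: an unoccupied receptor captures a
            # nearby ligand with probability p_bind per time step
            if self.bound is None and random.random() < p_bind:
                self.bound = ligand

    receptors = [Receptor() for _ in range(100)]
    ligands = [Ligand() for _ in range(100)]
    for step in range(50):                      # discrete time steps
        for r, l in zip(receptors, ligands):
            r.try_bind(l)

    occupied = sum(r.bound is not None for r in receptors)
    print(f"bound receptors after 50 steps: {occupied}")
    # the emergent population-level quantity (receptor occupancy kinetics)
    # arises from many independent object interactions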
PC BEEPOP - A PERSONAL COMPUTER HONEY BEE POPULATION DYNAMICS MODEL
PC BEEPOP is a computer model that simulates honey bee (Apis mellifera L.) colony population dynamics. The model consists of a system of interdependent elements, including colony condition, environmental variability, colony energetics, and contaminant exposure. It includes a mortal...
Computer Center: BASIC String Models of Genetic Information Transfer.
ERIC Educational Resources Information Center
Spain, James D., Ed.
1984-01-01
Discusses some of the major genetic information processes which may be modeled by computer program string manipulation, focusing on replication and transcription. Also discusses instructional applications of using string models. (JN)
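The same string-manipulation idea in a modern sketch (Python rather than BASIC; the sequence is illustrative):

    # String models of replication and transcription, in the spirit
    # described above.
    dna = "ATGGCATTC"

    # replication: build the complementary DNA strand (A<->T, G<->C)
    dna_complement = dna.translate(str.maketrans("ATGC", "TACG"))

    # transcription of the template strand: A pairs with U instead of T
    mrna = dna.translate(str.maketrans("ATGC", "UACG"))

    print(dna_complement)  # TACCGTAAG
    print(mrna)            # UACCGUAAG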
On the capabilities and computational costs of neuron models.
Skocik, Michael J; Long, Lyle N
2014-08-01
We review the Hodgkin-Huxley, Izhikevich, and leaky integrate-and-fire neuron models in regular spiking modes, solved with the forward Euler, fourth-order Runge-Kutta, and exponential Euler methods, and determine the necessary time steps and corresponding computational costs required to make the solutions accurate. We conclude that the leaky integrate-and-fire model requires the fewest computations, and that the Hodgkin-Huxley and Izhikevich models are comparable in computational cost. PMID:25050945
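For concreteness, the cheapest of the three models can be stated in a few lines. This is a generic leaky integrate-and-fire neuron advanced with forward Euler, with illustrative parameter values rather than those of the study:

    # Leaky integrate-and-fire neuron, forward Euler, regular spiking
    # under constant injected current. Parameter values are illustrative.
    dt = 0.1e-3                 # time step, s
    tau_m, R = 20e-3, 10e6      # membrane time constant (s), resistance (ohm)
    V_rest, V_th, V_reset = -70e-3, -54e-3, -70e-3   # volts
    I = 2.0e-9                  # injected current, A

    V, spikes = V_rest, []
    for step in range(int(1.0 / dt)):            # simulate 1 s
        dV = (-(V - V_rest) + R * I) / tau_m     # LIF membrane equation
        V += dt * dV                             # forward Euler update
        if V >= V_th:                            # threshold crossing
            spikes.append(step * dt)
            V = V_reset
    print(f"firing rate: {len(spikes)} Hz")

One update per time step with a single multiply-add plus a threshold test is why LIF comes out far cheaper than the four-variable Hodgkin-Huxley equations at comparable accuracy.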
Overview of ASC Capability Computing System Governance Model
Doebling, Scott W.
2012-07-11
This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.
Computer Simulation Models of Economic Systems in Higher Education.
ERIC Educational Resources Information Center
Smith, Lester Sanford
The increasing complexity of educational operations makes analytical tools, such as computer simulation models, especially desirable for educational administrators. This MA thesis examined the feasibility of developing computer simulation models of economic systems in higher education to assist decision makers in allocating resources. The report…
Computers, Modeling and Management Education. Technical Report No. 6.
ERIC Educational Resources Information Center
Bonini, Charles P.
The report begins with a brief examination of the role of computer modeling in management decision-making. Then, some of the difficulties of implementing computer modeling are examined, and finally, the educational implications of these issues are raised, and a comparison is made between what is currently being done and what might be done to…
A Model for Guiding Undergraduates to Success in Computational Science
ERIC Educational Resources Information Center
Olagunju, Amos O.; Fisher, Paul; Adeyeye, John
2007-01-01
This paper presents a model for guiding undergraduates to success in computational science. A set of integrated, interdisciplinary training and research activities is outlined for use as a vehicle to increase and produce graduates with research experiences in computational and mathematical sciences. The model is responsive to the development of…
Using Computational Simulations to Confront Students' Mental Models
ERIC Educational Resources Information Center
Rodrigues, R.; Carvalho, P. Simeão
2014-01-01
In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…
World Knowledge in Computational Models of Discourse Comprehension
ERIC Educational Resources Information Center
Frank, Stefan L.; Koppen, Mathieu; Noordman, Leo G. M.; Vonk, Wietske
2008-01-01
Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, "ad hoc" selection, extraction from…
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
Compliance with the 40 CFR Part 191 Disposal Regulations; Compliance Certification and Re-certification; General Requirements; § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
Compliance with the 40 CFR Part 191 Disposal Regulations; Compliance Certification and Re-certification; General Requirements; § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
Compliance with the 40 CFR Part 191 Disposal Regulations; Compliance Certification and Re-certification; General Requirements; § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
Compliance with the 40 CFR Part 191 Disposal Regulations; Compliance Certification and Re-certification; General Requirements; § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
Compliance with the 40 CFR Part 191 Disposal Regulations; Compliance Certification and Re-certification; General Requirements; § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ...
Computational Modeling for Language Acquisition: A Tutorial with Syntactic Islands
ERIC Educational Resources Information Center
Pearl, Lisa S.; Sprouse, Jon
2015-01-01
Purpose: Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. Method: We provide a general…
Generate rigorous pyrolysis models for olefins production by computer
Klein, M.T.; Broadbelt, L.J.; Grittman, D.H.
1997-04-01
With recent advances in the automation of the model-building process for large networks of kinetic equations, it may become feasible to generate computer pyrolysis models for naphtha and gas oil feedstocks. The potential benefit of a rigorous mechanistic model for these relatively complex liquid feedstocks is great, owing to their diverse characterizations and yield spectra. An ethane pyrolysis example is used to illustrate the computer generation of reaction mechanism models.
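The networks such generators enumerate are built from elementary free-radical steps. For ethane, the classic Rice-Herzfeld chain serves as an illustration; the minimal sketch below only stores the network (rate constants and the actual generation logic are omitted, and species strings are illustrative):

    # Sketch of the kind of elementary-step network an automated model
    # builder enumerates, shown for the Rice-Herzfeld ethane chain.
    mechanism = [
        ("initiation",    ["C2H6"],           ["CH3*", "CH3*"]),
        ("H-abstraction", ["CH3*", "C2H6"],   ["CH4", "C2H5*"]),
        ("beta-scission", ["C2H5*"],          ["C2H4", "H*"]),
        ("H-abstraction", ["H*", "C2H6"],     ["H2", "C2H5*"]),
        ("termination",   ["C2H5*", "C2H5*"], ["C4H10"]),
    ]
    species = sorted({s for _, reactants, products in mechanism
                      for s in reactants + products})
    print(len(mechanism), "reactions over", len(species), "species")

An automated builder applies reaction templates of exactly these families (initiation, H-abstraction, beta-scission, termination) recursively to every species it creates, which is how the network grows to thousands of steps for naphtha-range feeds.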
Ambient temperature modelling with soft computing techniques
Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo
2010-07-15
This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
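A minimal, library-free sketch of the hybrid scheme: a genetic algorithm evolves ANN weight vectors, and a few individuals are seeded by a back-propagation run. The BP step, fitness function, and all hyperparameters below are placeholders, not those of the paper.

    import random

    def random_weights(n):              # a fresh ANN weight vector
        return [random.uniform(-1, 1) for _ in range(n)]

    def backprop_train(w):              # stand-in for a short BP training run
        return [wi * 0.9 for wi in w]   # placeholder update only

    def fitness(w):                     # stand-in for validation error (lower = better)
        return sum(wi * wi for wi in w)

    POP, N = 20, 50
    population = [random_weights(N) for _ in range(POP)]
    for k in range(3):                  # BP initializes a few GA individuals
        population[k] = backprop_train(population[k])

    for gen in range(100):              # plain generational GA
        population.sort(key=fitness)
        parents = population[: POP // 2]
        children = []
        for _ in range(POP - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(N)
            child = a[:cut] + b[cut:]             # one-point crossover
            i = random.randrange(N)
            child[i] += random.gauss(0, 0.1)      # Gaussian mutation
            children.append(child)
        population = parents + children

    print("best fitness:", fitness(min(population, key=fitness)))

The design intent is that BP supplies a few good starting points so the GA search begins near useful basins, while the GA's crossover and mutation keep exploring regions BP alone would miss.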
A computational model of the human hand 93-ERI-053
Hollerbach, K.; Axelrod, T.
1996-03-01
The objectives of the Computational Hand Modeling project were to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopaedic modeling.
Computer Modeling and Research in the Classroom
ERIC Educational Resources Information Center
Ramos, Maria Joao; Fernandes, Pedro Alexandrino
2005-01-01
We report on a computational chemistry course for undergraduate students that successfully incorporated a research project on the design of new contrast agents for magnetic resonance imaging and shift reagents for in vivo NMR. Course outcomes were positive: students were quite motivated during the whole year--they learned what was required of…
Computational and mathematical models of microstructural evolution
Bullard, J.W.; Chen, L.Q.; Kalia, R.K.; Stoneham, A.M.
1998-12-31
This symposium was designed to bring together the foremost materials theorists and applied mathematicians from around the world to share and discuss some of the newest and most promising mathematical and computational tools for simulating, understanding, and predicting the various complex processes that occur during the evolution of microstructures. Separate abstracts were prepared for 25 papers.
Computer modeling of ORNL storage tank sludge mobilization and mixing
Terrones, G.; Eyler, L.L.
1993-09-01
This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks.
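Sludges with the non-Newtonian character noted above are often idealized as Bingham plastics; whether this particular closure was used in the TEMPEST runs is not stated here, but the form is representative:

    \tau = \tau_y + \mu_p \,\dot{\gamma} \quad \text{for } |\tau| > \tau_y, \qquad \dot{\gamma} = 0 \ \text{otherwise},

where tau_y is the yield stress and mu_p the plastic viscosity. The yield stress is what makes jet mobilization nontrivial: regions of the tank where the jet-induced shear stress never exceeds tau_y simply do not move, regardless of how long the jets run.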
Operation of the computer model for microenvironment atomic oxygen exposure
NASA Technical Reports Server (NTRS)
Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.
1995-01-01
A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.
A stirling engine computer model for performance calculations
NASA Technical Reports Server (NTRS)
Tew, R.; Jefferies, K.; Miao, D.
1978-01-01
To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.
Computational neurorehabilitation: modeling plasticity and learning to predict recovery.
Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas
2016-01-01
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity. PMID:27130577
Ku-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Magnusson, H. G.; Goff, M. F.
1984-01-01
All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.
Computational Modeling of NEXT 2000-Hour Wear Test Results
NASA Technical Reports Server (NTRS)
Malone, Shane P.
2004-01-01
Ion optics computational models are invaluable tools for the design of ion optics systems. In this study, a new computational model developed by an outside vendor for NASA Glenn Research Center (GRC) is presented. This model is a gun code which has been modified to model the plasma sheaths both upstream and downstream of the ion optics. The model handles multiple species (e.g. singly and doubly-charged ions) and includes a charge-exchange model for erosion estimates. The model uses commercially available solid design and meshing software, allowing high flexibility in ion optics geometric configurations. This computational model is compared to experimental results from the NASA Evolutionary Xenon Thruster (NEXT) 2000-hour wear test, including over-focusing along the edge apertures, pit-and-groove erosion due to charge exchange, and beamlet distortion at the edge of the hole pattern.
Transient upset models in computer systems
NASA Technical Reports Server (NTRS)
Mason, G. M.
1983-01-01
Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple, external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for different states of the Z-80 and 8085 based system.
Computational Electromagnetic Modeling of SansEC(Trade Mark) Sensors
NASA Technical Reports Server (NTRS)
Smith, Laura J.; Dudley, Kenneth L.; Szatkowski, George N.
2011-01-01
This paper describes the preliminary effort to apply computational design tools to aid in the development of an electromagnetic SansEC resonant sensor composite materials damage detection system. The computational methods and models employed on this research problem will evolve in complexity over time and will lead to the development of new computational methods and experimental sensor systems that demonstrate the capability to detect, diagnose, and monitor the damage of composite materials and structures on aerospace vehicles.
Recent developments in multiphysics computational models of physiological flows
NASA Astrophysics Data System (ADS)
Eldredge, Jeff D.; Mittal, Rajat
2016-04-01
A mini-symposium on computational modeling of fluid-structure interactions and other multiphysics in physiological flows was held at the 11th World Congress on Computational Mechanics in July 2014 in Barcelona, Spain. This special issue of Theoretical and Computational Fluid Dynamics contains papers from among the participants of the mini-symposium. The present paper provides an overview of the mini-symposium and the special issue.
Cancer Evolution: Mathematical Models and Computational Inference
Beerenwinkel, Niko; Schwarz, Roland F.; Gerstung, Moritz; Markowetz, Florian
2015-01-01
Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy. PMID:25293804
Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.; Erdemir, Ahmet; Guess, Trent; Reinbolt, Jeff
2009-05-01
In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.
Three-dimensional cardiac computational modelling: methods, features and applications.
Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M
2015-01-01
The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given alive subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the necessary components to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases. PMID:25928297
Computational model for Halorhodopsin photocurrent kinetics
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Stefanescu, Roxana; Talathi, Sachin
2013-03-01
Optogenetics is a rapidly developing novel optical stimulation technique that employs light-activated ion channels to excite (using channelrhodopsin (ChR)) or suppress (using halorhodopsin (HR)) impulse activity in neurons with high temporal and spatial resolution. This technique holds enormous potential to externally control activity states in neuronal networks. The channel kinetics of ChR and HR are well understood and amenable to mathematical modeling. Significant progress has been made in recent years to develop models for ChR channel kinetics. To date, however, there is no model to mimic photocurrents produced by HR. Here, we report the first model developed for HR photocurrents, based on a four-state model of the HR photocurrent kinetics. The model provides an excellent fit (root-mean-square error of 3.1862×10⁻⁴) to an empirical profile of experimentally measured HR photocurrents. In combination, mathematical models for ChR and HR photocurrents can provide effective means to design and test light-based control systems to regulate neural activity, which in turn may have implications for the development of novel light-based stimulation paradigms for brain disease control. I would like to thank the University of Florida and the Physics Research Experience for Undergraduates (REU) program, funded through NSF DMR-1156737. This research was also supported through start-up funds provided to Dr. Sachin Talathi.
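The abstract does not give the rate structure or constants of the four-state scheme, so the following is a generic sketch of a light-driven four-state photocycle (two conducting states, two closed states) with invented rates, integrated by forward Euler:

    import numpy as np

    def simulate(light_on, t_end=0.5, dt=1e-4):
        # states [O1, O2, C1, C2]: two conducting (O) and two closed (C).
        # All rate constants (1/s) and conductances are invented for illustration.
        kC1O1, kO1O2, kO2C2, kC2C1, kO1C1 = 200.0, 50.0, 40.0, 20.0, 80.0
        p = np.array([0.0, 0.0, 1.0, 0.0])   # start fully closed in C1
        g = np.array([1.0, 0.4, 0.0, 0.0])   # relative conductance per state
        current = []
        for t in np.arange(0.0, t_end, dt):
            L = 1.0 if light_on(t) else 0.0  # illumination gates C1 -> O1
            dO1 = L * kC1O1 * p[2] - (kO1O2 + kO1C1) * p[0]
            dO2 = kO1O2 * p[0] - kO2C2 * p[1]
            dC1 = kC2C1 * p[3] + kO1C1 * p[0] - L * kC1O1 * p[2]
            dC2 = kO2C2 * p[1] - kC2C1 * p[3]
            p = p + dt * np.array([dO1, dO2, dC1, dC2])   # forward Euler
            current.append(np.dot(g, p))     # photocurrent ~ conducting occupancy
        return np.array(current)

    I = simulate(lambda t: 0.05 <= t < 0.25)  # 200 ms light pulse
    print("peak relative photocurrent:", I.max())

Fitting a model of this type means adjusting the rate constants (and the light-dependence of the gated transitions) until the simulated current trace matches the measured photocurrent profile, which is presumably how the reported root-mean-square error was obtained.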
Advances in Computationally Modeling Human Oral Bioavailability
Wang, Junmei; Hou, Tingjun
2015-01-01
Although significant progress has been made in experimental high throughput screening (HTS) of ADME (absorption, distribution, metabolism, excretion) and pharmacokinetic properties, the ADME and Toxicity (ADME-Tox) in silico modeling is still indispensable in drug discovery as it can guide us to wisely select drug candidates prior to expensive ADME screenings and clinical trials. Compared to other ADME-Tox properties, human oral bioavailability (HOBA) is particularly important but extremely difficult to predict. In this paper, the advances in human oral bioavailability modeling will be reviewed. Moreover, our deep insights into how to construct more accurate and reliable HOBA QSAR and classification models will also be discussed. PMID:25582307
Several Computational Opportunities and Challenges Associated with Climate Change Modeling
Wang, Dali; Post, Wilfred M; Wilson, Bruce E
2010-01-01
One of the key factors in the improved understanding of climate science is the development and improvement of high fidelity climate models. These models are critical for projections of future climate scenarios, as well as for highlighting the areas where further measurement and experimentation are needed for knowledge improvement. In this paper, we focus on several computing issues associated with climate change modeling. First, we review a fully coupled global simulation and a nested regional climate model to demonstrate key design components, and then we explain the underlying restrictions associated with the temporal and spatial scales of climate change modeling. We then discuss the role of high-end computers in climate change sciences. Finally, we explain the importance of fostering regional, integrated climate impact analysis. Although we discuss the computational challenges associated with climate change modeling, we hope those considerations can also be beneficial to many other modeling research programs involving multiscale system dynamics.
A computationally tractable version of the collective model
NASA Astrophysics Data System (ADS)
Rowe, D. J.
2004-05-01
A computationally tractable version of the Bohr-Mottelson collective model is presented which makes it possible to diagonalize realistic collective models and obtain convergent results in relatively small appropriately chosen subspaces of the collective model Hilbert space. Special features of the proposed model are that it makes use of the beta wave functions given analytically by the softened-beta version of the Wilets-Jean model, proposed by Elliott et al., and a simple algorithm for computing SO(5)⊃SO(3) spherical harmonics. The latter has much in common with the methods of Chacon, Moshinsky, and Sharp but is conceptually and computationally simpler. Results are presented for collective models ranging from the spherical vibrator to the Wilets-Jean and axially symmetric rotor-vibrator models.
Utilizing Cloud Computing to Improve Climate Modeling and Studies
NASA Astrophysics Data System (ADS)
Li, Z.; Yang, C.; Liu, K.; Sun, M.; XIA, J.; Huang, Q.
2013-12-01
Climate studies have become increasingly important due to global climate change, one of the biggest challenges for humanity in the 21st century. Climate data, not only observational data collected from various sensors but also simulated data generated from diverse climate models, are essential for scientists to explore potential climate change patterns and analyze the complex climate dynamics. Climate modeling and simulation, a critical methodology for simulating the past and predicting future climate conditions, can produce huge amounts of data that contain potentially valuable information for climate studies. However, using modeling methods in climate studies poses at least two challenges for scientists. First, running climate models is a computing-intensive process, which requires large amounts of computation resources. Second, running climate models is also a data-intensive process, generating big geospatial data (model output), which demands large storage for managing the data and large computing power to process and analyze these data. This presentation introduces a novel framework to tackle the two challenges by 1) running climate models in a cloud environment in an automated fashion, and 2) managing and parallel processing big model output data by leveraging cloud computing technologies. A prototype system is developed based on the framework using ModelE as the climate model. Experiment results show that this framework can improve climate modeling in the research cycle by accelerating big data generation (model simulation), big data management (storage and processing) and on-demand big data analytics.
A model for computing at the SSC (Superconducting Super Collider)
Baden, D. (Dept. of Physics); Grossman, R. (Lab. for Advanced Computing)
1990-06-01
High energy physics experiments at the Superconducting Super Collider (SSC) will show a substantial increase in complexity and cost over existing forefront experiments, and computing needs may no longer be met via simple extrapolations from the previous experiments. We propose a model for computing at the SSC based on technologies common in private industry involving both hardware and software. 11 refs., 1 fig.
Predictive Computational Modeling of Chromatin Folding
NASA Astrophysics Data System (ADS)
di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.
In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.
Predictive Models and Computational Toxicology (II IBAMTOX)
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Computational Modeling of Healthy Myocardium in Diastole.
Nikou, Amir; Dorsey, Shauna M; McGarvey, Jeremy R; Gorman, Joseph H; Burdick, Jason A; Pilla, James J; Gorman, Robert C; Wenk, Jonathan F
2016-04-01
In order to better understand the mechanics of the heart and its disorders, engineers increasingly make use of the finite element method (FEM) to investigate healthy and diseased cardiac tissue. However, FEM is only as good as the underlying constitutive model, which remains a major challenge to the biomechanics community. In this study, a recently developed structurally based constitutive model was implemented to model healthy left ventricular myocardium during passive diastolic filling. This model takes into account the orthotropic response of the heart under loading. In-vivo strains were measured from magnetic resonance images (MRI) of porcine hearts, along with synchronous catheterization pressure data, and used for parameter identification of the passive constitutive model. Optimization was performed by minimizing the difference between MRI measured and FE predicted strains and cavity volumes. A similar approach was followed for the parameter identification of a widely used phenomenological constitutive law, which is based on a transversely isotropic material response. Results indicate that the parameter identification with the structurally based constitutive law is more sensitive to the assigned fiber architecture and the fit between the measured and predicted strains is improved with more realistic sheet angles. In addition, the structurally based model is capable of generating a more physiological end-diastolic pressure-volume relationship in the ventricle. PMID:26215308
Enhanced absorption cycle computer model. Final report
Grossman, G.; Wilk, M.
1993-09-01
Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of the cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the User's Manual.
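The modular architecture described above, unit subroutines supplying governing-equation residuals that a generic solver drives to zero, can be sketched in a few lines. The two toy units and all numbers below are assumptions chosen for illustration; they are not the report's working-fluid models.

```python
# Minimal sketch of a modular steady-state cycle solver: each "unit
# subroutine" returns the residual of its energy balance, and the assembled
# system is solved simultaneously for the state points.
import numpy as np
from scipy.optimize import fsolve

m_cp = 4.18     # kW/K, flow rate times specific heat (assumed)
Q_heat = 50.0   # kW, heater duty (assumed)
UA = 2.0        # kW/K, cooler conductance (assumed)
T_amb = 300.0   # K, ambient temperature (assumed)

def heater(T_in, T_out):
    # Energy balance: the duty raises the stream temperature.
    return m_cp * (T_out - T_in) - Q_heat

def cooler(T_in, T_out):
    # Energy balance against ambient, using the mean stream temperature.
    return m_cp * (T_in - T_out) - UA * (0.5 * (T_in + T_out) - T_amb)

def system(x):
    T1, T2 = x                      # the two state points of the loop
    return [heater(T1, T2),         # heater: T1 -> T2
            cooler(T2, T1)]         # cooler: T2 -> T1, closing the loop

T1, T2 = fsolve(system, x0=[300.0, 350.0])
print(f"T1 = {T1:.1f} K, T2 = {T2:.1f} K")
```

Adding a new unit type in this design means adding one residual function and its connections, which is the flexibility the report attributes to the unit-subroutine structure.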
COMPUTATIONAL MODELING OF TCDD DISRUPTION OF B CELL TERMINAL DIFFERENTIATION
In this study, we established a computational model describing the molecular circuit underlying B cell terminal differentiation and how TCDD may affect this process by impinging upon various molecular targets.
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
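Of the two techniques named, the proper orthogonal decomposition lends itself to a compact sketch: snapshots of a full-order system are compressed with an SVD, and the dynamics are projected onto the leading modes. The toy linear system below is an assumption chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, r = 200, 60, 5                   # full dim, snapshots, reduced dim
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable toy dynamics
x = rng.standard_normal(n)

snapshots = []
for _ in range(steps):                     # collect full-order snapshots
    x = x + 0.01 * (A @ x)                 # explicit Euler step
    snapshots.append(x.copy())
X = np.column_stack(snapshots)

U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                             # POD basis: leading left singular vectors
Ar = Phi.T @ A @ Phi                       # Galerkin-projected reduced operator

xr = Phi.T @ snapshots[0]                  # reduced initial condition
for _ in range(steps - 1):
    xr = xr + 0.01 * (Ar @ xr)             # cheap reduced-order time stepping
err = np.linalg.norm(Phi @ xr - snapshots[-1]) / np.linalg.norm(snapshots[-1])
print(f"relative ROM error with {r} modes: {err:.2e}")
```

The reduced system has r equations instead of n, which is the source of the efficiency gains the paper reports once the one-time cost of building the basis is paid.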
Computer models and output, Spartan REM: Appendix B
NASA Technical Reports Server (NTRS)
Marlowe, D. S.; West, E. J.
1984-01-01
A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.
Computational Modeling, Formal Analysis, and Tools for Systems Biology
Bartocci, Ezio; Lió, Pietro
2016-01-01
As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest in executable models and their analysis within systems biology has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science. PMID:26795950
Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.
ERIC Educational Resources Information Center
Baker, Richard
A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…
Computational model of miniature pulsating heat pipes.
Martinez, Mario J.; Givler, Richard C.
2013-01-01
The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP), a planar device that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat-plate pulsating heat pipes (e.g. dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.
Comprehension of simple quantifiers: empirical evaluation of a computational model.
Szymanik, Jakub; Zajenkowski, Marcin
2010-04-01
We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies as well as provides evidence for the claim that human linguistic abilities are constrained by computational complexity. PMID:21564222
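The automata-theoretic distinction at issue can be made concrete in a few lines: Aristotelian quantifiers like "every" are verifiable by a two-state finite automaton scanning the objects once, whereas proportional quantifiers like "more than half" require an unbounded counter (push-down power). The encoding of a scene as a sequence of booleans is our own simplification.

```python
def every(scene):
    state = True                    # finite state: "no counterexample seen yet"
    for is_P in scene:
        if not is_P:
            state = False           # one bit of memory suffices
    return state

def more_than_half(scene):
    count = 0                       # unbounded counter: beyond finite-state
    for is_P in scene:
        count += 1 if is_P else -1
    return count > 0

scene = [True, True, False, True]   # e.g., "cars that are yellow"
print(every(scene))                 # False
print(more_than_half(scene))        # True
```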
Cogeneration computer model assessment: Advanced cogeneration research study
NASA Technical Reports Server (NTRS)
Rosenberg, L.
1983-01-01
Cogeneration computer simulation models were assessed in order to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.
Computational Models for Mechanics of Morphogenesis
Wyczalkowski, Matthew A.; Chen, Zi; Filas, Benjamen A.; Varner, Victor D.; Taber, Larry A.
2012-01-01
In the developing embryo, tissues differentiate, deform, and move in an orchestrated manner to generate various biological shapes driven by the complex interplay between genetic, epigenetic, and environmental factors. Mechanics plays a key role in regulating and controlling morphogenesis, and quantitative models help us understand how various mechanical forces combine to shape the embryo. Models allow for the quantitative, unbiased testing of physical mechanisms, and when used appropriately, can motivate new experimental directions. This knowledge benefits biomedical researchers who aim to prevent and treat congenital malformations, as well as engineers working to create replacement tissues in the laboratory. In this review, we first give an overview of fundamental mechanical theories for morphogenesis, and then focus on models for specific processes, including pattern formation, gastrulation, neurulation, organogenesis, and wound healing. The role of mechanical feedback in development is also discussed. Finally, some perspectives are given on the emerging challenges in morphomechanics and mechanobiology. PMID:22692887
Computational social network modeling of terrorist recruitment.
Berry, Nina M.; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.
2004-10-01
The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.
Computer Models Simulate Fine Particle Dispersion
NASA Technical Reports Server (NTRS)
2010-01-01
Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.
Probabilistic computer model of optimal runway turnoffs
NASA Technical Reports Server (NTRS)
Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.
1985-01-01
Landing delays are currently a problem at major air carrier airports and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category is defined. The model includes an algorithm for lateral ride comfort limits.
Practical Use of Computationally Frugal Model Analysis Methods.
Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen
2016-03-01
Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1,000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and obtain greater scientific insight from ongoing and future modeling efforts. PMID:25810333
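One concrete example of a "computationally frugal" analysis in the spirit described above is the Morris elementary-effects method, which ranks parameter influence with on the order of r*(k+1) model runs for k parameters; the choice of Morris and the toy test model below are our own, not the paper's groundwater applications.

```python
import numpy as np

def model(p):                       # stand-in for an expensive environmental model
    return p[0] ** 2 + 2.0 * p[1] + 0.1 * np.sin(p[2])

k, r, delta = 3, 10, 0.1            # parameters, trajectories, step size
rng = np.random.default_rng(1)
effects = [[] for _ in range(k)]

for _ in range(r):                  # r randomized one-at-a-time trajectories
    p = rng.uniform(0.0, 1.0, size=k)
    y = model(p)
    for i in rng.permutation(k):    # perturb each parameter once
        p[i] += delta
        y_new = model(p)
        effects[i].append((y_new - y) / delta)  # elementary effect
        y = y_new

for i in range(k):                  # total cost: r*(k+1) = 40 model runs
    ee = np.abs(effects[i])
    print(f"parameter {i}: mean |EE| = {ee.mean():.3f}")
```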
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent processor interprocessor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
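The data-parallel idea behind such mappings can be emulated in a toy numpy sketch: each of P "processors" holds a shard of the training examples, computes local backpropagation gradients, and the only communication is a global summation of gradients. The network size, data, and single-layer architecture here are arbitrary assumptions, not the Los Alamos compiler's mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
P, n_in, n_out = 4, 8, 1                       # emulated processors, layer sizes
X = rng.standard_normal((64, n_in))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
W = rng.standard_normal((n_in, n_out)) * 0.1

def local_grad(Xs, ys, W):
    """Gradient of mean squared error for a single-layer sigmoid net shard."""
    z = 1.0 / (1.0 + np.exp(-Xs @ W))          # forward pass
    dz = (z - ys) * z * (1.0 - z)              # backpropagated error
    return Xs.T @ dz / len(Xs)

for step in range(200):
    shards = zip(np.array_split(X, P), np.array_split(y, P))
    grads = [local_grad(Xs, ys, W) for Xs, ys in shards]  # parallel in spirit
    W -= 0.5 * np.sum(grads, axis=0) / P       # the lone global summation step
pred = (1.0 / (1.0 + np.exp(-X @ W)) > 0.5)
print("training accuracy:", (pred == y.astype(bool)).mean())
```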
Trusting explanatory and exploratory models in computational geomorphology
NASA Astrophysics Data System (ADS)
Van De Wiel, Marco; Desjardins, Eric; Rousseau, Yannick; Martel, Tristan; Ashmore, Peter
2014-05-01
Computer simulations have become an increasingly important part of geomorphological investigation in the last decades. Simulations can be used not only to make specific predictions of the evolution of a geomorphic system (predictive modelling), but also to test theories and learn about geomorphic form and process in a timely and non-destructive way (explanatory and exploratory modelling). The latter modes of modelling can be very useful for discovering spatial and temporal patterns, developing insights into the relation between form and process, and for understanding the causal structure of the physical landscape. But before we can have any hope that these types of simulations can effectively accomplish these tasks, simulationists must make the case that their computer modelling goes beyond mere numerical computation of theoretical idealization; that geomorphic investigation through computer modelling can play a similar role as field observation or laboratory experiment. Many of these explanatory and exploratory models are reduced-complexity models which exhibit a high degree of idealization and simplification. Moreover, they are often uncalibrated and untested on real geomorphic systems. Indeed, they are often used on idealized hypothetical landscapes, and sometimes are acknowledged not to be suitable for simulation of real systems. Does it make sense then to conceive of this type of computer modelling as a form of investigation capable of providing reliable knowledge about actual geomorphological phenomena? In this analysis it is argued that the traditional notion of establishing reliability or trustworthiness of models, i.e. a confirmation of predictive ability with regard to observed data, is not applicable to explanatory or exploratory modelling. Instead, trustworthiness of these models is established through broad qualitative conformity with known system dynamics, followed by a posteriori field and laboratory testing of hypotheses generated from the modelling.
Computational Modeling Develops Ultra-Hard Steel
NASA Technical Reports Server (NTRS)
2007-01-01
Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.
Application of CDIO model in major courses for computer profession
NASA Astrophysics Data System (ADS)
Yang, Xiaowen; Han, Xie
2012-04-01
This paper introduces the structure of the CDIO model. An analysis of current problems in the teaching of computer science majors shows that reform of the major courses is needed, so the paper applies the CDIO model to the reform of a core course, operating systems. It integrates the teaching content of the operating systems course, adopts case-based, task-driven, and collaborative teaching methods, and improves the assessment system. Questionnaire results from teaching practice show that the model improves students' understanding of theory and, through experimental teaching, exercises their practical and innovation abilities. The improvement is evident.
Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization
NASA Astrophysics Data System (ADS)
Kamali, M.; Ponnambalam, K.; Soulis, E. D.
2007-07-01
In this approach, exploration of the cost function space was performed with an inexpensive surrogate function rather than the expensive original function. A Design and Analysis of Computer Experiments (DACE) surrogate function, a type of approximate model that uses a correlation function for the error term, was employed. Results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model were compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study was calibration of the WATCLASS hydrologic model on the Smokey River watershed.
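The surrogate-optimization loop itself is compact: fit a cheap approximation to a handful of expensive evaluations, search the approximation, and evaluate the true function only at promising points. The sketch below uses an RBF interpolator as a stand-in for the DACE (kriging) surrogate, and a toy 2-D cost function in place of the WATCLASS calibration problem; both substitutions are ours.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_cost(p):              # stand-in for a long hydrologic model run
    return (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(20, 2))       # initial space-filling design
vals = np.array([expensive_cost(p) for p in pts])

for it in range(5):                              # infill iterations
    surrogate = RBFInterpolator(pts, vals)       # cheap approximate model
    res = minimize(lambda p: surrogate(p[None])[0], x0=[0.5, 0.5],
                   bounds=[(0, 1), (0, 1)])      # search the surrogate only
    new = res.x + rng.normal(scale=1e-6, size=2) # jitter to avoid duplicate sites
    pts = np.vstack([pts, new])                  # evaluate the truth at the optimum
    vals = np.append(vals, expensive_cost(new))

print("best parameters found:", pts[np.argmin(vals)])
```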
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology have established the need for unified methods to evaluate computing systems' performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user-oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time-varying version of the model is developed to generalize the traditional fault-tree reliability evaluation methods of phased missions.
Scratch as a computational modelling tool for teaching physics
NASA Astrophysics Data System (ADS)
Lopez, Victor; Hernandez, Maria Isabel
2015-05-01
The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling programs. In this article, we briefly discuss why Scratch could be a useful tool for computational modelling in the primary or secondary physics classroom, and we present practical examples of how it can be used to build a model.
Computational modeling of nuclear thermal rockets
NASA Technical Reports Server (NTRS)
Peery, Steven D.
1993-01-01
The topics are presented in viewgraph form and include the following: rocket engine transient simulation (ROCETS) system; ROCETS performance simulations composed of integrated component models; ROCETS system architecture significant features; ROCETS engineering nuclear thermal rocket (NTR) modules; ROCETS system easily adapts Fortran engineering modules; ROCETS NTR reactor module; ROCETS NTR turbomachinery module; detailed reactor analysis; predicted reactor power profiles; turbine bypass impact on system; and ROCETS NTR engine simulation summary.
Computational modeling of nonlinear electromagnetic phenomena
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Taflove, Allen
1992-01-01
A new algorithm has been developed that permits, for the first time, the direct time integration of the full-vector nonlinear Maxwell's equations. This new capability permits the modeling of linear and nonlinear, instantaneous and dispersive effects in the electric polarization material media. Results are presented of first-time calculations in 1D of the propagation and collision of femtosecond electromagnetic solitons that retain the optical carrier.
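For orientation, the numerical setting that this work extends to the full-vector nonlinear, dispersive case can be sketched as a minimal 1-D Yee-scheme time integration of Maxwell's equations. Units are normalized (c = 1), and the linear vacuum case shown here is a deliberate simplification, not the paper's algorithm.

```python
import numpy as np

nz, nt = 400, 600
dz = 1.0
dt = 0.5 * dz                      # satisfies the Courant stability limit
Ex = np.zeros(nz)                  # electric field at integer grid points
Hy = np.zeros(nz - 1)              # magnetic field at half grid points

for n in range(nt):
    Hy += (dt / dz) * (Ex[1:] - Ex[:-1])            # Faraday's law update
    Ex[1:-1] += (dt / dz) * (Hy[1:] - Hy[:-1])      # Ampere's law update
    Ex[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source

print("peak |Ex| after propagation:", np.abs(Ex).max())
```

Nonlinear and dispersive media enter this scheme through the polarization term in the Ampere update, which is where the full-vector treatment reported above goes beyond the sketch.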
Computational modeling of leukocyte adhesion cascade (LAC)
NASA Astrophysics Data System (ADS)
Sarkar, Kausik
2005-11-01
In response to an inflammation in the body, leukocytes (white blood cells) interact with the endothelium (interior wall of blood vessel) through a series of steps--capture, rolling, adhesion and transmigration--critical for proper functioning of the immune system. We are numerically simulating this process using a Front-tracking finite-difference method. The viscoelasticity of the cell membrane, cytoplasm and nucleus is incorporated and allowed to change with time in response to the cell surface molecular chemistry. The molecular level forces due to specific ligand-receptor interactions are accounted for by a stochastic spring-peeling model. Even though leukocyte rolling has been investigated through various models, the transitioning through subsequent steps, specifically firm adhesion and transmigration through the endothelial layer, has not been modeled. The change of viscoelastic properties due to leukocyte activation is observed to play a critical role in mediating the transition from rolling to transmigration. We will provide details of our approach and discuss preliminary results.
Integrated Multiscale Modeling of Molecular Computing Devices
Gregory Beylkin
2012-03-23
Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.
A distributed computing model for telemetry data processing
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-01-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
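The hybrid protocol described above can be suggested in a small in-memory sketch: producers publish named telemetry parameters, and consumers subscribe either through a central broker (client-server) or directly to one another (peer-to-peer). The class names and the derived-value rule are illustrative assumptions; the real system distributes these roles across networked processes.

```python
from collections import defaultdict

class TelemetryBroker:
    """Central server half of the hybrid model."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, parameter, callback):
        self.subscribers[parameter].append(callback)

    def publish(self, parameter, value):
        for callback in self.subscribers[parameter]:
            callback(parameter, value)

class FlightControllerNode:
    """A peer that consumes broker data and also serves other peers directly."""
    def __init__(self, name):
        self.name, self.peers = name, []

    def on_data(self, parameter, value):
        derived = value * 2.0                   # stand-in "synthesized" value
        for peer in self.peers:                 # peer-to-peer fan-out
            peer.on_data(parameter + ".derived", derived)
        print(f"{self.name}: {parameter} = {value}")

broker = TelemetryBroker()
prop, guidance = FlightControllerNode("PROP"), FlightControllerNode("GNC")
prop.peers.append(guidance)                     # direct peer subscription
broker.subscribe("cabin_pressure", prop.on_data)
broker.publish("cabin_pressure", 14.7)
```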
Enabling Grid Computing resources within the KM3NeT computing model
NASA Astrophysics Data System (ADS)
Filippidis, Christos
2016-04-01
KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.
Computer assisted modeling of ethyl sulfate pharmacokinetics.
Schmitt, Georg; Halter, Claudia C; Aderjan, Rolf; Auwaerter, Volker; Weinmann, Wolfgang
2010-01-30
For 12 volunteers in a drinking experiment, the concentration-time courses of ethyl sulfate (EtS) and ethanol were simulated and fitted to the experimental data. The concentration-time courses were described with the same mathematical model as previously used for ethyl glucuronide (EtG). The kinetic model was based on the following assumptions and simplifications: a velocity constant k_form for the first-order formation of ethyl sulfate from ethanol and an exponential elimination constant k_el. The mean values (and standard deviations) obtained for k_form and k_el were 0.00052 h^-1 (0.00014) and 0.561 h^-1 (0.131), respectively. Using the ranges of these parameters it is possible to calculate minimum and maximum serum concentrations of EtS based on stated ethanol doses and drinking times. The comparison of calculated and measured concentrations can prove the plausibility of alleged ethanol consumption and add evidence to the retrospective calculation of ethanol concentrations based on EtG concentrations. PMID:19913378
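The stated kinetics integrate in a few lines: EtS is formed from ethanol with first-order constant k_form and eliminated with constant k_el. The ethanol curve itself is approximated below by a zero-order (Widmark-type) decline, which is our simplifying assumption, not part of the cited model; the peak concentration and elimination slope are likewise assumed.

```python
import numpy as np

k_form = 0.00052   # 1/h, mean formation constant from the study
k_el = 0.561       # 1/h, mean elimination constant from the study
beta = 0.15        # g/(kg*h), assumed zero-order ethanol elimination
c0 = 0.8           # g/kg, assumed peak ethanol concentration

dt, t_end = 0.01, 12.0
ets, peak, t_peak = 0.0, 0.0, 0.0
for t in np.arange(0.0, t_end, dt):
    ethanol = max(c0 - beta * t, 0.0)             # Widmark-style decline
    ets += dt * (k_form * ethanol - k_el * ets)   # explicit Euler step
    if ets > peak:
        peak, t_peak = ets, t
print(f"peak EtS proxy {peak:.2e} at t = {t_peak:.1f} h")
```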
Computational models for the nonlinear analysis of reinforced concrete plates
NASA Technical Reports Server (NTRS)
Hinton, E.; Rahman, H. H. A.; Huq, M. M.
1980-01-01
A finite element computational model for the nonlinear analysis of reinforced concrete solid, stiffened and cellular plates is briefly outlined. Typically, Mindlin elements are used to model the plates whereas eccentric Timoshenko elements are adopted to represent the beams. The layering technique, common in the analysis of reinforced concrete flexural systems, is incorporated in the model. The proposed model provides an inexpensive and reasonably accurate approach which can be extended for use with voided plates.
Computer model of cardiovascular control system responses to exercise
NASA Technical Reports Server (NTRS)
Croston, R. C.; Rummel, J. A.; Kay, F. J.
1973-01-01
Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.
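In the spirit of the block-diagram circulatory model described above, a single arterial segment can be sketched as a two-element Windkessel, where compliance C and peripheral resistance R shape the pressure response to pulsatile inflow from the heart. The parameter values and flow waveform below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

C = 1.2      # mL/mmHg, arterial compliance (assumed)
R = 1.0      # mmHg*s/mL, peripheral resistance (assumed)
HR = 60.0    # beats per minute (assumed)

dt, t_end = 0.001, 10.0
t = np.arange(0.0, t_end, dt)
period = 60.0 / HR
phase = (t % period) / period
# Pulsatile inflow: half-sine systolic ejection, zero flow in diastole.
q_in = np.where(phase < 0.35, 420.0 * np.sin(np.pi * phase / 0.35), 0.0)

p = np.zeros_like(t)
p[0] = 80.0                                   # mmHg initial pressure
for n in range(len(t) - 1):
    dpdt = (q_in[n] - p[n] / R) / C           # C dp/dt = q_in - p/R
    p[n + 1] = p[n] + dt * dpdt

half = len(t) // 2                            # discard the initial transient
print(f"systolic/diastolic ~ {p[half:].max():.0f}/{p[half:].min():.0f} mmHg")
```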
A cognitive model for problem solving in computer science
NASA Astrophysics Data System (ADS)
Parham, Jennifer R.
According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in solving them. Approaching assessment from this perspective would reveal potential errors leading to incorrect solutions. This dissertation proposes a model describing how people solve computational problems by storing, retrieving, and manipulating information and knowledge. It describes how metacognition interacts with schemata representing conceptual and procedural knowledge, as well as with the external sources of information that might be needed to arrive at a solution. Metacognition includes higher-order, executive processes responsible for controlling and monitoring schemata, which in turn represent the algorithmic knowledge needed for organizing and adapting concepts to a specific domain. The model illustrates how metacognitive processes interact with the knowledge represented by schemata as well as the information from external sources. This research investigates the differences in the way computer science novices use their metacognition and schemata to solve a computer programming problem. After J. Parham and L. Gugerty reached an 85% reliability for six metacognitive processes and six domain-specific schemata for writing a computer program, the resulting vocabulary provided the foundation for supporting the existence of and the interaction between metacognition, schemata, and external sources of information in computer programming. Overall, the participants in this research used their schemata 6% more than their metacognition, employing metacognitive processes to control and monitor the schemata used to write a computer program. This research has potential implications in computer science education and software
Revisions to the hydrogen gas generation computer model
Jerrell, J.W.
1992-08-31
Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within the transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continues to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by this data is used to improve the accuracy of the model's predictions. Also, several modifications to the model have been made to enlarge the scope of problems which can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.
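A toy flux-balance calculation conveys the kind of steady-state result such a model produces: hydrogen generated inside nested enclosures escapes through a series of diffusion barriers, and the mole fraction rises by G/D across each one. All rates and conductances below are illustrative assumptions, not validated H2GAS inputs.

```python
G = 1.0e-6          # mol/s, hydrogen generation rate (assumed)
conductances = {    # mol/s per unit mole-fraction difference (assumed)
    "inner bag": 5.0e-5,
    "outer bag": 8.0e-5,
    "drum filter vent": 2.0e-5,
}

# At steady state the same molar flow G crosses every barrier, so starting
# from ~0 outside the drum, the mole fraction rises by G/D at each layer.
x = 0.0
for name, D in reversed(list(conductances.items())):
    x += G / D
    print(f"H2 mole fraction just inside {name}: {100 * x:.2f}%")
```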
A Perspective on Computational Human Performance Models as Design Tools
NASA Technical Reports Server (NTRS)
Jones, Patricia M.
2010-01-01
The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.
A propagation model of computer virus with nonlinear vaccination probability
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi
2014-01-01
This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
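A generic sketch of this model class, a compartment model of virus spread in which susceptible nodes are vaccinated with a probability that depends nonlinearly on the infected fraction, is given below. The specific functional forms and rates are our assumptions, not the paper's equations.

```python
beta, gamma, mu = 0.5, 0.2, 0.05    # infection, cure, and turnover rates (assumed)

def vacc(I):                        # nonlinear (saturating) vaccination probability
    return 0.3 * I / (0.1 + I)

S, I, V = 0.95, 0.05, 0.0           # susceptible, infected, vaccinated fractions
dt = 0.01
for _ in range(int(200 / dt)):      # integrate to (near) equilibrium
    dS = mu - beta * S * I - vacc(I) * S - mu * S + gamma * I
    dI = beta * S * I - gamma * I - mu * I
    dV = vacc(I) * S - mu * V
    S, I, V = S + dt * dS, I + dt * dI, V + dt * dV

print(f"equilibrium fractions: S={S:.3f}, I={I:.3f}, V={V:.3f}")
```

With these rates the basic reproduction number beta/(gamma + mu) exceeds one, so the run settles on a viral equilibrium; lowering beta or strengthening vacc() drives it to the virus-free state, mirroring the stability dichotomy the paper proves.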
Computational Neuroscience: Modeling the Systems Biology of Synaptic Plasticity
Kotaleski, Jeanette Hellgren; Blackwell, Kim T.
2016-01-01
Synaptic plasticity is a mechanism proposed to underlie learning and memory. The complexity of the interactions between ion channels, enzymes, and genes involved in synaptic plasticity impedes a deep understanding of this phenomenon. Computer modeling is an approach to investigate the information processing that is performed by signaling pathways underlying synaptic plasticity. In the past few years, new software developments that blend computational neuroscience techniques with systems biology techniques have allowed large-scale, quantitative modeling of synaptic plasticity in neurons. We highlight significant advancements produced by these modeling efforts and introduce promising approaches that utilize advancements in live cell imaging. PMID:20300102
Optical computing based on neuronal models
NASA Astrophysics Data System (ADS)
Farhat, Nabil H.
1987-10-01
Ever since the fit between what neural net models can offer (collective, iterative, nonlinear, robust, and fault-tolerant approach to information processing) and the inherent capabilities of optics (parallelism and massive interconnectivity) was first pointed out and the first optical associative memory demonstrated in 1985, work and interest in neuromorphic optical signal processing has been growing steadily. For example, work in optical associative memories is currently being conducted at several academic institutions (e.g., California Institute of Technology, University of Colorado, University of California-San Diego, Stanford University, University of Rochester, and the author's own institution the University of Pennsylvania) and at several industrial and governmental laboratories (e.g., Hughes Research Laboratories - Malibu, the Naval Research Laboratory, and the Jet Propulsion Laboratory). In these efforts, in addition to the vector matrix multiplication with thresholding and feedback scheme utilized in early implementations, an arsenal of sophisticated optical tools such as holographic storage, phase conjugate optics, and wavefront modulation and mixing are being drawn on to realize associative memory functions.
A computer model for predicting grapevine cold hardiness
Technology Transfer Automated Retrieval System (TEKTRAN)
We developed a robust computer model of grapevine bud cold hardiness that will aid in the anticipation of and response to potential injury from fluctuations in winter temperature and from extreme cold events. The model uses time steps of 1 day along with the measured daily mean air temperature to ca...
Industry-Wide Workshop on Computational Turbulence Modeling
NASA Technical Reports Server (NTRS)
Shabbir, Aamir (Compiler)
1995-01-01
This publication contains the presentations made at the Industry-Wide Workshop on Computational Turbulence Modeling which took place on October 6-7, 1994. The purpose of the workshop was to initiate the transfer of technology developed at Lewis Research Center to industry and to discuss the current status and the future needs of turbulence models in industrial CFD.
SHAWNEE FLUE GAS DESULFURIZATION COMPUTER MODEL USERS MANUAL
The manual describes a Shawnee flue gas desulfurization (FGD) computer model and gives detailed instructions for its use. The model, jointly developed by Bechtel National, Inc. and TVA (in conjunction with the EPA-sponsored Shawnee test program), is capable of projecting prelimin...
Computer Modelling of Biological Molecules: Free Resources on the Internet.
ERIC Educational Resources Information Center
Millar, Neil
1996-01-01
Describes a three-dimensional computer modeling system for biological molecules which is suitable for sixth-form teaching. Consists of the modeling program "RasMol" together with structure files of proteins, DNA, and small biological molecules. Describes how the whole system can be downloaded from various sites on the Internet. (Author/JRH)
Modeling civil violence: An agent-based computational approach
Epstein, Joshua M.
2002-01-01
This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
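The decision rule at the heart of the first variant is compact enough to sketch: an agent rebels when grievance minus perceived risk exceeds a threshold, with grievance as hardship weighted by illegitimacy and the estimated arrest probability rising with the cop-to-active ratio. The parameter values and the simple global (rather than vision-limited) ratio below are our simplifications of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, cops, k, T, L = 500, 40, 2.3, 0.1, 0.6     # agents, cops, constants (assumed)
hardship = rng.uniform(0.0, 1.0, N)           # H ~ U(0, 1)
risk_aversion = rng.uniform(0.0, 1.0, N)      # R ~ U(0, 1)
active = np.zeros(N, dtype=bool)

for step in range(50):
    grievance = hardship * (1.0 - L)          # G = H * (1 - L)
    ratio = cops / max(active.sum(), 1)       # cop-to-active ratio
    p_arrest = 1.0 - np.exp(-k * ratio)       # estimated arrest probability
    net_risk = risk_aversion * p_arrest       # N = R * P
    active = grievance - net_risk > T         # threshold rule: G - N > T

print(f"active agents at equilibrium: {active.sum()} of {N}")
```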
The Problem Cycle: A Model for Computer Education.
ERIC Educational Resources Information Center
Lloyd, Margaret
1998-01-01
Presents a model based on a bicycle metaphor for teaching problem solving and computer-mediated solutions, discusses its implications for education, and lists ten strategies applied using the model. Provides a unit plan on using spreadsheets to explore Australian convict history, outlining establishment, development, interpretation, and…
LIME SPRAY DRYER FLUE GAS DESULFURIZATION COMPUTER MODEL USERS MANUAL
The report describes a lime spray dryer/baghouse (FORTRAN) computer model that simulates SO2 removal and permits study of related impacts on design and economics as functions of design parameters and operating conditions for coal-fired electric generating units. The model allows ...
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
ERIC Educational Resources Information Center
Pallant, Amy; Lee, Hee-Sun
2015-01-01
Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation…
Computational 3-D Model of the Human Respiratory System
We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...
Operation of the computer model for microenvironment solar exposure
NASA Technical Reports Server (NTRS)
Gillis, J. R.; Bourassa, R. J.; Gruenbaum, P. E.
1995-01-01
A computer model for microenvironmental solar exposure was developed to predict solar exposure to satellite surfaces which may shadow or reflect on one another. This document describes the technical features of the model as well as instructions for the installation and use of the program.
Computational science: shifting the focus from tools to models
Hinsen, Konrad
2014-01-01
Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728
Computational fluid dynamics modeling for emergency preparedness & response
Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.
1995-07-01
Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly-coupled, nonlinear equations whose solution requires a significant level of computational power. Until recently, such computer power could be found only in CRAY-class supercomputers. Recent advances in computer hardware and software have enabled modestly-priced, high-performance workstations to exhibit the equivalent computation power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release and Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust and require minimal time for execution. Such models have been the cornerstones of the ARAC operational system for the past ten years. Kamada (1992) provides a review of diagnostic models and their applications to dispersion problems. However, because these models typically contain little physics beyond mass-conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, can be applied with confidence to only modestly complex flows.
NASA Astrophysics Data System (ADS)
Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2016-04-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
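The idea behind sequential screening can be conveyed schematically: parameters whose elementary effects on the model output stay small are frozen at default values, so later analyses vary only the informative ones. The toy model, threshold, and stopping rule below are our assumptions, not the published algorithm; note that the total cost lands near the roughly 10-times-the-number-of-parameters budget quoted above.

```python
import numpy as np

def model(p):                                  # stand-in for mHM
    return 3.0 * p[0] + 0.5 * p[3] ** 2 + 0.001 * p[1]

k, delta, threshold = 5, 0.05, 0.01
rng = np.random.default_rng(2)
informative = list(range(k))

for sweep in range(10):                        # sequential sweeps
    base = rng.uniform(0.0, 1.0, k)
    y0 = model(base)
    keep = []
    for i in informative:                      # one model run per candidate
        p = base.copy()
        p[i] += delta
        if abs(model(p) - y0) / delta > threshold:
            keep.append(i)                     # still informative
    if keep == informative:                    # converged: the set is stable
        break
    informative = keep

print("informative parameters:", informative)
```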
Computer modeling of events in the inner magnetosphere
NASA Technical Reports Server (NTRS)
Harel, M.; Wolf, R. A.; Reiff, P. H.; Smiddy, M.
1979-01-01
The first effort at computer simulation of the behavior of the inner magnetosphere during a substorm-type event, on 19 September 1976, was completed. The computer model simulates many aspects of the behavior of the closed-field-line portion of the earth's magnetosphere and of the auroral and subauroral ionosphere. For these regions, the program self-consistently computes electric fields, electric currents, hot-plasma densities, plasma flow velocities, and other parameters. Highlights of the results of our event simulation are presented. Predicted electric fields for several times during the event agree reasonably well with corresponding data from satellite S3-2. A detailed discussion is presented for a case of rapid subauroral flow that was observed on one S3-2 pass and is predicted by the computer runs. The computed global distribution of Birkeland current agrees reasonably well with the observations of Iijima and Potemra.
Integrated Multiscale Modeling of Molecular Computing Devices
Weinan E
2012-03-29
The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wave functions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.
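To make the Fermi operator expansion concrete, here is a minimal sketch of the plain (non-pole) Chebyshev FOE for a toy Hamiltonian; the matrix, temperature, and expansion order are illustrative, and the pole-expansion variant the abstract mentions converges faster than this:

```python
import numpy as np

def fermi_operator(H, mu, kT, order=200):
    """Approximate f(H) = 1/(1 + exp((H - mu)/kT)) by a Chebyshev series."""
    lo, hi = np.linalg.eigvalsh(H)[[0, -1]]          # spectral bounds
    a, b = (hi - lo) / 2, (hi + lo) / 2
    Hs = (H - b * np.eye(len(H))) / a                # spectrum scaled into [-1, 1]
    theta = np.pi * (np.arange(order) + 0.5) / order
    f = 1.0 / (1.0 + np.exp((a * np.cos(theta) + b - mu) / kT))
    c = 2.0 / order * np.cos(np.outer(np.arange(order), theta)) @ f
    c[0] /= 2.0                                      # Chebyshev coefficients of f
    Tkm1, Tk = np.eye(len(H)), Hs.copy()             # three-term recurrence on H
    out = c[0] * Tkm1 + c[1] * Tk
    for k in range(2, order):
        Tkm1, Tk = Tk, 2 * Hs @ Tk - Tkm1
        out += c[k] * Tk
    return out

H = np.diag([-2.0, -1.0, 0.5, 2.0])                  # toy Hamiltonian
P = fermi_operator(H, mu=0.0, kT=0.1)
print(np.round(np.diag(P), 4))                       # occupations ~ 1, 1, 0.007, 0
```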
A sticker-based model for DNA computation.
Roweis, S; Winfree, E; Burgoyne, R; Chelyapov, N V; Goodman, M F; Rothemund, P W; Adleman, L M
1998-01-01
We introduce a new model of molecular computation that we call the sticker model. Like many previous proposals it makes use of DNA strands as the physical substrate in which information is represented and of separation by hybridization as a central mechanism. However, unlike previous models, the stickers model has a random access memory that requires no strand extension and uses no enzymes; also (at least in theory), its materials are reusable. The paper describes computation under the stickers model and discusses possible means for physically implementing each operation. Finally, we go on to propose a specific machine architecture for implementing the stickers model as a microprocessor-controlled parallel robotic workstation. In the course of this development a number of previous general concerns about molecular computation (Smith, 1996; Hartmanis, 1995; Linial et al., 1995) are addressed. First, it is clear that general-purpose algorithms can be implemented by DNA-based computers, potentially solving a wide class of search problems. Second, we find that there are challenging problems, for which only modest volumes of DNA should suffice. Third, we demonstrate that the formation and breaking of covalent bonds is not intrinsic to DNA-based computation. Fourth, we show that a single essential biotechnology, sequence-specific separation, suffices for constructing a general-purpose molecular computer. Concerns about errors in this separation operation and means to reduce them are addressed elsewhere (Karp et al., 1995; Roweis and Winfree, 1999). Despite these encouraging theoretical advances, we emphasize that substantial engineering challenges remain at almost all stages and that the ultimate success or failure of DNA computing will certainly depend on whether these challenges can be met in laboratory investigations. PMID:10072080
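A toy in-silico rendition of the sticker model's basic operations (combine, separate, set, clear) may help fix ideas: each memory complex is a tuple of bits (1 = sticker annealed) and a tube is a multiset of complexes. The encoding and example are ours, not the paper's:

```python
from collections import Counter

def combine(t1, t2):                      # pour two tubes together
    return t1 + t2

def separate(tube, i):                    # split on presence of sticker i
    on  = Counter({c: n for c, n in tube.items() if c[i] == 1})
    off = Counter({c: n for c, n in tube.items() if c[i] == 0})
    return on, off

def set_sticker(tube, i):                 # anneal sticker i to every complex
    out = Counter()
    for c, n in tube.items():
        out[c[:i] + (1,) + c[i + 1:]] += n
    return out

def clear_sticker(tube, i):               # strip sticker i from every complex
    out = Counter()
    for c, n in tube.items():
        out[c[:i] + (0,) + c[i + 1:]] += n
    return out

# Start with one copy of every 3-bit pattern, keep complexes with bit 0 on,
# and set bit 2 on the rest -- a tiny "program" in the sticker style.
tube = Counter((a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
on, off = separate(tube, 0)
result = combine(on, set_sticker(off, 2))
print(sorted(result.elements()))
```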
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
COMPUTATIONAL MODELING ISSUES IN NEXT GENERATION AIR QUALITY MODELS
EPA's Atmospheric Research and Exposure Assessment Laboratory is leading a major effort to advance urban/regional multi-pollutant air quality modeling through development of a third-generation modeling system, Models-3. The Models-3 system is being developed within a high-performa...
Computational fluid dynamics modeling for emergency preparedness and response
Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.
1995-02-01
Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly coupled equations whose solution requires a significant level of computational power. Recent advances in computer hardware and software have enabled modestly priced, high-performance workstations to exhibit the equivalent computational power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust, and require minimal time for execution. Because these models typically contain little physics beyond mass conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, can be applied with confidence to only modestly complex flows. We are now embarking on a development program to incorporate prognostic models to generate, in real time, the meteorological fields for the dispersion models. In contrast to diagnostic models, prognostic models are physically based and are capable of incorporating many physical processes to treat highly complex flow scenarios.
Computational modelling of memory retention from synapse to behaviour
NASA Astrophysics Data System (ADS)
van Rossum, Mark C. W.; Shippi, Maria
2013-03-01
One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.
Identification of Computational and Experimental Reduced-Order Models
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.
2003-01-01
The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
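The step of transforming impulse responses into state-space form can be carried out with the Eigensystem Realization Algorithm (ERA), a standard identification technique; the sketch below, with a toy scalar impulse response, illustrates that step, though the exact procedure used with CFL3Dv6.0 may differ in detail:

```python
import numpy as np

def era(markov, order):
    """markov: Markov parameters h1, h2, ...; returns a discrete (A, B, C)."""
    m = len(markov) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])      # Hankel of h
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])  # shifted Hankel
    U, s, Vt = np.linalg.svd(H0)
    Sr = np.diag(np.sqrt(s[:order]))
    Ur, Vr = U[:, :order], Vt[:order, :]
    A = np.linalg.inv(Sr) @ Ur.T @ H1 @ Vr.T @ np.linalg.inv(Sr)
    B = (Sr @ Vr)[:, :1]
    C = (Ur @ Sr)[:1, :]
    return A, B, C

# Identify a damped oscillator from its impulse response, then re-predict it.
h = [0.9 ** k * np.cos(0.5 * k) for k in range(1, 41)]
A, B, C = era(h, order=2)
x, pred = B.copy(), []
for _ in range(10):
    pred.append((C @ x).item())
    x = A @ x
print(np.allclose(pred, h[:10], atol=1e-8))   # the ROM reproduces the response
```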
Special Issue: Big data and predictive computational modeling
NASA Astrophysics Data System (ADS)
Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.
2016-09-01
The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.
A computational approach to developing mathematical models of polyploid meiosis.
Rehmsmeier, Marc
2013-04-01
Mathematical models of meiosis that relate offspring to parental genotypes through parameters such as meiotic recombination frequency have been difficult to develop for polyploids. Existing models have limitations with respect to their analytic potential, their compatibility with insights into mechanistic aspects of meiosis, and their treatment of model parameters in terms of parameter dependencies. In this article I put forward a computational approach to the probabilistic modeling of meiosis. A computer program enumerates all possible paths through the phases of replication, pairing, recombination, and segregation, while keeping track of the probabilities of the paths according to the various parameters involved. Probabilities for classes of genotypes or phenotypes are added, and the resulting formulas are simplified by the symbolic-computation system Mathematica. An example application to autotetraploids results in a model that remedies the limitations of previous models mentioned above. In addition to the immediate implications, the computational approach presented here can be expected to be useful through opening avenues for modeling a host of processes, including meiosis in higher-order ploidies. PMID:23335332
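The enumeration-plus-symbolic-simplification idea can be shown in miniature; the two "phases" below are invented and far simpler than real tetraploid meiosis, but they show how path probabilities accumulate and then simplify symbolically:

```python
import sympy as sp

r = sp.symbols('r')  # symbolic recombination frequency
# Each phase maps a state to a list of (next_state, branch probability).
phases = [
    lambda g: [(g, 1 - r), (g[::-1], r)],                              # "recombination"
    lambda g: [(g[0], sp.Rational(1, 2)), (g[1], sp.Rational(1, 2))],  # "segregation"
]

def enumerate_paths(state, remaining, prob, out):
    """Walk every path through the phases, accumulating symbolic probabilities."""
    if not remaining:
        out[state] = out.get(state, 0) + prob
        return
    for nxt, p in remaining[0](state):
        enumerate_paths(nxt, remaining[1:], prob * p, out)

classes = {}
enumerate_paths(('A', 'b'), phases, sp.Integer(1), classes)
print({k: sp.simplify(v) for k, v in classes.items()})
# -> {'A': 1/2, 'b': 1/2}: r cancels, since this toy recombination only swaps order.
```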
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
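As one example of a frugal method, dimensionless scaled local sensitivities can be obtained from central differences at a cost of two runs per parameter; the helper and toy model below are our own illustration:

```python
import numpy as np

def scaled_sensitivities(model, p, rel_step=0.01):
    """Central-difference sensitivities dy/dp_i * p_i (2 runs per parameter)."""
    p = np.asarray(p, float)
    s = np.empty_like(p)
    for i, dp in enumerate(rel_step * np.abs(p)):
        up, dn = p.copy(), p.copy()
        up[i] += dp
        dn[i] -= dp
        s[i] = (model(up) - model(dn)) / (2 * dp) * p[i]
    return s

model = lambda p: p[0] * np.exp(-p[1] * 2.0) + p[2]   # toy response at t = 2
print(scaled_sensitivities(model, [5.0, 0.3, 1.0]))
```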
The GOURD model of human-computer interaction
Goldbogen, G.
1996-12-31
This paper presents a model, the GOURD model, that can be used to measure the goodness of "interactivity" of an interface design and indicates how to improve the design. The GOURD model describes what happens to the computer and to the human during a human-computer interaction. Since the interaction is generally repeated, repeated traversal of the model is similar to a loop programming structure. Because the model measures interaction over part or all of the application, it can also be used as a classifier of the part or the whole application. But primarily, the model is used as a design guide and a predictor of effectiveness.
A Whole-Cell Computational Model Predicts Phenotype from Genotype
Karr, Jonathan R.; Sanghvi, Jayodita C.; Macklin, Derek N.; Gutschow, Miriam V.; Jacobs, Jared M.; Bolival, Benjamin; Assad-Garcia, Nacyra; Glass, John I.; Covert, Markus W.
2012-01-01
Understanding how complex phenotypes arise from individual molecules and their interactions is a primary challenge in biology that computational approaches are poised to tackle. We report a whole-cell computational model of the life cycle of the human pathogen Mycoplasma genitalium that includes all of its molecular components and their interactions. An integrative approach to modeling that combines diverse mathematics enabled the simultaneous inclusion of fundamentally different cellular processes and experimental measurements. Our whole-cell model accounts for all annotated gene functions and was validated against a broad range of data. The model provides insights into many previously unobserved cellular behaviors, including in vivo rates of protein-DNA association and an inverse relationship between the durations of DNA replication initiation and replication rates. In addition, experimental analysis directed by model predictions identified previously undetected kinetic parameters and biological functions. We conclude that comprehensive whole-cell models can be used to facilitate biological discovery. PMID:22817898
Using GPUs to Meet Next Generation Weather Model Computational Requirements
NASA Astrophysics Data System (ADS)
Govett, M.; Hart, L.; Henderson, T.; Middlecoff, J.; Tierney, C.
2008-12-01
Weather prediction goals within the Earth Science Research Laboratory at NOAA require significant increases in model resolution (~1 km) and forecast durations (~60 days) to support expected requirements in 5 years or less. However, meeting these goals will likely require at least 100k dedicated cores. Few systems will exist that could even run such a large problem, much less house a facility that could provide the necessary power and cooling requirements. To meet our goals we are exploring alternative technologies, including Graphics Processing Units (GPU), that could provide significantly more computational performance and reduced power and cooling requirements, at a lower cost than traditional high-performance computing solutions. Our current global numerical weather prediction model, the Flow-following finite-volume Icosahedral Model (FIM, http://fim.noaa.gov), is still early in its development but is already demonstrating good fidelity and excellent scalability to 1000s of cores. The icosahedral grid has several complexities not present in more traditional Cartesian grids, including polygons with different numbers of sides (five and six) and non-trivial computation of locations of neighboring grid cells. FIM uses an indirect addressing scheme that yields very compact code despite these complexities. We have extracted computational kernels that encompass functions likely to take the most time at higher resolutions, including all that have horizontal dependencies. Kernels implement equations for computing anti-diffusive flux-corrected transport across cell edges, calculating forcing terms and time-step differencing, and re-computing time-dependent vertical coordinates. We are extending these kernels to explore performance of GPU-specific optimizations. We will present initial performance results from the computational kernels of the FIM model, as well as the challenges related to porting code with indirect memory references to the NVIDIA GPUs. Results of this
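The indirect-addressing pattern described can be sketched as follows; the padded neighbor table and the kernel are our guess at the general idea, not FIM's actual data structures:

```python
import numpy as np

n_cells = 8
rng = np.random.default_rng(0)
h = rng.random(n_cells)                       # a cell-centered field
nbr = rng.integers(0, n_cells, (n_cells, 6))  # neighbor index table (toy values)
nbr[0, 5] = 0                                 # a "pentagon": pad the 6th slot with self

def laplacian_like(h, nbr, k=0.1):
    # Gather neighbor values through the index table (indirect addressing)
    # and sum the edge differences; a self-padded slot contributes zero.
    return k * (h[nbr].sum(axis=1) - 6.0 * h)

print(laplacian_like(h, nbr))
```

It is exactly this gather through `nbr` that makes the code compact but challenges GPU memory systems, since neighboring cells are not contiguous in memory.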
A computer simulation model to compute the radiation transfer of mountainous regions
NASA Astrophysics Data System (ADS)
Li, Yuguang; Zhao, Feng; Song, Rui
2011-11-01
In mountainous regions, the radiometric signal recorded at the sensor depends on a number of factors such as sun angle, atmospheric conditions, surface cover type, and topography. In this paper, a computer simulation model of radiation transfer is designed and evaluated. This model implements Monte Carlo ray-tracing techniques and is specifically dedicated to the study of light propagation in mountainous regions. The radiative processes between sunlight and the objects within the mountainous region are realized by using forward Monte Carlo ray-tracing methods. The performance of the model is evaluated through detailed comparisons with the well-established 3D computer simulation model RGM (Radiosity-Graphics combined Model), based on the same scenes and identical spectral parameters, which show good agreement between the two models' results. By using the newly developed computer model, a series of typical mountainous scenes is generated to analyze the physical mechanism of mountainous radiation transfer. The results show that the effects of the adjacent slopes are important for deep valleys and particularly affect shadowed pixels, and that the topographic effect needs to be considered in mountainous terrain before accurate inferences can be made from remotely sensed data.
Evaluation of a Computational Model of Situational Awareness
NASA Technical Reports Server (NTRS)
Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)
2000-01-01
Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.
Virtual Cell: computational tools for modeling in cell biology
Resasco, Diana C.; Gao, Fei; Morgan, Frank; Novak, Igor L.; Schaff, James C.; Slepchenko, Boris M.
2011-01-01
The Virtual Cell (VCell) is a general computational framework for modeling physico-chemical and electrophysiological processes in living cells. Developed by the National Resource for Cell Analysis and Modeling at the University of Connecticut Health Center, it provides automated tools for simulating a wide range of cellular phenomena in space and time, both deterministically and stochastically. These computational tools allow one to couple electrophysiology and reaction kinetics with transport mechanisms, such as diffusion and directed transport, and map them onto spatial domains of various shapes, including irregular three-dimensional geometries derived from experimental images. In this article, we review new robust computational tools recently deployed in VCell for treating spatially resolved models. PMID:22139996
Computational Motion Phantoms and Statistical Models of Respiratory Motion
NASA Astrophysics Data System (ADS)
Ehrhardt, Jan; Klinder, Tobias; Lorenz, Cristian
Breathing motion is not a robust and 100 % reproducible process, and inter- and intra-fractional motion variations form an important problem in radiotherapy of the thorax and upper abdomen. A widespread consensus nowadays exists that it would be useful to use prior knowledge about respiratory organ motion and its variability to improve radiotherapy planning and treatment delivery. This chapter discusses two different approaches to model the variability of respiratory motion. In the first part, we review computational motion phantoms, i.e. computerized anatomical and physiological models. Computational phantoms are excellent tools to simulate and investigate the effects of organ motion in radiation therapy and to gain insight into methods for motion management. The second part of this chapter discusses statistical modeling techniques to describe the breathing motion and its variability in a population of 4D images. Population-based models can be generated from repeatedly acquired 4D images of the same patient (intra-patient models) and from 4D images of different patients (inter-patient models). The generation of those models is explained and possible applications of those models for motion prediction in radiotherapy are exemplified. Computational models of respiratory motion and motion variability have numerous applications in radiation therapy, e.g. to understand motion effects in simulation studies, to develop and evaluate treatment strategies or to introduce prior knowledge into the patient-specific treatment planning.
A computational model of the human visual cortex
NASA Astrophysics Data System (ADS)
Albus, James S.
2008-04-01
The brain is first and foremost a control system that is capable of building an internal representation of the external world, and using this representation to make decisions, set goals and priorities, formulate plans, and control behavior with intent to achieve its goals. The computational model proposed here assumes that this internal representation resides in arrays of cortical columns. More specifically, it models each cortical hypercolumn together with its underlying thalamic nuclei as a Fundamental Computational Unit (FCU) consisting of a frame-like data structure (containing attributes and pointers) plus the computational processes and mechanisms required to maintain it. In sensory-processing areas of the brain, FCUs enable segmentation, grouping, and classification. Pointers stored in FCU frames link pixels and signals to objects and events in situations and episodes that are overlaid with meaning and emotional values. In behavior-generating areas of the brain, FCUs make decisions, set goals and priorities, generate plans, and control behavior. Pointers are used to define rules, grammars, procedures, plans, and behaviors. It is suggested that it may be possible to reverse engineer the human brain at the FCU level of fidelity using next-generation massively parallel computer hardware and software. Key Words: computational modeling, human cortex, brain modeling, reverse engineering the brain, image processing, perception, segmentation, knowledge representation
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from systems based on replicated redundant hardware, and the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described and methods of acquiring and measuring parameters for these models are delineated.
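A minimal example of Markov reliability modeling for a replicated system: a continuous-time chain for a triplex computer with imperfect fault coverage, solved with a matrix exponential. The rates and coverage value are illustrative only:

```python
import numpy as np
from scipy.linalg import expm

lam, c = 1e-4, 0.99          # per-hour unit failure rate, fault-coverage probability
# States: 0 = three units good, 1 = two good (reconfigured), 2 = system failed.
Q = np.array([
    [-3 * lam,  3 * lam * c,  3 * lam * (1 - c)],   # covered vs. uncovered failure
    [0.0,      -2 * lam,      2 * lam          ],
    [0.0,       0.0,          0.0              ],   # absorbing failure state
])
p0 = np.array([1.0, 0.0, 0.0])
for t in (10.0, 100.0, 1000.0):                     # mission times in hours
    p = p0 @ expm(Q * t)
    print(f"t={t:6.0f} h  unreliability={p[2]:.3e}")
```

The uncovered-failure transition (rate 3*lam*(1-c)) is exactly the kind of factor that "can reduce reliability without concomitant depletion of hardware".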
Computational ocean acoustics: Advances in 3D ocean acoustic modeling
NASA Astrophysics Data System (ADS)
Schmidt, Henrik; Jensen, Finn B.
2012-11-01
The numerical models of ocean acoustic propagation developed in the 1980s are still in widespread use today, and the field of computational ocean acoustics is often considered a mature field. However, the explosive increase in computational power available to the community has created opportunities for modeling phenomena that were earlier beyond reach. Most notably, three-dimensional propagation and scattering problems have been computationally prohibitive, but are now addressed routinely using brute-force numerical approaches such as the Finite Element Method, in particular for target scattering problems, where they are being combined with the traditional wave-theory propagation models in hybrid modeling frameworks. Also, recent years have seen the development of hybrid approaches coupling oceanographic circulation models with acoustic propagation models, enabling the forecasting of sonar performance uncertainty in dynamic ocean environments. These and other advances made over the last couple of decades support the notion that the field of computational ocean acoustics is far from mature. [Work supported by the Office of Naval Research, Code 321OA].
Optimal control of a delayed SLBS computer virus model
NASA Astrophysics Data System (ADS)
Chen, Lijuan; Hattaf, Khalid; Sun, Jitao
2015-06-01
In this paper, a delayed SLBS computer virus model is proposed. To the best of our knowledge, this is the first time the optimal control of the SLBS model has been discussed. Using optimal control theory, we present an optimal strategy to minimize the total number of breaking-out computers and the cost associated with toxication or detoxication. We show that an optimal control solution exists for the control problem. Some examples are presented to show the efficiency of this optimal control.
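A generic delayed SLBS-type system (susceptible / latent / breaking-out) can be integrated with a history buffer for the delay term, as sketched below; the rate terms and constants are illustrative stand-ins, not the paper's equations:

```python
import numpy as np

beta, alpha, gamma, tau, dt = 0.5, 0.2, 0.1, 5.0, 0.01
steps, lag = 20000, int(tau / dt)
S, L, B = np.empty(steps), np.empty(steps), np.empty(steps)
S[:lag + 1], L[:lag + 1], B[:lag + 1] = 0.9, 0.05, 0.05   # constant initial history

for k in range(lag, steps - 1):
    Bd = B[k - lag]                           # delayed breaking-out term B(t - tau)
    dS = -beta * S[k] * Bd + gamma * B[k]     # infection and cure (illustrative)
    dL = beta * S[k] * Bd - alpha * L[k]      # latency
    dB = alpha * L[k] - gamma * B[k]          # breaking out
    S[k + 1] = S[k] + dt * dS                 # forward-Euler step
    L[k + 1] = L[k] + dt * dL
    B[k + 1] = B[k] + dt * dB

print(f"final fractions: S={S[-1]:.3f} L={L[-1]:.3f} B={B[-1]:.3f}")
```

An optimal-control treatment would add a time-varying control u(t) to these rates and minimize an integral cost; the sketch only shows the delayed state dynamics.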
Mathematical modelling in the computer-aided process planning
NASA Astrophysics Data System (ADS)
Mitin, S.; Bochkarev, P.
2016-04-01
This paper presents new approaches to the organization of manufacturing preparation and mathematical models related to the development of the computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some peculiarities compared to existing computer-aided process planning (CAPP) systems: fully formalized development of the machining operations; the capacity to create and formalize the interrelationships among design, process planning, and process implementation; and procedures for consideration of real manufacturing conditions. The paper describes the structure of the CAMPP system and shows the mathematical models and methods used to formalize the design procedures.
Computational nanomedicine: modeling of nanoparticle-mediated hyperthermal cancer therapy
Kaddi, Chanchala D; Phan, John H; Wang, May D
2016-01-01
Nanoparticle-mediated hyperthermia for cancer therapy is a growing area of cancer nanomedicine because of the potential for localized and targeted destruction of cancer cells. Localized hyperthermal effects are dependent on many factors, including nanoparticle size and shape, excitation wavelength and power, and tissue properties. Computational modeling is an important tool for investigating and optimizing these parameters. In this review, we focus on computational modeling of magnetic and gold nanoparticle-mediated hyperthermia, followed by a discussion of new opportunities and challenges. PMID:23914967
A mathematical model of a computational problem solving system
NASA Astrophysics Data System (ADS)
Aris, Teh Noranis Mohd; Nazeer, Shahrin Azuan
2015-05-01
This paper presents a mathematical model based on fuzzy logic for a computational problem solving system. The fuzzy logic uses truth degrees as a mathematical model to represent vague algorithms. The fuzzy logic mathematical model consists of fuzzy solution and fuzzy optimization modules. The fuzzy solution mathematical model is integrated in the fuzzy inference engine that predicts various solutions to computational problems. The solution is extracted from a fuzzy rule base. Then, the solutions are evaluated based on a software metrics calculation that produces the level of fuzzy set membership. The fuzzy optimization mathematical model is integrated in the recommendation generation engine that generates the optimized solution.
Computational approaches to parameter estimation and model selection in immunology
NASA Astrophysics Data System (ADS)
Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.
2005-12-01
One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
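The Akaike-index bookkeeping that arranges the models in a hierarchy is simple to illustrate; the model names, parameter counts, and log-likelihoods below are invented numbers, shown only to make the AIC = 2k - 2 ln L ranking concrete:

```python
fits = {                        # model name: (n_parameters, max log-likelihood)
    "ODE, 4 params": (4, -120.3),
    "DDE, 5 params": (5, -112.1),
    "DDE, 7 params": (7, -111.4),
}
aic = {m: 2 * k - 2 * ll for m, (k, ll) in fits.items()}  # AIC = 2k - 2 ln L
best = min(aic.values())
for m in sorted(aic, key=aic.get):                        # rank the hierarchy
    print(f"{m:15s} AIC={aic[m]:7.1f}  dAIC={aic[m] - best:5.1f}")
```

Here the 5-parameter delay model wins: the extra two parameters of the larger model do not buy enough likelihood to offset the 2k penalty.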
Modeling and Computer Simulation: Molecular Dynamics and Kinetic Monte Carlo
Wirth, B.D.; Caturla, M.J.; Diaz de la Rubia, T.
2000-10-10
Recent years have witnessed tremendous advances in the realistic multiscale simulation of complex physical phenomena, such as irradiation and aging effects of materials, made possible by the enormous progress achieved in computational physics for calculating reliable, yet tractable interatomic potentials and the vast improvements in computational power and parallel computing. As a result, computational materials science is emerging as an important complement to theory and experiment to provide fundamental materials science insight. This article describes the atomistic modeling techniques of molecular dynamics (MD) and kinetic Monte Carlo (KMC), and an example of their application to radiation damage production and accumulation in metals. It is important to note at the outset that the primary objective of atomistic computer simulation should be obtaining physical insight into atomic-level processes. Classical molecular dynamics is a powerful method for obtaining insight about the dynamics of physical processes that occur on relatively short time scales. Current computational capability allows treatment of atomic systems containing as many as 10^9 atoms for times on the order of 100 ns (10^-7 s). The main limitation of classical MD simulation is the relatively short times accessible. Kinetic Monte Carlo provides the ability to reach macroscopic times by modeling diffusional processes and time-scales rather than individual atomic vibrations. Coupling MD and KMC has developed into a powerful, multiscale tool for the simulation of radiation damage in metals.
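A minimal residence-time KMC sketch for a single hopping defect shows how the method reaches long times by sampling events and rates instead of resolving atomic vibrations; the barrier values are illustrative:

```python
import math
import random

random.seed(1)
kT = 0.025                                    # eV, roughly room temperature
rate = lambda Em: 1e13 * math.exp(-Em / kT)   # attempt frequency * Boltzmann factor
events = [(+1, rate(0.60)), (-1, rate(0.65))] # (displacement, rate) for each hop

x, t = 0, 0.0
R = sum(r for _, r in events)                 # total escape rate
for _ in range(100000):
    u = random.random() * R                   # pick an event proportional to its rate
    acc = 0.0
    for dx, r in events:
        acc += r
        if u <= acc:
            x += dx
            break
    t += -math.log(random.random()) / R       # advance the clock by the residence time
print(f"net displacement {x} sites in {t:.3e} s")
```

Each iteration advances physical time by ~1/R, so macroscopic durations accumulate in a modest number of steps, which is precisely the advantage over MD described above.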
A Cognitive Computational Model Inspired by the Immune System Response
Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim
2014-01-01
The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that is oscillating between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have been always standing out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of main theoretical immunological perspectives (classic, cognitive, and danger theory), as related to computer science terminologies. The paper presents a descriptive model of immune system, to figure out the nature of response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by the natural biology. To that end, this paper highlights the ISR phases as applied to a case study on hepatitis C virus, meanwhile illustrating our proposed architecture perspective. PMID:25003131
Simulation models for computational plasma physics: Concluding report
Hewett, D.W.
1994-03-05
In this project, the authors enhanced their ability to numerically simulate bounded plasmas that are dominated by low-frequency electric and magnetic fields. They moved towards this goal in several ways; they are now in a position to play significant roles in the modeling of low-frequency electromagnetic plasmas in several new industrial applications. They have significantly increased their facility with the computational methods invented to solve the low-frequency limit of Maxwell's equations (DiPeso, Hewett, accepted, J. Comp. Phys., 1993). This low-frequency model, called the Streamlined Darwin Field (SDF) model (Hewett, Larson, and Doss, J. Comp. Phys., 1992), has now been implemented in a fully non-neutral SDF code, BEAGLE (Larson, Ph.D. dissertation, 1993), and has been further extended to the quasi-neutral limit (DiPeso, Hewett, Comp. Phys. Comm., 1993). In addition, they have resurrected the quasi-neutral, zero-electron-inertia model (ZMR) and begun the task of incorporating internal boundary conditions into this model with the flexibility of those in GYMNOS, a magnetostatic code now used in ion source work (Hewett, Chen, ICF Quarterly Report, July--September, 1993). Finally, near the end of this project, they invented a new type of banded matrix solver that can be implemented on a massively parallel computer -- thus opening the door for the use of all their ADI schemes on these new computer architectures (Mattor, Williams, Hewett, submitted to Parallel Computing, 1993).
Parallel Computation of Airflow in the Human Lung Model
NASA Astrophysics Data System (ADS)
Lee, Taehun; Tawhai, Merryn; Hoffman, Eric. A.
2005-11-01
Parallel computations of airflow in the human lung based on domain decomposition are performed. The realistic lung model is segmented and reconstructed from CT images as part of an effort to build a normative atlas (NIH HL-04368) documenting airway geometry over 4 decades of age in healthy and disease-state adult humans. Because of the large number of airway generations and the sheer complexity of the geometry, massively parallel computation of pulmonary airflow is carried out. We present the parallel algorithm implemented in the custom-developed characteristic-Galerkin finite element method, evaluate the speed-up and scalability of the scheme, and estimate the computing resources needed to simulate the airflow in the conducting airways of the human lungs. It is found that the special tree-like geometry enables the inter-processor communications to occur among only three or four processors for optimal parallelization, irrespective of the number of processors involved in the computation.
Eslami, Parastou; Seo, Jung-Hee; Rahsepar, Amir Ali; George, Richard; Lardo, Albert C; Mittal, Rajat
2015-09-01
Recent computed tomography coronary angiography (CCTA) studies have noted higher transluminal contrast agent gradients in arteries with stenotic lesions, but the physical mechanism responsible for these gradients is not clear. We use computational fluid dynamics (CFD) modeling coupled with contrast agent dispersion to investigate the mechanism for these gradients. Simulations of blood flow and contrast agent dispersion in models of the coronary artery are carried out for both steady and pulsatile flows, and axisymmetric stenoses of severities varying from 0% (unobstructed) to 80% are considered. Simulations show the presence of measurable gradients with magnitudes that increase monotonically with stenotic severity when other parameters are held fixed. The computational results enable us to examine and validate the hypothesis that transluminal contrast gradients (TCG) are generated by the advection of the contrast bolus with time-varying contrast concentration appearing at the coronary ostium. Since the advection of the bolus is determined by the flow velocity in the artery, the magnitude of the gradient therefore encodes the coronary flow velocity. The correlation between the flow rate estimated from TCG and the actual flow rate in the computational model of a physiologically realistic coronary artery is 96% with an R2 value of 0.98. The mathematical formulae connecting TCG to flow velocity derived here represent a novel and potentially powerful approach for noninvasive estimation of coronary flow velocity from CT angiography. PMID:26102356
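The advection hypothesis implies dc/dt + v dc/dx = 0, so v = -(dc/dt)/(dc/dx) and a velocity estimate can be read off from two snapshots of the contrast profile. The synthetic bolus below is our own illustration of that relation, not the paper's data or formulae:

```python
import numpy as np

v_true = 15.0                                      # cm/s, assumed mean coronary velocity
x = np.linspace(0.0, 10.0, 201)                    # cm along the vessel
bolus = lambda s: np.exp(-((s + 2.0) / 3.0) ** 2)  # inlet contrast-bolus shape
t0, dt = 1.0, 0.05
c0 = bolus(x - v_true * t0)                        # the advected bolus at two instants
c1 = bolus(x - v_true * (t0 + dt))

dcdt = (c1 - c0) / dt
dcdx = np.gradient(c0, x)                          # the transluminal contrast gradient
mask = np.abs(dcdx) > 1e-3                         # avoid dividing by near-zero gradients
v_est = np.median(-dcdt[mask] / dcdx[mask])
print(f"estimated velocity {v_est:.2f} cm/s (true {v_true})")
```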
Increasing computational efficiency of cochlear models using boundary layers
NASA Astrophysics Data System (ADS)
Alkhairy, Samiya A.; Shera, Christopher A.
2015-12-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of a foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; and characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case-study physical model that is a linear, passive, transmission-line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than in other models. This is conducted in the frequency domain for multiple frequencies, using a second-order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase the efficiency of a computational traveling-wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Transitioning a unidirectional composite computer model from mesoscale to continuum
NASA Astrophysics Data System (ADS)
Chocron, Sidney; Zaera, Ramón; Walker, James; Brill, Alon; Kositski, Roman; Havazelet, Doron; Heisserer, Ulrich; van der Werff, Harm
2015-09-01
Ballistic impact on composites has been a challenging problem as seen in the abundant literature about the subject. Continuum models usually cannot properly predict deflection history on the back of the target while at the same time giving reasonable ballistic limits. According to the authors the main reason is that, while continuum models are very good at reproducing the elastic characteristics of the laminate, the models do not capture the behaviour of the "failed" material. A "failed" composite can still be very effective in stopping a projectile, because it can behave very similar to a dry woven fabric. The failure aspect is much easier to capture realistically with a mesoscale model. These models explicitly contain yarns and matrix allowing the matrix to fail while the yarns stay intact and continue to offer resistance to the projectile. This paper summarizes the work performed by the authors on the computationally expensive mesoscale models and, using them as benchmark computations, describes the first steps towards obtaining more computationally effective models that still keep the right physics of the impact.
Computation with spin foam models of quantum gravity
NASA Astrophysics Data System (ADS)
Khavkine, Igor
The focus of this thesis is the study of spin foam models of quantum gravity on a computer. These models include the standard Barrett-Crane (BC) spin foam model, as well as the new Engle-Pereira-Rovelli (EPR) and Freidel-Krasnov (FK) models. New numerical algorithms are developed and implemented, based on the existing Christensen-Egan (CE) algorithm, to allow computations with the BC model in the presence of a cosmological constant (implemented through q-deformation) and to allow computations with the recently proposed EPR and FK models. For the first time, we show that the inclusion of a positive cosmological constant, a long-standing open problem for spin foams, curiously changes the behavior of the BC model, rendering the expectation values of its observables discontinuous in the limit of zero cosmological constant. Also, unlike previous work, this investigation was carried out on large triangulations, which are closer to large semiclassical space-times. Efficient numerical algorithms are described and implemented, for the first time, allowing the evaluation of the EPR and FK spin foam vertex amplitudes. An initial application of these algorithms is the study of the effective single-vertex large-spin asymptotics of the new models. Their asymptotic behavior is found to be qualitatively similar to that of the BC model. The leading asymptotic behavior does not exhibit the oscillatory character expected by analogy with the Ponzano-Regge model. Two important tests of the spin foam semiclassical limit are wave packet propagation and evaluation of the graviton propagator matrix elements. These tests are generalized to encompass the three major spin foam models. The wave packet propagation test is carried out in greater generality than previously. The results indicate that conjectures about good semiclassical behavior of the new spin foam models may have been premature.
Computational Flow Modeling of Human Upper Airway Breathing
NASA Astrophysics Data System (ADS)
Mylavarapu, Goutham
Computational modeling of biological systems has gained a lot of interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Images (MRI) or Computed Tomography (CT) scans, with better and more accurate details than traditional cadaver cast models. Computational studies using these individualized geometrical models have the advantages of non-invasiveness, ease, minimal patient interaction, and improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields including velocities, flow rates, airway wall pressure, shear stresses, and turbulence in an airway. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcomes of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to the patients. In the current work, three different computational approaches, Computational Fluid Dynamics (CFD), Fluid-Structure Interaction (FSI), and particle flow simulations, were used to investigate flow in airway geometries. The CFD approach assumes the airway wall is rigid and is relatively easy to simulate, compared to the more challenging FSI approach, where interactions of airway wall deformations with the flow are also accounted for. The CFD methodology using different turbulence models is validated against experimental measurements in an airway phantom. Two case studies using CFD are demonstrated: one to quantify a pre- and post-operative airway, and another to perform virtual surgery to determine the best possible surgery in a constricted airway. The unsteady
Developing a computational model of human hand kinetics using AVS
Abramowitz, Mark S.
1996-05-01
As part of an ongoing effort to develop a finite element model of the human hand at the Institute for Scientific Computing Research (ISCR), this project extended existing computational tools for analyzing and visualizing hand kinetics. These tools employ a commercial scientific visualization package called AVS. FORTRAN and C code, originally written by David Giurintano of the Gillis W. Long Hansen's Disease Center, was ported to a different computing platform, debugged, and documented. Usability features were added and the code was made more modular and readable. When the code is used to visualize bone movement and tendon paths for the thumb, graphical output is consistent with expected results. However, numerical values for forces and moments at the thumb joints do not yet appear to be accurate enough to be included in ISCR's finite element model. Future work includes debugging the parts of the code that calculate forces and moments and verifying the correctness of these values.
Computation of large molecules with the hartree-fock model.
Clementi, E
1972-10-01
The usual way to compute Hartree-Fock type functions for molecules is by an expansion of the one-electron functions (molecular orbitals) in a linear combination of analytical functions (LCAO-MO-SCF, linear combination of atomic orbitals-Molecular Orbital-Self Consistent field). The expansion coefficients are obtained variationally. This technique requires the computation of several multicenter two-electron integrals (representing the electron-electron interaction) proportional to the fourth power of the basis set size. There are several types of basis sets; the Gaussian type introduced by S. F. Boys is used herein. Since it requires from a minimum of 10 (or 15) Gaussian-type functions to about 25 (or 30) Gaussian functions to describe a second-row atom in a molecule, the fourth power dependency of the basis set has been the de facto bottleneck of quantum chemical computations in the last decade. In this paper, the concept is introduced of a "dynamical" basis set, which allows for drastic computational simplifications while retaining full numerical accuracy. Examples are given that show that computational saving in computer time of more than a factor of one hundred is achieved and that large basis sets (up to the order of several hundred Gaussian functions per molecule) can be used routinely. It is noted that the limitation in the Hartree-Fock energy (correlation energy error) can be easily computed by use of a statistical model introduced by Wigner for solid-state systems in 1934. Thus, large molecules can now be simulated by computational techniques without reverting to semi-empirical parameterization and without requiring enormous computational time and storage. PMID:16592020
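The fourth-power dependency is easy to make concrete: with the standard 8-fold permutational symmetry, N basis functions give M(M+1)/2 unique two-electron integrals (ij|kl), where M = N(N+1)/2, i.e. roughly N^4/8. A back-of-envelope tally:

```python
def n_two_electron_integrals(N):
    """Unique (ij|kl) integrals for N basis functions, 8-fold symmetry."""
    M = N * (N + 1) // 2          # unique (ij) pairs
    return M * (M + 1) // 2       # unique pairs of pairs

for N in (10, 25, 100, 400):
    print(f"N={N:4d}  unique (ij|kl) integrals ~ {n_two_electron_integrals(N):.3e}")
```

Even the "minimum of 10" functions quoted above already produce over 1,500 integrals per second-row atom pairing, which is why the basis-set size dominated the cost.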
Computer-integrated finite element modeling of human middle ear.
Sun, Q; Gan, R Z; Chang, K-H; Dormer, K J
2002-10-01
The objective of this study was to produce an improved finite element (FE) model of the human middle ear and to compare the model with human data. We began with a systematic and accurate geometric modeling technique for reconstructing the middle ear from serial sections of a freshly frozen temporal bone. A geometric model of a human middle ear was constructed in a computer-aided design (CAD) environment with particular attention to geometry and microanatomy. Using the geometric model, a working FE model of the human middle ear was created using previously published material properties of middle ear components. This working FE model was finalized by a cross-calibration technique, comparing its predicted stapes footplate displacements with laser Doppler interferometry measurements from fresh temporal bones. The final FE model was shown to be reasonable in predicting the ossicular mechanics of the human middle ear. PMID:14595544
Computer modeling of lung cancer diagnosis-to-treatment process
Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick
2015-01-01
We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the necessary data and procedures to develop a DES model of the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. The Markov chain model and its application in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the performance of the diagnosis process is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181
Systems Biology in Immunology – A Computational Modeling Perspective
Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.
2011-01-01
Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data-gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators, and on why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182
Computer models of complex multiloop branched pipeline systems
NASA Astrophysics Data System (ADS)
Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.
2013-11-01
This paper describes the principal theoretical concepts of a method for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and on Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the latter is treated as a single hydraulic system. On the basis of multivariant calculations, the causes of existing problems can be identified, the least costly methods for their elimination can be proposed, and recommendations can be made for planning the modernization of pipeline systems and the construction of new sections. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified by designing a unified computer model of the heat network for the centralized heat supply of the city of Samara.
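For laminar flow, pressure drop is linear in flow rate, so a pipe network can be solved exactly like a resistor circuit with Kirchhoff's laws. Below is a minimal sketch of this idea on an invented four-node network (not the Samara heat-network model); resistances and pressures are placeholders.

```python
import numpy as np

# Illustrative sketch: laminar pipe network as a "resistor circuit".
# dp = R * q on each pipe; mass balance (Kirchhoff's current law) at nodes.
edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 4.0), (2, 3, 2.0)]  # (i, j, R)
p_fixed = {0: 5.0e5, 3: 1.0e5}   # fixed pressures at source and outlet [Pa]

n = 4
A = np.zeros((n, n)); b = np.zeros(n)
for i, j, R in edges:            # assemble nodal conductance matrix
    g = 1.0 / R
    A[i, i] += g; A[j, j] += g
    A[i, j] -= g; A[j, i] -= g
for node, p in p_fixed.items():  # replace balance rows with fixed pressures
    A[node, :] = 0.0; A[node, node] = 1.0; b[node] = p

p = np.linalg.solve(A, b)        # nodal pressures
for i, j, R in edges:
    print(f"pipe {i}->{j}: q = {(p[i] - p[j]) / R:.1f}")
```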
Cartographic Modeling: Computer-assisted Analysis of Spatially Defined Neighborhoods
NASA Technical Reports Server (NTRS)
Berry, J. K.; Tomlin, C. D.
1982-01-01
Cartographic models addressing a wide variety of applications are composed of fundamental map processing operations. These primitive operations are neither data base nor application-specific. By organizing the set of operations into a mathematical-like structure, the basis for a generalized cartographic modeling framework can be developed. Among the major classes of primitive operations are those associated with reclassifying map categories, overlaying maps, determining distance and connectivity, and characterizing cartographic neighborhoods. The conceptual framework of cartographic modeling is established and techniques for characterizing neighborhoods are used as a means of demonstrating some of the more sophisticated procedures of computer-assisted map analysis. A cartographic model for assessing effective roundwood supply is briefly described as an example of a computer analysis. Most of the techniques described have been implemented as part of the map analysis package developed at the Yale School of Forestry and Environmental Studies.
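As a rough illustration of two of the primitive operations named above, reclassifying map categories and overlaying maps, here is a sketch on invented raster grids; it is a generic demonstration, not the Yale map analysis package.

```python
import numpy as np

# Illustrative sketch of two primitive map-algebra operations: reclassify
# and overlay. The rasters and category codes are invented for demonstration.
landcover = np.array([[1, 1, 2],
                      [2, 3, 3],
                      [3, 3, 1]])          # 1=forest, 2=water, 3=urban

# Reclassify: map each category to a suitability score.
reclass_table = {1: 10, 2: 0, 3: 2}
suitability = np.vectorize(reclass_table.get)(landcover)

# Overlay: combine with a second map (e.g., a slope class) cell by cell.
slope_penalty = np.array([[0, 1, 0],
                          [2, 0, 1],
                          [0, 0, 2]])
combined = suitability - slope_penalty
print(combined)
```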
Updated Computational Model of Cosmic Rays Near Earth
NASA Technical Reports Server (NTRS)
ONeill, Patrick M.
2006-01-01
An updated computational model of the galactic-cosmic-ray (GCR) environment in the vicinity of the Earth, Earth's Moon, and Mars has been developed, along with updated software to implement it. The model accounts for solar modulation of the cosmic-ray contribution of each element from hydrogen through iron by computationally propagating the local interplanetary spectrum of each element through the heliosphere. The propagation is effected by solving the Fokker-Planck diffusion, convection, energy-loss boundary-value problem. The Advanced Composition Explorer NASA satellite has provided new data on GCR energy spectra. These new data were used to update the original model and greatly improve the accuracy of prediction of interplanetary GCR.
A Computer Model for Analyzing Volatile Removal Assembly
NASA Technical Reports Server (NTRS)
Guo, Boyun
2010-01-01
A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which is dissolved in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and on the composition and flow rate of the influent.
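A minimal sketch of a gas-to-liquid mass-transfer calculation of the kind described, using a generic two-film-type expression; the coefficients and concentrations are assumed values, not the VRA model's actual correlations.

```python
# Minimal sketch of gas-to-liquid mass transfer with a two-film-type model.
# Flux per unit volume: N = kL * a * (C_star - C_liq), where C_star is the
# liquid concentration in equilibrium with the gas (Henry's law).
# All parameter values below are assumptions for illustration.

H = 769.0        # Henry constant for O2 in water [L*atm/mol] (approx., 25 C)
p_O2 = 1.0       # O2 partial pressure in the gas phase [atm]
kL = 4.0e-4      # liquid-film mass transfer coefficient [m/s] (assumed)
a = 200.0        # specific interfacial area [m^2/m^3] (assumed)
C_liq = 0.0002   # bulk dissolved O2 concentration [mol/L] (assumed)

C_star = p_O2 / H                       # equilibrium solubility [mol/L]
N = kL * a * (C_star - C_liq) * 1000.0  # [mol/(m^3*s)]; 1000 L per m^3
print(f"O2 transfer rate: {N:.3e} mol/(m^3*s)")
```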
A Computational Model of Linguistic Humor in Puns.
Kao, Justine T; Levy, Roger; Goodman, Noah D
2016-07-01
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures, ambiguity and distinctiveness, derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within the set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher-level linguistic and cognitive phenomena. PMID:26235596
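The ambiguity measure can be thought of as an entropy over a sentence's possible meanings. A toy sketch under that reading, with invented meaning probabilities (the paper derives its probabilities from a sentence-processing model, not given here):

```python
import math

# Illustrative sketch: an entropy-style ambiguity score over possible
# meanings of a sentence. The probabilities below are invented.
def ambiguity(meaning_probs):
    """Shannon entropy (bits) of the distribution over meanings."""
    return -sum(p * math.log2(p) for p in meaning_probs if p > 0)

pun = [0.55, 0.45]        # two meanings stay plausible -> high ambiguity
regular = [0.97, 0.03]    # one meaning dominates -> low ambiguity
print(f"pun: {ambiguity(pun):.3f} bits, regular: {ambiguity(regular):.3f} bits")
```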
Understanding Parkinsonian handwriting through a computational model of basal ganglia.
Gangadhar, Garipelli; Joseph, Denny; Chakravarthy, V Srinivasa
2008-10-01
Handwriting in Parkinson's disease (PD) is typically characterized by micrographia, jagged line contour, and unusual fluctuations in pen tip velocity. Although PD handwriting features have been used for diagnostics, they are not based on a signaling model of basal ganglia (BG). In this letter, we present a computational model of handwriting generation that highlights the role of BG. When PD conditions like reduced dopamine and altered dynamics of the subthalamic nucleus and globus pallidus externa subsystems are simulated, the handwriting produced by the model manifested characteristic PD handwriting distortions like micrographia and velocity fluctuations. Our approach to PD modeling is in tune with the perspective that PD is a dynamic disease. PMID:18386983
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1981-10-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³⁸U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
Computational modeling of multispectral remote sensing systems: Background investigations
NASA Technical Reports Server (NTRS)
Aherron, R. M.
1982-01-01
A computational model of the deterministic and stochastic process of remote sensing has been developed based upon the results of the investigations presented. The model is used in studying concepts for improving worldwide environment and resource monitoring. A review of various atmospheric radiative transfer models is presented as well as details of the selected model. Functional forms for spectral diffuse reflectance with variability introduced are also presented. A cloud detection algorithm and the stochastic nature of remote sensing data with its implications are considered.
Parameter estimation and error analysis in environmental modeling and computation
NASA Technical Reports Server (NTRS)
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear models for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-squares parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for associating error with experimentally observed data.
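A minimal sketch of nonlinear least-squares parameter estimation of the kind described, using SciPy and a synthetic first-order decay model; the model form and data are assumptions for illustration, not the program in the report.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch of nonlinear least-squares parameter estimation;
# the model and data below are synthetic.
def model(t, c0, k):
    return c0 * np.exp(-k * t)   # e.g., first-order decay of a pollutant

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
y = model(t, 5.0, 0.4) + rng.normal(0, 0.1, t.size)  # noisy observations

params, cov = curve_fit(model, t, y, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(cov))     # 1-sigma parameter uncertainties
print("estimates:", params, "std errors:", perr)
```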
Retrospective Study on Mathematical Modeling Based on Computer Graphic Processing
NASA Astrophysics Data System (ADS)
Zhang, Kai Li
Graphics and image making is an important field of computer application in which visualization software has been widely used because it is convenient and quick. However, modeling designers have found such software limited in function and flexibility because it provides no mathematical modeling platform. Non-visualization graphics software has since appeared that gives graphics and image design a good mathematical modeling platform. In this paper, a polished pyramid is constructed with a multivariate spline function algorithm, validating that the non-visualization software is well suited to mathematical modeling.
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1983-06-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
Computer modeling for advanced life support system analysis.
Drysdale, A
1997-01-01
This article discusses the equivalent mass approach to advanced life support system analysis, describes a computer model developed to use this approach, and presents early results from modeling the NASA JSC BioPlex. The model is built using an object-oriented approach and G2, a commercially available modeling package. Cost factor equivalencies are given for the Volosin scenarios. Plant data from NASA KSC and Utah State University (USU) are used, together with configuration data from the BioPlex design effort. Initial results focus on the importance of obtaining high plant productivity with a flight-like configuration. PMID:11540448
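The equivalent mass idea folds non-mass costs (volume, power, cooling) into a single mass-like figure using cost-factor equivalencies. A minimal sketch under assumed factors follows; the equivalency values and subsystem numbers are invented placeholders, not the BioPlex or Volosin figures.

```python
# Minimal sketch of an equivalent-mass calculation in the spirit of the
# abstract's approach. All factors and inputs are invented placeholders.
def equivalent_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                    v_eq=66.7, p_eq=237.0, c_eq=60.0):
    """Fold volume, power, and cooling into a single mass-like figure [kg].

    v_eq [kg/m^3], p_eq [kg/kW], c_eq [kg/kW] are assumed cost factors.
    """
    return (mass_kg + volume_m3 * v_eq
            + power_kw * p_eq + cooling_kw * c_eq)

crop_chamber = equivalent_mass(mass_kg=1200, volume_m3=20,
                               power_kw=12, cooling_kw=12)
print(f"crop chamber: {crop_chamber:.0f} kg-equivalent")
```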
Subject-Specific Computational Modeling of Evoked Rabbit Phonation.
Chang, Siyuan; Novaleski, Carolyn K; Kojima, Tsuyoshi; Mizuta, Masanobu; Luo, Haoxiang; Rousseau, Bernard
2016-01-01
When developing a high-fidelity computational model of vocal fold vibration for the voice production of individuals, one runs into the typical issues of unknown model parameters and of validating individual-specific characteristics of phonation. In the current study, evoked rabbit phonation is adopted to explore some of these issues. In particular, the mechanical properties of the rabbit vocal fold tissue are unknown for individual subjects. In the model, we couple a three-dimensional (3D) vocal fold model, based on a magnetic resonance (MR) scan of the rabbit larynx, with a simple one-dimensional (1D) model of the glottal airflow to perform fast simulations of the vocal fold dynamics. This hybrid 3D/1D model is then used along with the experimental measurements of each individual subject to determine the vocal fold properties. The vibration frequency and deformation amplitude from the final model match the measurements reasonably well for individual subjects. The modeling and validation approaches adopted here could be useful for future development of subject-specific computational models of vocal fold vibration. PMID:26592748
A Model for Intelligent Computer Assisted Language Instruction (MICALI).
ERIC Educational Resources Information Center
Farghaly, Ali
1989-01-01
States that Computer Assisted Language Instruction (CALI) software should be developed as an interactive natural language processing system. Describes artificial intelligence and proposes a model for intelligent CALI software (MICALI). Discusses MICALI's potential and current limitations due to the present state of the art. (Author/LS)
Mathematical and computer modeling of component surface shaping
NASA Astrophysics Data System (ADS)
Lyashkov, A.
2016-04-01
The process of shaping technical surfaces is an interaction of a tool (a shape element) and a component (a formable element or workpiece) in their relative movements. It was established that the main objects of formation are: 1) the discriminant of the family of surfaces formed by the movement of the shape element relative to the workpiece; 2) an enveloping model of the real component surface obtained after machining, including transition curves and undercut lines; and 3) a model of the layers cut off in the process of shaping. Many issues in modeling these shaping objects remain insufficiently solved or unsolved, and together they make up a single scientific problem: the qualitative shaping of the surface of the tool and then of the component surface produced by this tool. The improvement of known metal-cutting tools and the intensive development of systems for their computer-aided design require further improvement of the methods for shaping the mating surfaces. In this regard, an important role is played by the study of the shaping of technical surfaces using the strengths of analytical and numerical mathematical methods together with mathematical and computer modeling. The author has posed and solved the problem of developing the mathematical, geometric, and algorithmic support for computer-aided design of cutting tools, based on computer simulation of the surface-shaping process.
Implementing and Assessing Computational Modeling in Introductory Mechanics
ERIC Educational Resources Information Center
Caballero, Marcos D.; Kohlmyer, Matthew A.; Schatz, Michael F.
2012-01-01
Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term, 1357 students in this course solved a suite of 14 computational…
Numbers and Space: A Computational Model of the SNARC Effect
ERIC Educational Resources Information Center
Gevers, Wim; Verguts, Tom; Reynvoet, Bert; Caessens, Bernie; Fias, Wim
2006-01-01
The SNARC (spatial numerical associations of response codes) effect reflects the tendency to respond faster with the left hand to relatively small numbers and with the right hand to relatively large numbers (S. Dehaene, S. Bossini, & P. Giraux, 1993). Using computational modeling, the present article aims to provide a framework for conceptualizing…
Learning General Phonological Rules from Distributional Information: A Computational Model
ERIC Educational Resources Information Center
Calamaro, Shira; Jarosz, Gaja
2015-01-01
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony…
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. PMID:22823593
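The batch-processing pattern described, many independent stochastic realizations distributed over workers, can be sketched with Python's multiprocessing as follows; the model function, realization count, and worker count are placeholders, not the Java Parallel Processing Framework setup used in the study.

```python
import multiprocessing as mp
import random

# Illustrative sketch of batch-processing stochastic model realizations in
# parallel. run_realization stands in for a full groundwater-model run.
def run_realization(seed):
    rng = random.Random(seed)
    # placeholder "model": average of random hydraulic conductivities
    return sum(rng.lognormvariate(0, 1) for _ in range(1000)) / 1000

if __name__ == "__main__":
    with mp.Pool(processes=10) as pool:                  # 10 worker processes
        results = pool.map(run_realization, range(500))  # 500 realizations
    print(f"mean over realizations: {sum(results) / len(results):.3f}")
```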
A Computational Model of Early Argument Structure Acquisition
ERIC Educational Resources Information Center
Alishahi, Afra; Stevenson, Suzanne
2008-01-01
How children go about learning the general regularities that govern language, as well as keeping track of the exceptions to them, remains one of the challenging open questions in the cognitive science of language. Computational modeling is an important methodology in research aimed at addressing this issue. We must determine appropriate learning…
Computer Modeling of Carbon Metabolism Enables Biofuel Engineering (Fact Sheet)
Not Available
2011-09-01
In an effort to reduce the cost of biofuels, the National Renewable Energy Laboratory (NREL) has merged biochemistry with modern computing and mathematics. The result is a model of carbon metabolism that will help researchers understand and engineer the process of photosynthesis for optimal biofuel production.
Strategy Generalization across Orientation Tasks: Testing a Computational Cognitive Model
ERIC Educational Resources Information Center
Gunzelmann, Glenn
2008-01-01
Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human…
Design Model for Learner-Centered, Computer-Based Simulations.
ERIC Educational Resources Information Center
Hawley, Chandra L.; Duffy, Thomas M.
This paper presents a model for designing computer-based simulation environments within a constructivist framework for the K-12 school setting. The following primary criteria for the development of simulations are proposed: (1) the problem needs to be authentic; (2) the cognitive demand in learning should be authentic; (3) scaffolding supports a…
Scratch as a Computational Modelling Tool for Teaching Physics
ERIC Educational Resources Information Center
Lopez, Victor; Hernandez, Maria Isabel
2015-01-01
The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling…
Molecular Modeling and Computational Chemistry at Humboldt State University.
ERIC Educational Resources Information Center
Paselk, Richard A.; Zoellner, Robert W.
2002-01-01
Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…
Computational Modelling and Simulation Fostering New Approaches in Learning Probability
ERIC Educational Resources Information Center
Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid
2006-01-01
Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…
A Computer Model of the Cardiovascular System for Effective Learning.
ERIC Educational Resources Information Center
Rothe, Carl F.
1979-01-01
Described is a physiological model which solves a set of interacting, possibly nonlinear, differential equations through numerical integration on a digital computer. Sample printouts are supplied and explained for effects on the components of a cardiovascular system when exercise, hemorrhage, and cardiac failure occur. (CS)
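As a rough illustration of the kind of model described, the following sketch numerically integrates a simple two-element Windkessel model of arterial pressure; the equations and parameter values are generic textbook assumptions, not the model in the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch: a nonlinear ODE model integrated numerically, here a
# two-element Windkessel model of arterial pressure (toy parameter values).
R, C = 1.0, 1.5              # peripheral resistance, arterial compliance
HR = 75 / 60                 # heart rate [beats/s]

def inflow(t):               # pulsatile flow from the heart (toy waveform)
    phase = (t * HR) % 1.0
    return 400.0 * np.sin(np.pi * phase / 0.35) if phase < 0.35 else 0.0

def dPdt(t, P):              # C dP/dt = Q_in - P/R
    return [(inflow(t) - P[0] / R) / C]

sol = solve_ivp(dPdt, (0, 10), [80.0], max_step=0.01)
print(f"pressure range: {sol.y[0][-500:].min():.0f}"
      f"-{sol.y[0][-500:].max():.0f}")
```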
A Model for Intelligent Computer-Aided Education Systems.
ERIC Educational Resources Information Center
Du Plessis, Johan P.; And Others
1995-01-01
Proposes a model for intelligent computer-aided education systems that is based on cooperative learning, constructive problem-solving, object-oriented programming, interactive user interfaces, and expert system techniques. Future research is discussed, and a prototype for teaching mathematics to 10- to 12-year-old students is appended. (LRW)
Computational analysis of semi-span model test techniques
NASA Technical Reports Server (NTRS)
Milholen, William E., II; Chokani, Ndaona
1996-01-01
A computational investigation was conducted to support the development of a semi-span model test capability in the NASA LaRC National Transonic Facility. This capability is required for the testing of high-lift systems at flight Reynolds numbers. A three-dimensional Navier-Stokes solver was used to compute the low-speed flow over both a full-span configuration and a semi-span configuration. The computational results were found to be in good agreement with the experimental data. The computational results indicate that the stand-off height has a strong influence on the flow over a semi-span model. The semi-span model adequately replicates the aerodynamic characteristics of the full-span configuration when a small stand-off height, approximately twice the tunnel-empty sidewall boundary-layer displacement thickness, is used. Several active sidewall boundary-layer control techniques were examined, including upstream blowing, local jet blowing, and sidewall suction. Both upstream tangential blowing and sidewall suction were found to minimize the separation of the sidewall boundary layer ahead of the semi-span model. The required mass flow rates were found to be practicable for testing in the NTF. For the configuration examined, the active sidewall boundary-layer control techniques were found to be necessary only near the maximum lift conditions.
Computational Models of Relational Processes in Cognitive Development
ERIC Educational Resources Information Center
Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven
2012-01-01
Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…
An Empirical Generative Framework for Computational Modeling of Language Acquisition
ERIC Educational Resources Information Center
Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon
2010-01-01
This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…
Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes
Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.
2002-07-01
A method of accounting for fluid-to-fluid shear between calculational cells over the wide range of flow conditions envisioned in reactor safety studies has been developed such that it may easily be implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (computational fluid dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell, based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area of the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile, which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal-hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with this flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
A Computational Model of Linguistic Humor in Puns
ERIC Educational Resources Information Center
Kao, Justine T.; Levy, Roger; Goodman, Noah D.
2016-01-01
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we…
NASA Astrophysics Data System (ADS)
Desceliers, C.; Soize, C.; Zarroug, M.
2013-08-01
The framework of this paper is the robust crash analysis of a motor vehicle. The crash analysis is carried out with an uncertain computational model in which uncertainties are taken into account with the parametric probabilistic approach and for which the stochastic solver is the Monte Carlo method. During the design process, different configurations of the motor vehicle are analyzed. Usual interpolation methods cannot predict whether the current configuration is similar to one of the previously analyzed configurations for which a complete stochastic computation has already been carried out. In this paper, we propose a new indicator that makes it possible to decide, while the Monte Carlo simulation is still running, whether the current configuration is similar to one of the previously analyzed configurations, and therefore to stop the Monte Carlo simulation before the end of the computation.
Mathematical model partitioning and packing for parallel computer calculation
NASA Technical Reports Server (NTRS)
Arpasi, Dale J.; Milner, Edward J.
1986-01-01
This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
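The packing step can be illustrated with a simple bin-packing heuristic such as first-fit decreasing; the sketch below is a generic illustration under that assumption, not the specific packing algorithm of the paper.

```python
# Illustrative sketch: pack computational tasks (e.g., per-equation costs)
# into a minimum number of processors with a first-fit-decreasing heuristic.
# This is a generic bin-packing example, not the paper's algorithm.
def pack(task_costs, capacity):
    """Assign tasks to processors of fixed per-frame compute capacity."""
    processors = []                     # each entry: free capacity + tasks
    for cost in sorted(task_costs, reverse=True):
        for proc in processors:
            if proc["free"] >= cost:    # first processor with room
                proc["free"] -= cost
                proc["tasks"].append(cost)
                break
        else:                           # no room anywhere: open a new one
            processors.append({"free": capacity - cost, "tasks": [cost]})
    return processors

eq_costs = [7, 5, 4, 4, 3, 2, 2, 1]     # per-equation costs (invented)
for k, p in enumerate(pack(eq_costs, capacity=10)):
    print(f"processor {k}: {p['tasks']}")
```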
A computer model of global thermospheric winds and temperatures
NASA Technical Reports Server (NTRS)
Killeen, T. L.; Roble, R. G.; Spencer, N. W.
1987-01-01
Output data from the NCAR Thermospheric GCM and a vector-spherical-harmonic (VSH) representation of the wind field are used in constructing a computer model of time-dependent global horizontal vector neutral wind and temperature fields at altitude 130-300 km. The formulation of the VSH model is explained in detail, and some typical results obtained with a preliminary version (applicable to December solstice at solar maximum) are presented graphically. Good agreement with DE-2 satellite measurements is demonstrated.
Computer Modeling of Saltstone Landfills by Intera Environmental Consultants
Albenesius, E.L.
2001-08-09
This report summarizes the computer modeling studies and how their results were used to estimate contaminant releases to the groundwater. These modeling studies were used to improve saltstone landfill designs and are the basis for the current reference design. With the reference landfill design, EPA Drinking Water Standards can be met for all chemicals and radionuclides contained in Savannah River Plant waste salts.
Scattering Computations of Snow Aggregates from Simple Geometry Models
NASA Astrophysics Data System (ADS)
Liao, L.; Meneghini, R.; Nowell, H.; Liu, G.
2012-12-01
Accurately characterizing electromagnetic scattering from snow aggregates is one of the essential components in the development of algorithms for the GPM DPR and GMI. Recently, several realistic aggregate models have been developed using randomized procedures. Using pristine ice crystal habits found in nature as the basic elements of which the aggregates are made, more complex randomly aggregated structures can be formed to replicate snowflakes. For these particles, a numerical scheme is needed to compute the scattered fields. These computations, however, are usually time consuming and are often limited to a certain range of particle sizes and to a few frequencies. The scattering results at other frequencies and sizes are then obtained by either interpolation or extrapolation from nearby computed points (anchor points). Because of the nonlinear nature of the scattering, particularly in the particle resonance region, this sometimes leads to severe errors if the number of anchor points is not sufficiently large to cover the spectral domain and particle size range. As an alternative to these complex models, simple geometric models, such as the sphere and spheroid, are useful for radar and radiometer applications if their scattering results can be shown to closely approximate those from complex aggregate structures. A great advantage of the simple models is their computational efficiency, owing to the existence of analytical solutions, so that the computations can easily be extended to as many frequencies and particle sizes as desired. In this study, two simple models are tested. One approach is to use a snow mass density that is defined as the ratio of the mass of the snow aggregate to the volume, where the volume is taken to be that of a sphere with a diameter equal to the maximum measured dimension of the aggregate; i.e., the diameter of the circumscribing sphere. Because of the way in which the aggregates are generated, where a size-density relation is used, the
Computer model to simulate testing at the National Transonic Facility
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.
1995-01-01
A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.
Parallel Computation of the Regional Ocean Modeling System (ROMS)
Wang, P; Song, Y T; Chao, Y; Zhang, H
2005-04-05
The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
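The core of an MPI parallelization such as the one described is domain decomposition with ghost-cell (halo) exchange between neighboring ranks. The sketch below shows this pattern in a minimal 1D form using mpi4py; the field, stencil, and decomposition are toy placeholders, not ROMS code.

```python
from mpi4py import MPI
import numpy as np

# Illustrative sketch: 1D domain decomposition with halo exchange, the core
# communication pattern of MPI-parallel grid models.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                       # interior points owned by this rank
u = np.zeros(n_local + 2)           # +2 ghost (halo) cells
u[1:-1] = rank                      # fill interior with rank id (toy data)

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halos: send edge values to neighbors, receive into ghost cells.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# After the exchange each rank can apply a stencil to its interior points.
u[1:-1] = 0.5 * (u[:-2] + u[2:])    # e.g., a simple smoothing step
```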
A System Computational Model of Implicit Emotional Learning.
Puviani, Luca; Rama, Sidita
2016-01-01
Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unluckily, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based both on prediction errors computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotions modulation. PMID:27378898
Heart Modeling, Computational Physiology and the IUPS Physiome Project
NASA Astrophysics Data System (ADS)
Hunter, Peter J.
The Physiome Project of the International Union of Physiological Sciences (IUPS) is attempting to provide a comprehensive framework for modelling the human body using computational methods which can incorporate the biochemistry, biophysics and anatomy of cells, tissues and organs. A major goal of the project is to use computational modelling to analyse integrative biological function in terms of underlying structure and molecular mechanisms. To support that goal the project is developing XML markup languages (CellML & FieldML) for encoding models, and software tools for creating, visualizing and executing these models. It is also establishing web-accessible physiological databases dealing with model-related data at the cell, tissue, organ and organ system levels. Two major developments in current medicine are, on the one hand, the much publicised genomics (and soon proteomics) revolution and, on the other, the revolution in medical imaging in which the physiological function of the human body can be studied with a plethora of imaging devices such as MRI, CT, PET, ultrasound, electrical mapping, etc. The challenge for the Physiome Project is to link these two developments for an individual - to use complementary genomic and medical imaging data, together with computational modelling tailored to the anatomy, physiology and genetics of that individual, for patient-specific diagnosis and treatment.
Frequency response modeling and control of flexible structures: Computational methods
NASA Technical Reports Server (NTRS)
Bennett, William H.
1989-01-01
The dynamics of vibrations in flexible structures can be conveniently modeled in terms of frequency response models. For structural control such models capture the distributed-parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications, the irrational transfer functions which arise are of a special class of pseudo-meromorphic functions which have only a finite number of right half plane poles. Computational algorithms are demonstrated for the design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted, together with associated computational issues arising in optimal regulator design. Options for implementation of wide-band vibration control for flexible structures based on the sampled-data frequency response models are also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.
Computational oncology - mathematical modelling of drug regimens for precision medicine.
Barbolosi, Dominique; Ciccolini, Joseph; Lacarelle, Bruno; Barlési, Fabrice; André, Nicolas
2016-04-01
Computational oncology is a generic term that encompasses any form of computer-based modelling relating to tumour biology and cancer therapy. Mathematical modelling can be used to probe the pharmacokinetics and pharmacodynamics relationships of the available anticancer agents in order to improve treatment. As a result of the ever-growing numbers of druggable molecular targets and possible drug combinations, obtaining an optimal toxicity-efficacy balance is an increasingly complex task. Consequently, standard empirical approaches to optimizing drug dosing and scheduling in patients are now of limited utility; mathematical modelling can substantially advance this practice through improved rationalization of therapeutic strategies. The implementation of mathematical modelling tools is an emerging trend, but remains largely insufficient to meet clinical needs; at the bedside, anticancer drugs continue to be prescribed and administered according to standard schedules. To shift the therapeutic paradigm towards personalized care, precision medicine in oncology requires powerful new resources for both researchers and clinicians. Mathematical modelling is an attractive approach that could help to refine treatment modalities at all phases of research and development, and in routine patient care. Reviewing preclinical and clinical examples, we highlight the current achievements and limitations with regard to computational modelling of drug regimens, and discuss the potential future implementation of this strategy to achieve precision medicine in oncology. PMID:26598946
ERIC Educational Resources Information Center
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2007-01-01
At least 3 different types of computational model have been shown to account for various facets of both normal and impaired single word reading: (a) the connectionist triangle model, (b) the dual-route cascaded model, and (c) the connectionist dual process model. Major strengths and weaknesses of these models are identified. In the spirit of…
Flexing computational muscle: modeling and simulation of musculotendon dynamics.
Millard, Matthew; Uchida, Thomas; Seth, Ajay; Delp, Scott L
2013-02-01
Muscle-driven simulations of human and animal motion are widely used to complement physical experiments for studying movement dynamics. Musculotendon models are an essential component of muscle-driven simulations, yet neither the computational speed nor the biological accuracy of the simulated forces has been adequately evaluated. Here we compare the speed and accuracy of three musculotendon models: two with an elastic tendon (an equilibrium model and a damped equilibrium model) and one with a rigid tendon. Our simulation benchmarks demonstrate that the equilibrium and damped equilibrium models produce similar force profiles but have different computational speeds. At low activation, the damped equilibrium model is 29 times faster than the equilibrium model when using an explicit integrator and 3 times faster when using an implicit integrator; at high activation, the two models have similar simulation speeds. In the special case of simulating a muscle with a short tendon, the rigid-tendon model produces forces that match those generated by the elastic-tendon models, but simulates 2-54 times faster when an explicit integrator is used and 6-31 times faster when an implicit integrator is used. The equilibrium, damped equilibrium, and rigid-tendon models reproduce forces generated by maximally-activated biological muscle with mean absolute errors less than 8.9%, 8.9%, and 20.9% of the maximum isometric muscle force, respectively. When compared to forces generated by submaximally-activated biological muscle, the forces produced by the equilibrium, damped equilibrium, and rigid-tendon models have mean absolute errors less than 16.2%, 16.4%, and 18.5%, respectively. To encourage further development of musculotendon models, we provide implementations of each of these models in OpenSim version 3.1 and benchmark data online, enabling others to reproduce our results and test their models of musculotendon dynamics. PMID:23445050
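Of the three model classes compared, the rigid-tendon formulation is the simplest to sketch. The following toy implementation uses generic Hill-type curve shapes and invented constants; it is not the OpenSim 3.1 implementation the authors provide.

```python
import numpy as np

# Illustrative sketch of a rigid-tendon, Hill-type musculotendon force
# computation. Curve shapes and constants are generic textbook forms.
def active_force_length(lm_norm):          # bell-shaped curve, peak at 1.0
    return np.exp(-((lm_norm - 1.0) / 0.45) ** 2)

def passive_force_length(lm_norm):         # exponential passive stretch
    return np.where(lm_norm > 1.0,
                    0.5 * (np.exp(4.0 * (lm_norm - 1.0)) - 1.0), 0.0)

def force_velocity(vm_norm):               # crude linearized f-v relation
    return np.clip(1.0 - vm_norm, 0.0, 1.8)

def rigid_tendon_force(a, lmt, vmt, lts=0.2, lm0=0.1, fmax=1000.0):
    """Muscle force [N]; with a rigid tendon, fiber length = lmt - slack."""
    lm_norm = (lmt - lts) / lm0            # normalized fiber length
    vm_norm = vmt / (10.0 * lm0)           # normalized fiber velocity
    return fmax * (a * active_force_length(lm_norm) * force_velocity(vm_norm)
                   + passive_force_length(lm_norm))

print(rigid_tendon_force(a=0.5, lmt=0.31, vmt=0.0))
```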
ERIC Educational Resources Information Center
Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.
2010-01-01
As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…
Computational Modeling of Semiconductor Dynamics at Femtosecond Time Scales
NASA Technical Reports Server (NTRS)
Agrawal, Govind P.; Goorjian, Peter M.
1998-01-01
The Interchange No. NCC2-5149 deals with the emerging technology of photonic (or optoelectronic) integrated circuits (PICs or OEICs). In PICs, optical and electronic components are grown together on the same chip. To build such devices and subsystems, one needs to model the entire chip. PICs are useful for building components for integrated optical transmitters, integrated optical receivers, optical data storage systems, optical interconnects, and optical computers. For example, the current commercial rate for optical data transmission is 2.5 gigabits per second, whereas the use of shorter pulses to improve optical transmission rates would yield an increase of 400 to 1000 times. The improved optical data transmitters would be used in telecommunications networks and computer local-area networks. Also, these components can be applied to activities in space, such as satellite to satellite communications, when the data transmissions are made at optical frequencies. The research project consisted of developing accurate computer modeling of electromagnetic wave propagation in semiconductors. Such modeling is necessary for the successful development of PICs. More specifically, these computer codes would enable the modeling of such devices, including their subsystems, such as semiconductor lasers and semiconductor amplifiers in which there is femtosecond pulse propagation. Presently, there are no computer codes that could provide this modeling. Current codes do not solve the full vector, nonlinear, Maxwell's equations, which are required for these short pulses and also current codes do not solve the semiconductor Bloch equations, which are required to accurately describe the material's interaction with femtosecond pulses. The research performed under NCC2-5149 solves the combined Maxwell's and Bloch's equations.
Quantum computer simulation using the CUDA programming model
NASA Astrophysics Data System (ADS)
Gutiérrez, Eladio; Romero, Sergio; Trenas, María A.; Zapata, Emilio L.
2010-02-01
Quantum computing emerges as a field that captures great theoretical interest. Its simulation represents a problem with high memory and computational requirements, which makes the use of parallel platforms advisable. In this work we deal with the simulation of an ideal quantum computer on the Compute Unified Device Architecture (CUDA), as such a problem can benefit from the high computational capacity of Graphics Processing Units (GPU). After all, modern GPUs are becoming very powerful computational architectures, which is causing a growing interest in their application for general purposes. CUDA provides an execution model oriented towards a more general exploitation of the GPU, allowing its use as a massively parallel SIMT (Single-Instruction Multiple-Thread) multiprocessor. A simulator that takes into account memory reference locality issues is proposed, showing that the challenge of achieving high performance depends strongly on the explicit exploitation of the memory hierarchy. Several strategies have been experimentally evaluated, obtaining good performance results in comparison with conventional platforms.
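At its core, an ideal quantum-computer simulator repeatedly applies small unitary gates to a state vector of 2^n complex amplitudes; this is the operation such simulators map onto GPU threads. A minimal CPU-side sketch in NumPy (not the authors' CUDA simulator):

```python
import numpy as np

# Illustrative sketch: apply a single-qubit gate to an n-qubit state vector.
def apply_gate(state, gate, target, n_qubits):
    """Apply a 2x2 unitary to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n_qubits)          # one axis per qubit
    psi = np.moveaxis(psi, target, 0)            # bring target axis forward
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex); state[0] = 1.0   # |000>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
for q in range(n):                                # Hadamard on every qubit
    state = apply_gate(state, H, q, n)
print(np.round(state, 3))                         # uniform superposition
```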
Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation.
Mangado, Nerea; Ceresa, Mario; Duchateau, Nicolas; Kjer, Hans Martin; Vera, Sergio; Dejea Velardo, Hector; Mistrik, Pavel; Paulsen, Rasmus R; Fagertun, Jens; Noailly, Jérôme; Piella, Gemma; González Ballester, Miguel Ángel
2016-08-01
Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown on a total of 25 models of patients. In all cases, a final mesh suitable for finite element simulations was obtained, in an average time of 94 s. The framework has proven to be fast and robust, and is promising for a detailed prognosis of the cochlear implantation surgery. PMID:26715210
Material parameter computation for multi-layered vocal fold models
Schmidt, Bastian; Stingl, Michael; Leugering, Günter; Berry, David A.; Döllinger, Michael
2011-01-01
Today, the prevention and treatment of voice disorders is an ever-increasing health concern. Since many occupations rely on verbal communication, vocal health is necessary just to maintain one's livelihood. Commonly applied models for studying vocal fold vibrations and airflow distributions are self-sustained physical models of the larynx composed of artificial silicone vocal folds. Choosing appropriate mechanical parameters for these vocal fold models, while considering simplifications due to manufacturing restrictions, is difficult but crucial for achieving realistic behavior. In the present work, a combination of experimental and numerical approaches for computing material parameters for synthetic vocal fold models is presented. The material parameters are derived from the deformation behavior of excised human larynges. The resulting deformations are used as reference displacements for a tracking functional to be optimized. Material optimization was applied to three-dimensional vocal fold models based on isotropic and transverse-isotropic material laws, considering both a layered model with homogeneous material properties in each layer and an inhomogeneous model. The best results were obtained with a transverse-isotropic inhomogeneous (i.e., not manufacturable) model. For the homogeneous model (three layers), the transverse-isotropic material parameters were also computed for each layer, yielding deformations similar to the measured human vocal fold deformations. PMID:21476672
Regional Computation of TEC Using a Neural Network Model
NASA Astrophysics Data System (ADS)
Leandro, R. F.; Santos, M. C.
2004-05-01
One of the main sources of error in GPS measurements is ionospheric refraction. Because the ionosphere is a dispersive medium, its influence can be computed by using dual-frequency receivers. In the case of single-frequency receivers it is necessary to use models that indicate how large the ionospheric refraction is. The GPS broadcast message carries parameters of such a model, namely the Klobuchar model. Dual-frequency receivers allow the influence of the ionosphere on the GPS signal to be estimated through the computation of TEC (Total Electron Content) values, which have a direct relationship with the magnitude of the delay caused by the ionosphere. One alternative is to create a regional model based on a network of dual-frequency receivers. In this case, the regional behaviour of the ionosphere is modelled so that TEC values can be estimated in or near this region. Such a regional model can be based on polynomials, for example. In this work we present a Neural Network-based model for the regional computation of TEC. The advantage of using a Neural Network is that great prior knowledge of the behaviour of the modelled surface is not necessary, owing to the adaptation capability of the network training process, which iteratively adjusts the synaptic weights as a function of the residuals, using the training parameters. Nevertheless, previous knowledge of the modelled phenomena is important for defining what kind of, and how many, parameters are needed to train the neural network so that reasonable results are obtained from the estimations. We have used data from the GPS tracking network in Brazil, and we have tested the accuracy of the new model at all locations where there is a station, assessing the efficiency of the model everywhere. TEC values were computed for each station of the network. After that, the training data set for the test station was formed with the TEC values of all the others (all stations except the test one). The Neural Network was
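A minimal sketch of the regional neural-network idea using scikit-learn; the input features, network size, and synthetic TEC data are all assumptions for illustration, not the configuration used in the work.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative sketch of regional TEC modeling with a neural network.
# Features: station latitude, longitude, local time; target: TEC (TECU).
rng = np.random.default_rng(1)
X = rng.uniform([-30.0, -70.0, 0.0], [5.0, -35.0, 24.0], size=(400, 3))
tec = 20 + 15 * np.cos((X[:, 2] - 14) * np.pi / 12) + rng.normal(0, 2, 400)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X[:300], tec[:300])          # train, holding out a "test station"
print("held-out R^2:", net.score(X[300:], tec[300:]))
```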
Applying computer simulation models as learning tools in fishery management
Johnson, B.L.
1995-01-01
Computer models can be powerful tools for addressing many problems in fishery management, but uncertainty about how to apply models and how they should perform can lead to a cautious approach to modeling. Within this approach, we expect models to make quantitative predictions but only after all model inputs have been estimated from empirical data and after the model has been tested for agreement with an independent data set. I review the limitations to this approach and show how models can be more useful as tools for organizing data and concepts, learning about the system to be managed, and exploring management options. Fishery management requires deciding what actions to pursue to meet management objectives. Models do not make decisions for us but can provide valuable input to the decision-making process. When empirical data are lacking, preliminary modeling with parameters derived from other sources can help determine priorities for data collection. When evaluating models for management applications, we should attempt to define the conditions under which the model is a useful, analytical tool (its domain of applicability) and should focus on the decisions made using modeling results, rather than on quantitative model predictions. I describe an example of modeling used as a learning tool for the yellow perch Perca flavescens fishery in Green Bay, Lake Michigan.
Computational Modeling of Photonic Crystal Microcavity Single-Photon Emitters
NASA Astrophysics Data System (ADS)
Saulnier, Nicole A.
Conventional cryptography is based on algorithms that are mathematically complex and difficult to solve, such as factoring large numbers. The advent of a quantum computer would render these schemes useless. As scientists work to develop a quantum computer, cryptographers are developing new schemes for unconditionally secure cryptography. Quantum key distribution has emerged as one of the potential replacements of classical cryptography. It relies on the fact that measurement of a quantum bit changes the state of the bit and undetected eavesdropping is impossible. Single polarized photons can be used as the quantum bits, such that a quantum system would in some ways mirror the classical communication scheme. The quantum key distribution system would include components that create, transmit and detect single polarized photons. The focus of this work is on the development of an efficient single-photon source. This source is comprised of a single quantum dot inside of a photonic crystal microcavity. To better understand the physics behind the device, a computational model is developed. The model uses Finite-Difference Time-Domain methods to analyze the electromagnetic field distribution in photonic crystal microcavities. It uses an 8-band k · p perturbation theory to compute the energy band structure of the epitaxially grown quantum dots. We discuss a method that combines the results of these two calculations for determining the spontaneous emission lifetime of a quantum dot in bulk material or in a microcavity. The computational models developed in this thesis are used to identify and characterize microcavities for potential use in a single-photon source. The computational tools developed are also used to investigate novel photonic crystal microcavities that incorporate 1D distributed Bragg reflectors for vertical confinement. It is found that the spontaneous emission enhancement in the quasi-3D cavities can be significantly greater than in traditional suspended slab
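For readers unfamiliar with FDTD, the sketch below shows the method's core update loop, reduced to 1D free space with normalized units; actual photonic-crystal cavity simulations are 3D, with material structure and absorbing boundaries. This is a generic illustration, not the thesis code.

```python
import numpy as np

# Illustrative sketch of the Finite-Difference Time-Domain (FDTD) method in
# its simplest 1D, free-space, normalized-unit form (Yee leapfrog scheme).
nx, nt = 400, 800
Ez = np.zeros(nx)            # electric field samples
Hy = np.zeros(nx - 1)        # magnetic field, staggered half a cell
c = 0.5                      # Courant number (c0*dt/dx), <= 1 for stability

for t in range(nt):
    Hy += c * (Ez[1:] - Ez[:-1])                    # update H from curl of E
    Ez[1:-1] += c * (Hy[1:] - Hy[:-1])              # update E from curl of H
    Ez[nx // 4] += np.exp(-((t - 60) / 20.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(Ez).max())
```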
Recent and planned changes to the LHCb computing model
NASA Astrophysics Data System (ADS)
Cattaneo, M.; Charpentier, P.; Clarke, P.; Roiser, S.
2014-06-01
The LHCb experiment [1] has taken data between December 2009 and February 2013. The data taking conditions and trigger rate were adjusted several times during this period to make optimal use of the luminosity delivered by the LHC and to extend the physics potential of the experiment. By 2012, LHCb was taking data at twice the instantaneous luminosity and 2.5 times the high-level trigger rate originally foreseen. This represents a considerable increase in the amount of data which had to be handled compared to the original Computing Model from 2005, both in terms of compute power and in terms of storage. In this paper we describe the changes that have taken place in the LHCb computing model during the last 2 years of data taking to process and analyse the increased data rates within limited computing resources. In particular, a rather original change was introduced at the end of 2011, when LHCb started to use compute power that was not co-located with the RAW data for reprocessing, namely Tier2 sites and private resources. The flexibility of the LHCbDirac Grid interware allowed easy inclusion of these additional resources, which in 2012 provided 45% of the compute power for the end-of-year reprocessing. Several changes were also implemented in the Data Management model in order to limit the need for accessing data from tape, as well as in the data placement policy in order to cope with a large imbalance in storage resources at Tier1 sites. We also discuss changes that are being implemented during the LHC Long Shutdown 1 (LS1) to prepare for a further doubling of the data rate when the LHC restarts at a higher energy in 2015.
Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments
NASA Astrophysics Data System (ADS)
Vezer, M. A.
2010-12-01
Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg, 2009; Morgon 2002, 2003, 2005; Gula 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between
Icing simulation: A survey of computer models and experimental facilities
NASA Technical Reports Server (NTRS)
Potapczuk, M. G.; Reinmann, J. J.
1991-01-01
A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.
A tool for modeling concurrent real-time computation
NASA Technical Reports Server (NTRS)
Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.
1990-01-01
Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment) powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.
Computational Science Research in Support of Petascale Electromagnetic Modeling
Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC
2008-06-20
Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in the newly-funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented, which include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly-indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.
Human performance models for computer-aided engineering
NASA Technical Reports Server (NTRS)
Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)
1989-01-01
This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.
Programs For Modeling Fault-Tolerant Computing Systems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1991-01-01
Pade Approximation with Scaling (PAWS) and Scaling Taylor Exponential Matrix (STEM) computer programs are software tools for design and validation. They provide a flexible, user-friendly, language-based interface for input of Markov models describing behaviors of fault-tolerant computer systems. The Markov models include both recovery from faults via reconfiguration and behaviors of such systems when faults occur. PAWS and STEM produce exact solutions of the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Written in PASCAL and FORTRAN.
The Next Generation ARC Middleware and ATLAS Computing Model
NASA Astrophysics Data System (ADS)
Filipčič, Andrej; Cameron, David; Smirnova, Oxana; Konstantinov, Aleksandr; Karpenko, Dmytro
2012-12-01
The distributed NDGF Tier-1 and associated NorduGrid clusters are well integrated into the ATLAS computing environment but follow a slightly different paradigm than other ATLAS resources. The current paradigm does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new services for job control and data transfer. Integration of the ARC core into the EMI middleware provides a natural way to implement the new services using the ARC components.
Computational modeling of cardiac hemodynamics: Current status and future outlook
NASA Astrophysics Data System (ADS)
Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.
2016-01-01
The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms are enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics presents itself as a powerful non-invasive modality that can fill this information gap and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling, and simulation that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described by using examples from our own work as well as the research of other groups.
Modeling thermal skin burns on a personal computer.
Diller, K R
1998-01-01
Mathematic models have been used for several decades to predict the severity of burn injury that would result from application of a given thermal stress to the surface of the skin. Solution of the governing mathematic equations has been achieved either by analytic methods, with required simplifying assumptions that may compromise the rigor with which the results are applied, or by numeric methods, which require programming of finite element or finite difference codes in computer languages. In recent years microcomputer hardware and the associated software have become both powerful and relatively simple to use, and the price per unit of computing capability has dropped dramatically. Thus it is now possible to perform on a desktop machine, with relative ease, calculations that previously might have been prohibitively complex or expensive. Modeling of burn injuries fits into this category. This article presents a straightforward method for implementing a finite difference solution to the burn process through the combination of a Macintosh personal computer and a widely used spreadsheet software program; this hardware and software combination has been used widely for a broad spectrum of general computing activities. This article presents a model for a surface thermal burn, as implemented for solution on a spreadsheet, with example runs to illustrate and verify the method. PMID:9789178
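The finite difference scheme described here is simple enough to sketch outside a spreadsheet as well. Below is a rough Python rendering of the same idea, assuming an explicit 1D conduction update combined with the classic Henriques damage integral; the thermal constants, contact temperature, and grid are illustrative assumptions, not the article's values.

```python
import numpy as np

# Explicit 1D finite-difference conduction plus the Henriques damage integral.
alpha = 1.4e-7          # thermal diffusivity of skin, m^2/s (assumed)
dx, dt = 5e-5, 5e-3     # grid spacing (m), time step (s); dt < dx**2/(2*alpha) for stability
depth, t_end = 2e-3, 10.0
n = int(depth / dx) + 1
T = np.full(n, 310.15)              # uniform initial temperature, 37 C in kelvin
omega = np.zeros(n)                 # Henriques damage integral
A, dE_over_R = 3.1e98, 75000.0      # classic Henriques kinetics (literature values)

for _ in range(int(t_end / dt)):
    T[0] = 343.15                                        # 70 C surface contact
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    omega += dt * A * np.exp(-dE_over_R / T)             # accumulate thermal damage

# omega >= 1 is the usual threshold for irreversible injury.
print("approximate burn depth: %.2f mm" % (1e3 * dx * np.argmax(omega < 1.0)))
```

On a spreadsheet, each grid node becomes a column and each time step a row, with the same update formula copied across cells.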
Data visualization optimization via computational modeling of perception.
Pineo, Daniel; Ware, Colin
2012-02-01
We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had been previously hypothesized that the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization resulted in a LIC-like result. The implications in terms of the selection of primitives are discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and a method for quality control of display methods. PMID:21383402
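The optimization loop itself is plain hill climbing. A minimal sketch follows, with a stand-in scoring function where the paper's retina/V1 simulation would go; the parameter encoding is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def effectiveness(params):
    # Stand-in for the paper's metric: the real method renders the visualization
    # and scores it with the simulated retina/V1 response.
    return -np.sum((params - 0.5) ** 2)

params = rng.random(20)              # hypothetical streaklet parameter vector
score = effectiveness(params)
for _ in range(5000):
    candidate = params + rng.normal(scale=0.01, size=params.shape)
    cand_score = effectiveness(candidate)
    if cand_score > score:           # hill climbing: keep only improving moves
        params, score = candidate, cand_score
print("final score:", score)
```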
Images as drivers of progress in cardiac computational modelling
Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A.; Bishop, Martin J.; Schneider, Jürgen E.; Kohl, Peter; Grau, Vicente
2014-01-01
Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved. PMID:25117497
Phantoms and computational models in therapy, diagnosis and protection
Not Available
1992-01-01
The development of realistic body phantoms and computational models is strongly dependent on the availability of comprehensive human anatomical data. This information is often missing, incomplete or not easily available. Therefore, emphasis is given in the Report to organ and body masses and geometries. The influence of age, sex and ethnic origins in human anatomy is considered. Suggestions are given on how suitable anatomical data can be either extracted from published information or obtained from measurements on the local population. Existing types of phantoms and computational models used with photons, electrons, protons and neutrons are reviewed in this Report. Specifications of those considered important to the maintenance and development of reliable radiation dosimetry and measurement are given. The information provided includes a description of the phantom or model, together with diagrams or photographs and physical dimensions. The tissues within body sections are identified and the tissue substitutes used or recommended are listed. The uses of the phantom or model in radiation dosimetry and measurement are outlined. The Report deals predominantly with phantom and computational models representing the human anatomy, with a short Section devoted to animal phantoms in radiobiology.
NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling
NASA Technical Reports Server (NTRS)
Rumsey, C. L.; Lee-Rausch, E. M.
2012-01-01
Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.
Solving strongly correlated electron models on a quantum computer
NASA Astrophysics Data System (ADS)
Wecker, Dave; Hastings, Matthew B.; Wiebe, Nathan; Clark, Bryan K.; Nayak, Chetan; Troyer, Matthias
2015-12-01
One of the main applications of future quantum computers will be the simulation of quantum models. While the evolution of a quantum state under a Hamiltonian is straightforward (if sometimes expensive), using quantum computers to determine the ground-state phase diagram of a quantum model and the properties of its phases is more involved. Using the Hubbard model as a prototypical example, we here show all the steps necessary to determine its phase diagram and ground-state properties on a quantum computer. In particular, we discuss strategies for efficiently determining and preparing the ground state of the Hubbard model starting from various mean-field states with broken symmetry. We present an efficient procedure to prepare arbitrary Slater determinants as initial states and present the complete set of quantum circuits needed to evolve from these to the ground state of the Hubbard model. We show that, using efficient nesting of the various terms, each time step in the evolution can be performed with just O(N) gates and O(log N) circuit depth. We give explicit circuits to measure arbitrary local observables and static and dynamic correlation functions, in both the time and the frequency domains. We further present efficient nondestructive approaches to measurement that avoid the need to reprepare the ground state after each measurement and that quadratically reduce the measurement error.
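The time stepping referred to here is a Trotter product formula. Below is a generic numerical illustration of the first-order splitting and its error, using small dense matrices as stand-ins for the non-commuting hopping and interaction terms; this is our sketch, not the paper's Hubbard circuits.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)

def herm(n):
    # Random Hermitian matrix as a stand-in Hamiltonian term.
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A, B = herm(8), herm(8)            # "hopping" and "interaction" pieces of H = A + B
t, steps = 1.0, 100
dt = t / steps
exact = expm(-1j * (A + B) * t)
# First-order Trotter: alternate short evolutions under A and B.
trotter = np.linalg.matrix_power(expm(-1j * A * dt) @ expm(-1j * B * dt), steps)
print("operator-norm error:", np.linalg.norm(exact - trotter, 2))   # shrinks as O(dt)
```

Halving dt roughly halves the error, which is why the per-step gate count and depth bounds quoted in the abstract matter for the total circuit size.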
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
VLSI circuits implementing computational models of neocortical circuits.
Wijekoon, Jayawan H B; Dudek, Piotr
2012-09-15
This paper overviews the design and implementation of three neuromorphic integrated circuits developed for the COLAMN ("Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex") project. The circuits are implemented in a standard 0.35 μm CMOS technology and include spiking and bursting neuron models, and synapses with short-term (facilitating/depressing) and long-term (STDP and dopamine-modulated STDP) dynamics. They enable execution of complex nonlinear models in accelerated-time, as compared with biology, and with low power consumption. The neural dynamics are implemented using analogue circuit techniques, with digital asynchronous event-based input and output. The circuits provide configurable hardware blocks that can be used to simulate a variety of neural networks. The paper presents experimental results obtained from the fabricated devices, and discusses the advantages and disadvantages of the analogue circuit approach to computational neural modelling. PMID:22342970
Computational Modeling of Liquid and Gaseous Control Valves
NASA Technical Reports Server (NTRS)
Daines, Russell; Ahuja, Vineet; Hosangadi, Ashvin; Shipman, Jeremy; Moore, Arden; Sulyma, Peter
2005-01-01
In this paper computational modeling efforts undertaken at NASA Stennis Space Center in support of rocket engine component testing are discussed. Such analyses include structurally complex cryogenic liquid valves and gas valves operating at high pressures and flow rates. Basic modeling and initial successes are documented, and other issues that make valve modeling at SSC somewhat unique are also addressed. These include transient behavior, valve stall, and the determination of flow patterns in LOX valves. Hexahedral structured grids are used for valves that can be simplified through the use of an axisymmetric approximation. Hybrid unstructured methodology is used for structurally complex valves that have disparate length scales and complex flow paths that include strong swirl and local recirculation zones/secondary flow effects. Hexahedral (structured), unstructured, and hybrid meshes are compared for accuracy and computational efficiency. Accuracy is determined using verification and validation techniques.
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models are required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
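For scale, the kind of Markov reliability model ARM is meant to generate can be written by hand for a tiny system. Here is a toy continuous-time model of a triplex with repair, solved by matrix exponential; the rates are assumed and this is our illustration, not ARM output.

```python
import numpy as np
from scipy.linalg import expm

# States: [3 units good, 2 units good, system failed].
lam, mu = 1e-4, 1e-2                # failure and repair rates, per hour (assumed)
Q = np.array([[-3 * lam, 3 * lam, 0.0],
              [mu, -(mu + 2 * lam), 2 * lam],
              [0.0, 0.0, 0.0]])     # failure state is absorbing

p0 = np.array([1.0, 0.0, 0.0])      # start with all three units good
for t in (10.0, 100.0, 1000.0):
    p = p0 @ expm(Q * t)            # transient solution of dp/dt = p Q
    print(f"P(system failed by {t:6.0f} h) = {p[2]:.3e}")
```

Real PMS structures produce hundreds of states and transitions of exactly this form, which is what makes hand formulation error-prone and motivates ARM.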
Benchmarking computational fluid dynamics models for lava flow simulation
NASA Astrophysics Data System (ADS)
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi
2016-04-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.
flankr: An R package implementing computational models of attentional selectivity.
Grange, James A
2016-06-01
The Eriksen flanker task (Eriksen and Eriksen, Perception & Psychophysics, 16, 143-149, 1974) is a classic test in cognitive psychology of visual selective attention. Two recent computational models have formalised the dynamics of the apparent increasing attentional selectivity during stimulus processing, but with very different theoretical underpinnings: the shrinking spotlight (SSP) model (White et al., Cognitive Psychology, 210-238, 2011) assumes attentional selectivity improves in a gradual, continuous manner; the dual stage two phase (DSTP) model (Hübner et al., Psychological Review, 759-784, 2010) assumes attentional selectivity changes from a low- to a high-mode of selectivity at a discrete time-point. This paper presents an R package, flankr, that instantiates both computational models. flankr allows the user to simulate data from both models, and to fit each model to human data. flankr provides statistics of the goodness-of-fit to human data, allowing users to engage in competitive model comparison of the DSTP and the SSP models on their own data. It is hoped that the utility of flankr lies in allowing more researchers to engage in the important issue of the dynamics of attentional selectivity. PMID:26174713
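To give a flavor of what such models compute, here is a rough, self-contained sketch of the shrinking-spotlight idea (flankr itself is an R package; the parameter names and values here are our assumptions, not White et al.'s):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def simulate_ssp(congruent, n_trials=500, p=0.4, sd0=1.8, shrink=8.0,
                 a=1.0, dt=0.01, noise=1.0):
    # A Gaussian attention window over target and flankers narrows over time,
    # so flanker interference decays as evidence diffuses toward a boundary.
    sign = 1.0 if congruent else -1.0
    rts, accs = [], []
    for _ in range(n_trials):
        x, t, sd = 0.0, 0.0, sd0
        while abs(x) < a and t < 5.0:
            w = norm.cdf(0.5, 0, sd) - norm.cdf(-0.5, 0, sd)   # attention on target
            drift = p * w + sign * p * (1.0 - w)               # flankers help or hurt
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            sd = max(sd - shrink * dt, 0.05)                   # spotlight shrinks
            t += dt
        rts.append(t)
        accs.append(x >= a)
    return np.mean(rts), np.mean(accs)

print("congruent   (RT, acc):", simulate_ssp(True))
print("incongruent (RT, acc):", simulate_ssp(False))
```

Incongruent trials start with a negative drift that flips sign as the spotlight narrows, reproducing the slower, more error-prone responses the task is known for.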
A New Perspective for the Calibration of Computational Predictor Models.
Crespo, Luis Guillermo
2014-11-01
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
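For a model that is linear in its parameters, the IPM computation described here reduces to a linear program: find upper and lower parameter vectors whose envelope contains every observation while the average spread is minimal. A toy sketch on synthetic data follows; this is our own setup, not the paper's full formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 60)
y = 1.0 + 2.0 * x + 0.3 * rng.normal(size=x.size)   # synthetic "experiments"
phi = np.column_stack([np.ones_like(x), x])          # features [1, x]
n, k = phi.shape

# Decision vector z = [theta_upper, theta_lower]; minimize mean envelope spread.
c = np.concatenate([phi.mean(axis=0), -phi.mean(axis=0)])
A_ub = np.block([[-phi, np.zeros((n, k))],           # phi @ theta_upper >= y_i
                 [np.zeros((n, k)), phi]])           # phi @ theta_lower <= y_i
b_ub = np.concatenate([-y, y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * k))
theta_u, theta_l = res.x[:k], res.x[k:]
print("upper model:", theta_u, " lower model:", theta_l)
```

Because every observation is constrained to lie inside the envelope, the resulting interval is a direct, distribution-free statement about the data rather than a subjective belief.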
Digital characterization and preliminary computer modeling of hydrocarbon bearing sandstone
NASA Astrophysics Data System (ADS)
Latief, Fourier Dzar Eljabbar; Haq, Tedy Muslim
2014-03-01
With the advancement of three dimensional imaging technologies, especially μCT scanning systems, we have been able to obtain three-dimensional digital representations of porous rocks at the scale of micrometers. Characterization has then also been possible to conduct using a computational approach. Hydrocarbon bearing sandstone has become one of the most interesting objects to analyze in the last decade. In this research, we performed digital characterization of a hydrocarbon bearing sandstone reservoir from Sumatra. The sample was digitized using a μCT scanner (Skyscan 1173), which produced a series of reconstructed images with a spatial resolution of 15 μm. Using computational approaches, i.e., image processing, image analysis, and simulation of fluid flow inside the rock using the Lattice Boltzmann Method, we obtained the porosity of the sandstone, which is 23.89%, and the permeability, which is 9382 mD. Based on visual inspection, the porosity value, and the calculated specific surface area, we produced a preliminary computer model of the rock using a grain-based method. This method employs a reconstruction of grains using a non-spherical model and a purely random deposition of the grains in a virtual three dimensional cube with a size of 300 × 300 × 300. The model has a porosity of 23.96%, and its permeability is 7215 mD. While the error in the porosity is very small (only 0.3%), the permeability has an error of around 23% relative to the real sample, which is considered very significant. This suggests that modeling based on porosity and specific surface area alone is not sufficient to produce a representative model. However, this work is a good example of how characterization and modeling of porous rock can be conducted using a non-destructive computational approach.
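The porosity and specific-surface-area step of such a digital characterization is straightforward once the volume is segmented. A sketch on a random stand-in volume follows; a real analysis would use the reconstructed, segmented Skyscan stack rather than noise.

```python
import numpy as np

# Porosity and a crude specific surface area from a segmented volume
# (0 = pore, 1 = grain). A random field stands in for the real CT stack.
rng = np.random.default_rng(3)
volume = (rng.random((300, 300, 300)) < 0.76).astype(np.int8)   # ~24% pore space

porosity = 1.0 - volume.mean()
print(f"porosity = {porosity:.2%}")

dx = 15e-6   # voxel edge length, matching the scan resolution of 15 um
# Count pore/grain face transitions along each axis to estimate interface area.
faces = sum(int(np.abs(np.diff(volume, axis=ax)).sum()) for ax in range(3))
ssa = faces * dx**2 / (volume.size * dx**3)   # interface area per unit volume, 1/m
print(f"specific surface area ~ {ssa:.3e} 1/m")
```

The permeability step is where most of the cost lies: the Lattice Boltzmann simulation resolves the actual flow paths, which is exactly the information a porosity-and-surface-area summary discards, consistent with the 23% permeability error reported above.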
Modeling the complete Otto cycle: Preliminary version. [computer programming
NASA Technical Reports Server (NTRS)
Zeleznik, F. J.; Mcbride, B. J.
1977-01-01
A description is given of the equations and the computer program being developed to model the complete Otto cycle. The program incorporates such important features as: (1) heat transfer, (2) finite combustion rates, (3) complete chemical kinetics in the burned gas, (4) exhaust gas recirculation, and (5) manifold vacuum or supercharging. Changes in thermodynamic, kinetic and transport data as well as model parameters can be made without reprogramming. Preliminary calculations indicate that: (1) chemistry and heat transfer significantly affect composition and performance, (2) there seems to be a strong interaction among model parameters, and (3) a number of cycles must be calculated in order to obtain steady-state conditions.
Group theory and biomolecular conformation: I. Mathematical and computational models
Chirikjian, Gregory S
2010-01-01
Biological macromolecules, and the complexes that they form, can be described in a variety of ways ranging from quantum mechanical and atomic chemical models, to coarser grained models of secondary structure and domains, to continuum models. At each of these levels, group theory can be used to describe both geometric symmetries and conformational motion. In this survey, a detailed account is provided of how group theory has been applied across computational structural biology to analyze the conformational shape and motion of macromolecules and complexes. PMID:20827378
Fitting Computational Models to fMRI Data
Ashby, F. Gregory; Waldschmidt, Jennifer G.
2008-01-01
Many computational models in psychology predict how neural activation in specific brain regions should change during certain cognitive tasks. The emergence of fMRI as a research tool provides an ideal vehicle to test these predictions. Before such tests are possible, however, significant methodological problems must be solved. These problems include transforming the neural activations predicted by the model into predicted BOLD responses, identifying the voxels within each region of interest against which to test the model, and comparing the observed and predicted BOLD responses in each of these regions. Methods are described for solving each of these problems. PMID:18697666
A computational study of discrete mechanical tissue models.
Pathmanathan, P; Cooper, J; Fletcher, A; Mirams, G; Murray, P; Osborne, J; Pitt-Francis, J; Walter, A; Chapman, S J
2009-01-01
A computational study of discrete 'cell-centre' approaches to modelling the evolution of a collection of cells is undertaken. The study focuses on the mechanical aspects of the tissue, in order to separate the passive mechanical response of the model from active effects such as cell-growth and cell division. Issues which arise when implementing these models are described, and a series of numerical mechanical experiments is performed. It is shown that discrete tissues modelled this way typically exhibit elastic-plastic behaviour under slow compression, and act as a brittle linear elastic solid under slow tension. Both overlapping spheres and Voronoi-tessellation-based models are examined, and the effect of different cell-cell interaction force laws on the bulk mechanical properties of the tissue is determined. This correspondence allows parameters in the cell-based model to be chosen to be compatible with bulk tissue measurements. PMID:19369704
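The mechanical core of an overlapping-spheres, cell-centre model is a pairwise force law integrated with overdamped dynamics. A minimal 2D sketch with a linear spring law follows; the parameters and setup are illustrative only, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(5)
pos = rng.random((30, 2)) * 5.0            # cell centres
rest, k, eta, dt = 1.0, 5.0, 1.0, 0.005    # rest separation, stiffness, drag, step

for _ in range(2000):
    force = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                   # vectors from cell i to every cell
        r = np.linalg.norm(d, axis=1)
        near = (r > 0) & (r < 1.5 * rest)  # interaction cutoff, excluding self
        # Linear spring: repulsive when overlapping, attractive when stretched.
        f = k * (r[near] - rest)[:, None] * d[near] / r[near][:, None]
        force[i] = f.sum(axis=0)
    pos += dt * force / eta                # overdamped update (no inertia)

nn = np.sort(np.linalg.norm(pos[:, None] - pos[None], axis=-1), axis=1)[:, 1]
print("mean nearest-neighbour distance:", nn.mean())
```

Swapping the spring for other interaction laws, or the centre positions for a Voronoi tessellation, changes the bulk elastic and plastic response, which is precisely the correspondence the study quantifies.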
A Computational and Mathematical Model for Device Induced Thrombosis
NASA Astrophysics Data System (ADS)
Wu, Wei-Tao; Aubry, Nadine; Massoudi, Mehrdad; Antaki, James
2015-11-01
Based on Sorenson's model of thrombus formation, a new mathematical model describing the process of thrombus growth is developed. In this model the blood is treated as a Newtonian fluid, and the transport and reactions of the chemical and biological species are modeled using CRD (convection-reaction-diffusion) equations. A computational fluid dynamic (CFD) solver for the mathematical model is developed using the libraries of OpenFOAM. Applying the CFD solver, several representative benchmark problems are studied: rapid thrombus growth in vivo by injecting adenosine diphosphate (ADP) using an iontophoretic method, and thrombus growth in a rectangular microchannel with a crevice, which usually appears at a joint between components of devices and often becomes a nidus of thrombosis. Very good agreement between the numerical and the experimental results validates the model and indicates its potential for studying a host of complex and practical problems in the future, such as thrombosis in blood pumps and artificial lungs.
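As a concrete illustration of the CRD transport step (not the OpenFOAM solver itself), here is a 1D explicit scheme for a single species such as ADP, with made-up coefficients:

```python
import numpy as np

nx, L = 200, 1e-3                 # grid points, channel length (m)
dx = L / (nx - 1)
D, u, kr = 2.4e-10, 1e-4, 0.05    # diffusivity (m^2/s), velocity (m/s), uptake rate (1/s)
dt = 0.4 * min(dx**2 / (2 * D), dx / u)   # respect diffusion and CFL limits

c = np.zeros(nx)
c[0] = 1.0                        # normalized inlet concentration
for _ in range(20000):
    adv = -u * (c[1:-1] - c[:-2]) / dx                  # upwind convection
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2    # diffusion
    c[1:-1] += dt * (adv + dif - kr * c[1:-1])          # reaction = first-order uptake
    c[-1] = c[-2]                 # zero-gradient outflow boundary

print("outlet concentration:", c[-1])
```

The full model couples several such equations (platelets, agonists, inhibitors) to the flow field, with reaction terms switching thrombus growth on where deposition criteria are met.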
KU-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Griffin, J. W.
1980-01-01
The preparation of a real time computer simulation model of the KU-band rendezvous radar to be integrated into the shuttle mission simulator (SMS), the shuttle engineering simulator (SES), and the shuttle avionics integration laboratory (SAIL) simulator is described. To meet crew training requirements, a radar tracking performance model and a target modeling method were developed. The parent simulation/radar simulation interface requirements and the method selected to model target scattering properties, including an application of this method to the SPAS spacecraft, are described. The radar search and acquisition mode performance model and the radar track mode signal processor model are examined and analyzed. The angle, angle rate, range, and range rate tracking loops are also discussed.
A Framework for Understanding Physics Students' Computational Modeling Practices
NASA Astrophysics Data System (ADS)
Lunk, Brandon Robert
With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by
Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models
Williams, Brian J.; Marcy, Peter W.
2012-06-07
We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
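The key computational point is that the Bohman correlation has compact support, so covariance matrices become sparse. A sketch of the correlation function and the sparsity it induces follows; the derivative-enhanced covariance blocks of the actual method, obtained by differentiating the kernel, are omitted here.

```python
import numpy as np

def bohman(d, tau):
    # Bohman compactly supported correlation: exactly zero for |d| >= tau.
    t = np.minimum(np.abs(d) / tau, 1.0)
    val = (1 - t) * np.cos(np.pi * t) + np.sin(np.pi * t) / np.pi
    return np.where(np.abs(d) < tau, val, 0.0)

x = np.linspace(0.0, 10.0, 200)            # 1D input sites for illustration
D = np.abs(x[:, None] - x[None, :])        # pairwise distances
K = bohman(D, tau=1.5)                     # sparse covariance from compact support
print(f"exactly-zero entries: {(K == 0).mean():.1%}")
```

Sparse covariance matrices admit sparse Cholesky factorizations, which is what makes adding the extra derivative observations affordable in higher dimensions.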
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code re-usability difficult, and increases software complexity.
Human operator identification model and related computer programs
NASA Technical Reports Server (NTRS)
Kessler, K. M.; Mohr, J. N.
1978-01-01
Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to the frequency-domain transfer-function system representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.
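The TF program's core conversion is standard and easy to reproduce with modern tools. A sketch using SciPy on a hypothetical second-order operator model (not the original implementation):

```python
import numpy as np
from scipy.signal import ss2tf

# State-space to transfer-function conversion for an assumed second-order
# operator model x' = Ax + Bu, y = Cx + Du.
A = np.array([[0.0, 1.0], [-4.0, -0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print("numerator:", num)      # H(s) = num(s) / den(s)
print("denominator:", den)
```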
Computational Tools To Model Halogen Bonds in Medicinal Chemistry.
Ford, Melissa Coates; Ho, P Shing
2016-03-10
The use of halogens in therapeutics dates back to the earliest days of medicine when seaweed was used as a source of iodine to treat goiters. The incorporation of halogens to improve the potency of drugs is now fairly standard in medicinal chemistry. In the past decade, halogens have been recognized as direct participants in defining the affinity of inhibitors through a noncovalent interaction called the halogen bond or X-bond. Incorporating X-bonding into structure-based drug design requires computational models for the anisotropic distribution of charge and the nonspherical shape of halogens, which lead to their highly directional geometries and stabilizing energies. We review here current successes and challenges in developing computational methods to introduce X-bonding into lead compound discovery and optimization during drug development. This fast-growing field will push further development of more accurate and efficient computational tools to accelerate the exploitation of halogens in medicinal chemistry. PMID:26465079
Uses of Computer Simulation Models in Ag-Research and Everyday Life
Technology Transfer Automated Retrieval System (TEKTRAN)
When the news media talks about models they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...
COMPUTER MODEL AND SIMULATION OF A GLOVE BOX PROCESS
C. FOSTER; ET AL
2001-01-01
The development of facilities to deal with the disposition of nuclear materials at an acceptable level of Occupational Radiation Exposure (ORE) is a significant issue facing the nuclear community. One solution is to minimize the worker's exposure through the use of automated systems. However, the adoption of automated systems for these tasks is hampered by the challenging requirements that these systems must meet in order to be cost effective solutions in the hazardous nuclear materials processing environment. Retrofitting current glove box technologies with automation systems represents a potential near-term technology that can be applied to reduce worker ORE associated with work in nuclear materials processing facilities. Successful deployment of automation systems for these applications requires the development of testing and deployment strategies to ensure the highest level of safety and effectiveness. Historically, safety tests are conducted with glove box mock-ups around the finished design. This late detection of problems leads to expensive redesigns and costly deployment delays. With the widespread availability of computers and cost-effective simulation software, it is possible to discover and fix problems early in the design stages. Computer simulators can easily create a complete model of the system, allowing a safe medium for testing potential failures and design shortcomings. The majority of design specification is now done on computer, and moving that information to a model is relatively straightforward. With a complete model and results from a Failure Mode Effect Analysis (FMEA), redesigns can be worked early. Additional issues such as user accessibility, component replacement, and alignment problems can be tackled early in the virtual environment provided by computer simulation. In this case, a commercial simulation package is used to simulate a lathe process operation at the Los Alamos National Laboratory (LANL). The Lathe process operation is indicative of
Lack of confidence in approximate Bayesian computation model choice.
Robert, Christian P; Cornuet, Jean-Marie; Marin, Jean-Michel; Pillai, Natesh S
2011-09-13
Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models. Grelaud et al. [(2009) Bayesian Anal 3:427-442] advocated the use of ABC for model choice in the specific case of Gibbs random fields, relying on an intermodel sufficiency property to show that the approximation was legitimate. We implemented ABC model choice in a wide range of phylogenetic models in the Do It Yourself-ABC (DIY-ABC) software [Cornuet et al. (2008) Bioinformatics 24:2713-2719]. We now present arguments as to why the theoretical arguments for ABC model choice are missing, because the algorithm involves an unknown loss of information induced by the use of insufficient summary statistics. The approximation error of the posterior probabilities of the models under comparison may thus be unrelated to the computational effort spent in running an ABC algorithm. We then conclude that additional empirical verifications of the performances of the ABC procedure, such as those available in DIY-ABC, are necessary to conduct model choice. PMID:21876135
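The mechanics of ABC model choice are easy to sketch, and a toy version even reproduces the warning above: when the summary statistic is insufficient (here, a sample mean used to choose between two models differing only in variance), the ABC model posterior stays near 1/2 no matter how many simulations are run. Everything below is our illustration, not the DIY-ABC implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
y_obs = rng.normal(0.5, 1.0, size=100)      # "observed" data
s_obs = y_obs.mean()                         # summary statistic: the sample mean

accepted = []
for _ in range(20000):
    m = rng.integers(2)                      # model index, uniform prior
    theta = rng.normal(0, 2)                 # parameter prior
    sigma = 1.0 if m == 0 else 2.0           # the two candidate models
    y = rng.normal(theta, sigma, size=100)
    if abs(y.mean() - s_obs) < 0.05:         # accept if summaries are close
        accepted.append(m)

accepted = np.array(accepted)
print("accepted simulations:", accepted.size)
print("ABC estimate of P(model 0 | data):", (accepted == 0).mean())
```

Because the mean carries no information about the variance, the accepted-model frequencies hover around 0.5 even though the data clearly favor the unit-variance model: exactly the information loss the authors flag.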
Toward diagnostic model calibration and evaluation: Approximate Bayesian computation
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.; Sadegh, Mojtaba
2013-07-01
The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
Modeling the behavior of the computer-assisted instruction user
Stoddard, M.L.
1983-01-01
The field of computer-assisted instruction (CAI) contains abundant studies on the effectiveness of particular programs or systems. However, the nature of the field is such that the computer is the focus of research, not the users. Few research studies have focused on the behavior of the individual CAI user. Morgan (1981) stated that descriptive studies are needed to clarify what the important phenomena of user behavior are. The need for such studies is particularly acute in computer-assisted instruction. Building a behavioral model would enable us to understand problem-solving strategies and rules applied by the user during a CAI experience. Also, courseware developers could use this information to design tutoring systems that are more responsive to individual differences than our present CAI is. This paper proposes a naturalistic model for evaluating both affective and cognitive characteristics of the CAI user. It begins with a discussion of features of user behavior, followed by a description of an evaluation methodology that can lead to modeling user behavior. The paper concludes with a discussion of how implementation of this model can contribute to the fields of CAI and cognitive psychology.
In vivo porcine left atrial wall stress: Computational model.
Di Martino, Elena S; Bellini, Chiara; Schwartzman, David S
2011-10-13
Most computational models of the heart have so far concentrated on the study of the left ventricle, mainly using simplified geometries. The same approach cannot be adopted to model the left atrium, whose irregular shape does not allow morphological simplifications. In addition, the deformation of the left atrium during the cardiac cycle strongly depends on the interaction with its surrounding structures. We present a procedure to generate a comprehensive computational model of the left atrium, including physiological loads (blood pressure), boundary conditions (pericardium, pulmonary veins and mitral valve annulus movement) and mechanical properties based on planar biaxial experiments. The model was able to accurately reproduce the in vivo dynamics of the left atrium during the passive portion of the cardiac cycle. A shift in time between the peak pressure and the maximum displacement of the mitral valve annulus allows the appendage to inflate and bend towards the ventricle before the pulling effect associated with the ventricle contraction takes place. The ventricular systole creates room for further expansion of the appendage, which gets in close contact with the pericardium. The temporal evolution of the volume in the atrial cavity as predicted by the finite element simulation matches the volume changes obtained from CT scans. The stress field computed at each time point shows remarkable spatial heterogeneity. In particular, high stress concentration occurs along the appendage rim and in the region surrounding the pulmonary veins. PMID:21907340
Computational Systems Biology in Cancer: Modeling Methods and Applications
Materi, Wayne; Wishart, David S.
2007-01-01
In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly-used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy. PMID:19936081
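As a minimal example of the ODE end of this modeling spectrum, here is a logistic tumor-growth model with a simple first-order therapy term; all rates are illustrative assumptions, not values from the review.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Logistic tumor growth with first-order therapy kill (assumed rates).
r, K, kill = 0.3, 1e9, 0.1      # growth rate (1/day), carrying capacity, kill rate (1/day)

def tumor(t, n):
    return [r * n[0] * (1.0 - n[0] / K) - kill * n[0]]

sol = solve_ivp(tumor, (0.0, 120.0), [1e6])
print("cells at day 120: %.3e" % sol.y[0, -1])
```

Agent-based and hybrid formalisms trade the analytic convenience of such equations for the ability to resolve individual-cell heterogeneity and spatial structure.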
The Importance of Formalizing Computational Models of Face Adaptation Aftereffects
Ross, David A.; Palmeri, Thomas J.
2016-01-01
Face adaptation is widely used as a means to probe the neural representations that support face recognition. While the theories that relate face adaptation to behavioral aftereffects may seem conceptually simple, our work has shown that testing computational instantiations of these theories can lead to unexpected results. Instantiating a model of face adaptation not only requires specifying how faces are represented and how adaptation shapes those representations but also specifying how decisions are made, translating hidden representational states into observed responses. Considering the high-dimensionality of face representations, the parallel activation of multiple representations, and the non-linearity of activation functions and decision mechanisms, intuitions alone are unlikely to succeed. If the goal is to understand mechanism, not simply to examine the boundaries of a behavioral phenomenon or correlate behavior with brain activity, then formal computational modeling must be a component of theory testing. To illustrate, we highlight our recent computational modeling of face adaptation aftereffects and discuss how models can be used to understand the mechanisms by which faces are recognized. PMID:27378960
The origins of computer weather prediction and climate modeling
Lynch, Peter
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
Computational Modeling of Flow-Altering Surgeries in Basilar Aneurysms
Rayz, V. L.; Abla, A.; Boussel, L.; Leach, J. R.; Acevedo-Bolton, G.; Saloner, D.; Lawton, M. T.
2014-01-01
In cases where surgeons consider different interventional options for flow alterations in the setting of pathological basilar artery hemodynamics, a virtual model demonstrating the flow fields resulting from each of these options can assist in making clinical decisions. In this study, image-based computational fluid dynamics (CFD) models were used to simulate the flow in four basilar artery aneurysms in order to evaluate postoperative hemodynamics that would result from flow-altering interventions. Patient-specific geometries were constructed using MR angiography and velocimetry data. CFD simulations carried out for the preoperative flow conditions were compared to in vivo phase-contrast MRI measurements (4D Flow MRI) acquired prior to the interventions. The models were then modified according to the procedures considered for each patient. Numerical simulations of the flow and virtual contrast transport were carried out in each case in order to assess postoperative flow fields and estimate the likelihood of intra-aneurysmal thrombus deposition following the procedures. Postoperative imaging data, when available, were used to validate computational predictions. In two cases, where the aneurysms involved vital pontine perforator arteries branching from the basilar artery, idealized geometries of these vessels were incorporated into the CFD models. The effect of interventions on the flow through the perforators was evaluated by simulating the transport of contrast in these vessels. The computational results were in close agreement with the MR imaging data. In some cases, CFD simulations could help determine which of the surgical options was likely to reduce the flow into the aneurysm while preserving the flow through the basilar trunk. The study demonstrated that image-based computational modeling can provide guidance to clinicians by indicating possible postoperative complications and the expected potential of each option for ameliorating pathological aneurysmal flow prior to intervention.
Using Computational and Mechanical Models to Study Animal Locomotion
Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas
2012-01-01
Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026
MMA, A Computer Code for Multi-Model Analysis
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than the other methods do, which makes sense in many situations.
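To make the default criteria concrete, here is a minimal Python sketch (not the MMA code itself) of the common least-squares forms of AIC, AICc, and BIC, and of the usual conversion of criterion values into posterior-style model weights; all names are illustrative, and KIC is omitted because it additionally requires the Fisher information matrix.

    import math

    def information_criteria(sswr, n, k):
        """Least-squares forms of the discrimination criteria, with sswr
        the sum of squared weighted residuals of a calibrated model,
        n the number of observations, and k the number of estimated
        parameters."""
        aic = n * math.log(sswr / n) + 2 * k
        aicc = aic + (2 * k * (k + 1)) / (n - k - 1)  # small-sample correction
        bic = n * math.log(sswr / n) + k * math.log(n)
        return {"AIC": aic, "AICc": aicc, "BIC": bic}

    def model_weights(criteria):
        """Posterior-style model weights from one criterion evaluated on
        alternative models, via the usual exp(-delta/2) normalization."""
        best = min(criteria)
        raw = [math.exp(-0.5 * (c - best)) for c in criteria]
        return [r / sum(raw) for r in raw]

The resulting weights are what a multi-model analysis would use for model ranking and for model-averaged parameter estimates and predictions.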
Model Selection in Historical Research Using Approximate Bayesian Computation
Rubio-Campillo, Xavier
2016-01-01
Formal Models and History: Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study: This work examines an alternative approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester's laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact: Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
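The two ingredients of the case study, Lanchester-style combat equations and ABC rejection sampling, can be sketched in a few lines of Python. This is an illustration under assumed priors, summary statistics, and tolerance, not the authors' implementation; the fatigue variant and the Bayes-factor computation are omitted.

    import numpy as np

    def lanchester_square(r0, b0, kr, kb, dt=0.01, steps=20000):
        """Lanchester's square law: each side's attrition rate is
        proportional to the opposing side's current strength. Runs to
        annihilation or a fixed horizon."""
        R, B = float(r0), float(b0)
        for _ in range(steps):
            R, B = R - dt * kb * B, B - dt * kr * R
            if R <= 0.0 or B <= 0.0:
                break
        return max(R, 0.0), max(B, 0.0)

    def abc_rejection(observed, prior_draw, simulate, distance, eps, n=20000):
        """Generic ABC rejection sampler: keep parameter draws whose
        simulated summary lies within eps of the observed summary."""
        return [theta for theta in (prior_draw() for _ in range(n))
                if distance(simulate(theta), observed) < eps]

    # Hypothetical example: infer the two effectiveness coefficients from
    # the surviving strengths of a battle with known initial sizes.
    rng = np.random.default_rng(0)
    observed = lanchester_square(1000, 800, kr=0.01, kb=0.008)
    posterior = abc_rejection(
        observed,
        prior_draw=lambda: rng.uniform(0.001, 0.02, size=2),
        simulate=lambda th: lanchester_square(1000, 800, kr=th[0], kb=th[1]),
        distance=lambda a, b: float(np.hypot(a[0] - b[0], a[1] - b[1])),
        eps=25.0,
    )

Bayes factors between model variants can then be approximated, for example, by the ratio of acceptance rates obtained under each model at the same tolerance.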
MaRIE theory, modeling and computation roadmap executive summary
Lookman, Turab
2010-01-01
The confluence of MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained optimization of hardware and software, and experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail for each of these areas the current state of the art, the gaps that exist and the road map to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar science areas and highlights the key theoretical and experimental challenges. We also present a theory, modeling and computation roadmap of the path to and beyond MaRIE in each of the science areas.
A Sensorimotor Model for Computing Intended Reach Trajectories.
Üstün, Cevat
2016-03-01
The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates for producing motor output. However, the interpretation of neurophysiological data within this framework is ambiguous, and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories, in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with those observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, and not desired movement endpoints. PMID:26985662
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
NASA Astrophysics Data System (ADS)
Pallant, Amy; Lee, Hee-Sun
2015-04-01
Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation tasks with three increasingly complex dynamic climate models. Each scientific argumentation task consisted of four parts: multiple-choice claim, open-ended explanation, five-point Likert scale uncertainty rating, and open-ended uncertainty rationale. We coded 1,294 scientific arguments in terms of a claim's consistency with current scientific consensus, whether explanations were model-based or knowledge-based, and categorized the sources of uncertainty (personal vs. scientific). We used chi-square and ANOVA tests to identify significant patterns. Results indicate that (1) a majority of students incorporated models as evidence to support their claims, (2) most students used model output results shown on graphs to confirm their claim rather than to explain simulated molecular processes, (3) students' dependence on model results and their uncertainty rating diminished as the dynamic climate models became more and more complex, (4) some students' misconceptions interfered with observing and interpreting model results or simulated processes, and (5) students' uncertainty sources reflected more frequently on their assessment of personal knowledge or abilities related to the tasks than on their critical examination of scientific evidence resulting from models. These findings have implications for teaching and research related to the integration of scientific argumentation and modeling practices to address complex Earth systems.
Computational Modeling as a Design Tool in Microelectronics Manufacturing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Plans to introduce pilot lines or fabs for 300 mm processing are in progress. The IC technology is simultaneously moving towards 0.25/0.18 micron. The convergence of these two trends places unprecedented stringent demands on processes and equipment. More than ever, computational modeling is called upon to play a complementary role in equipment and process design. The pace in hardware/process development needs a matching pace in software development: an aggressive move towards developing "virtual reactors" is desirable and essential to reduce design cycle and costs. This goal has three elements: reactor scale model, feature level model, and database of physical/chemical properties. With these elements coupled, the complete model should function as a design aid in a CAD environment. This talk aims to describe the various elements. At the reactor level, continuum, DSMC (or particle) and hybrid models will be discussed and compared using examples of plasma and thermal process simulations. In microtopography evolution, approaches such as level set methods compete with conventional geometric models. Regardless of the approach, the reliance on empiricism is to be eliminated through coupling to reactor model and computational surface science. This coupling poses challenging issues of orders of magnitude variation in length and time scales. Finally, database development has fallen behind; the current situation is rapidly aggravated by the ever newer chemistries emerging to meet process metrics. The virtual reactor would be a useless concept without an accompanying reliable database that consists of: thermal reaction pathways and rate constants, electron-molecule cross sections, thermochemical properties, transport properties, and finally, surface data on the interaction of radicals, atoms and ions with various surfaces. Large scale computational chemistry efforts are critical as experiments alone cannot meet database needs due to the difficulties associated with such measurements.
A computational model for doctoring fluid films in gravure printing
NASA Astrophysics Data System (ADS)
Hariprasad, Daniel S.; Grau, Gerd; Schunk, P. Randall; Tjiptowidjojo, Kristianto
2016-04-01
The wiping, or doctoring, process in gravure printing presents a fundamental barrier to resolving the micron-sized features desired in printed electronics applications. This barrier starts with the residual fluid film left behind after wiping, and its importance grows as feature sizes are reduced, especially as the feature size approaches the thickness of the residual fluid film. In this work, various mechanical complexities are considered in a computational model developed to predict the residual fluid film thickness. Lubrication models alone are inadequate, and deformation of the doctor blade body together with elastohydrodynamic lubrication must be considered to make the model predictive of experimental trends. Moreover, model results demonstrate that the particular form of the wetted region of the blade has a significant impact on the model's ability to reproduce experimental measurements.
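As background to the statement that lubrication models alone are inadequate, the following sketch shows the kind of 1D Reynolds lubrication solve that would form the starting point of such a model. Blade deformation and the elastohydrodynamic coupling the authors found essential are not included, and the geometry and fluid values are hypothetical.

    import numpy as np

    def reynolds_1d(x, h, mu, U):
        """Steady 1D Reynolds lubrication equation,
        d/dx(h^3 dp/dx) = 6*mu*U*dh/dx, for the pressure in the wetted
        region under the blade, with ambient pressure (p = 0) at both
        ends. h is the gap profile, mu the viscosity, U the web speed."""
        n, dx = len(x), x[1] - x[0]
        h3 = h ** 3
        A = np.zeros((n, n))
        rhs = np.zeros(n)
        for i in range(1, n - 1):
            e = 0.5 * (h3[i] + h3[i + 1])   # h^3 at the east half-node
            w = 0.5 * (h3[i] + h3[i - 1])   # h^3 at the west half-node
            A[i, i - 1], A[i, i], A[i, i + 1] = w, -(e + w), e
            rhs[i] = 3.0 * mu * U * (h[i + 1] - h[i - 1]) * dx
        A[0, 0] = A[-1, -1] = 1.0            # Dirichlet ends
        return np.linalg.solve(A, rhs)

    # Hypothetical wedge-shaped gap under a doctor blade
    x = np.linspace(0.0, 1e-3, 201)          # 1 mm wetted length
    h = 20e-6 - 15e-6 * x / x[-1]            # gap tapering from 20 to 5 microns
    p = reynolds_1d(x, h, mu=0.05, U=1.0)    # pressure in Pa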
Assessing executive function using a computer game: computational modeling of cognitive processes.
Hagler, Stuart; Jimison, Holly Brugge; Pavel, Misha
2014-07-01
Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project, we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper trail making test (TMT)--known to measure executive function, as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and using the values of the parameters to estimate the TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home. PMID:25014944
Computational Grounded Cognition: a new alliance
Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.
2013-01-01
Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition. PMID:23346065
The influence of the target strength model on computed perforation
Reaugh, J.E.
1993-06-01
The authors used an axi-symmetric, two-dimensional Eulerian computer simulation program to simulate the penetration of a tungsten rod with length to diameter ratio L/D = 10 into a thick steel target, and the same rod into finite steel plates of thicknesses between 0.9 and 1.3 L. They compare the perforation limit with the semi-infinite penetration depth at the same velocity (the excess thickness) when the model for target strength is constant yield stress, and when the model incorporates work hardening and thermal softening. The authors also compare their computed results with available experimental results, which show an excess thickness of about 1 rod diameter.
Computer modelling of DNA structures involved in chromosome maintenance.
Eckdahl, T T; Anderson, J N
1987-01-01
Sequence-dependent DNA bending of synthetic and natural molecules was studied by computer analysis. Modelling of synthetic oligonucleotides and of 107 kb of natural sequences gave results which closely resembled published electrophoretic data, demonstrating the powerful predictive capacity of the procedure. The analysis was extended to the study of DNA structures involved in chromosome maintenance. Centromeric DNAs from yeast were found to have sequences in their functional elements which cause them to be unusually straight. Autonomous replicating sequences were found to have two structural domains, one consisting of unusually straight sequences surrounding the consensus and the other of bending elements in flanking DNA. In addition to a structural homology, centromeric and autonomous replicating sequences share common sequence elements. These observations show that computer modelling of natural sequences is a viable approach to the study of the biological implications of alternative DNA structures. PMID:3671091
A flexible self-learning model based on granular computing
NASA Astrophysics Data System (ADS)
Wei, Ting; Wu, Yu; Li, Yinguo
2007-04-01
Granular Computing (GrC) is an emerging theory which simulates the process by which the human brain understands and solves problems. Rough set theory is a tool for dealing with the uncertainty and vagueness aspects of a knowledge model. The SLMGrC algorithm introduces GrC to classical rough set algorithms and makes the length of the rules relatively short, but it cannot process mass data sets. In order to solve this problem, based on the analysis of the hierarchical granular model of an information table, the method of Granular Distribution List (GDL) is introduced to generate granules, and the granular computing algorithm (SLMGrC) is improved. A Sample Covered Factor (SCF) is also introduced to control the generation of rules when the algorithm generates conflicting rules. The improved algorithm can process mass data sets directly without influencing the validity of SLMGrC. Experiments demonstrated the validity and flexibility of our method.
Advanced computer modeling techniques expand belt conveyor technology
Alspaugh, M.
1998-07-01
Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.
Photopolarimetry of scattering surfaces and their interpretation by computer model
NASA Technical Reports Server (NTRS)
Wolff, M.
1979-01-01
Wolff's computer model of a rough planetary surface was simplified and revised. Close adherence to the actual geometry of a pitted surface and the inclusion of a function for diffuse light resulted in a quantitative model comparable to observations by planetary satellites and asteroids. A function is also derived to describe diffuse light emitted from a particulate surface. The function is in terms of the indices of refraction of the surface material, particle size, and viewing angles. Computer-generated plots describe the observable and theoretical light components for the Moon, Mercury, Mars and a spectrum of asteroids. Other plots describe the effects of changing surface material properties. Mathematical results are generated to relate the parameters of the negative polarization branch to the properties of surface pitting. An explanation is offered for the polarization of the rings of Saturn, and the average diameter of ring objects is found to be 30 to 40 centimeters.
Computational modelling of cohesive cracks in material structures
NASA Astrophysics Data System (ADS)
Vala, J.; Jarošová, P.
2016-06-01
Analysis of crack formation, considered as the creation of new surfaces in a material sample due to its microstructure, leads to nontrivial physical, mathematical and computational difficulties even in the rather simple case of quasistatic cohesive zone modelling inside the linear elastic theory. However, quantitative results from such evaluations are required in practice for the development and design of advanced materials, structures and technologies. Although most available software tools apply ad hoc computational predictions, this paper presents the proper formulation of such a model problem, including its verification, and sketches the multi-scale construction of finite-dimensional approximation of solutions, utilizing the finite element or similar techniques, together with references to original simulation results from engineering practice.
Final Report: Center for Programming Models for Scalable Parallel Computing
Mellor-Crummey, John
2011-09-13
As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.
Semianalytical computation of path lines for finite-difference models
Pollock, D.W.
1988-01-01
A semianalytical particle tracking method was developed for use with velocities generated from block-centered finite-difference ground-water flow models. Based on the assumption that each directional velocity component varies linearly within a grid cell in its own coordinate directions, the method allows an analytical expression to be obtained describing the flow path within an individual grid cell. Given the initial position of a particle anywhere in a cell, the coordinates of any other point along its path line within the cell, and the time of travel between them, can be computed directly. For steady-state systems, the exit point for a particle entering a cell at any arbitrary location can be computed in a single step. By following the particle as it moves from cell to cell, this method can be used to trace the path of a particle through any multidimensional flow field generated from a block-centered finite-difference flow model.
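The core of the method can be rendered in a short Python sketch (an illustration of the published idea, not the original code): because velocity is linear in position along each axis, the trajectory is exponential in time and the exit time through a cell face follows in closed form.

    import math

    def exit_time_1d(xp, x1, x2, v1, v2):
        """Travel time from xp to the face the particle exits through,
        under the method's core assumption of a linear velocity profile:
        v(x) = v1 + A*(x - x1), with A = (v2 - v1)/(x2 - x1)."""
        A = (v2 - v1) / (x2 - x1)
        vp = v1 + A * (xp - x1)
        if vp == 0.0:
            return math.inf                  # stagnation: no motion on this axis
        if abs(A) < 1e-30:                   # effectively uniform velocity
            face = x2 if vp > 0 else x1
            return (face - xp) / vp
        v_face = v2 if vp > 0 else v1
        if v_face * vp <= 0.0:               # stagnation point blocks the face
            return math.inf
        return math.log(v_face / vp) / A     # velocity grows as vp*exp(A*t)

    def position_1d(t, xp, x1, x2, v1, v2):
        """Analytical position along one axis after time t."""
        A = (v2 - v1) / (x2 - x1)
        vp = v1 + A * (xp - x1)
        if abs(A) < 1e-30:
            return xp + vp * t
        return x1 + (vp * math.exp(A * t) - v1) / A

    def step_through_cell(xp, yp, x1, x2, y1, y2, vx1, vx2, vy1, vy2):
        """One 2D cell-to-cell step: the particle exits through whichever
        face it reaches first (assumes it can leave the cell)."""
        t = min(exit_time_1d(xp, x1, x2, vx1, vx2),
                exit_time_1d(yp, y1, y2, vy1, vy2))
        return t, (position_1d(t, xp, x1, x2, vx1, vx2),
                   position_1d(t, yp, y1, y2, vy1, vy2))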
Computational modeling of dynamic behaviors of human teeth.
Liao, Zhipeng; Chen, Junning; Zhang, Zhongpu; Li, Wei; Swain, Michael; Li, Qing
2015-12-16
Despite the importance of dynamic behaviors of dental and periodontal structures to clinics, the biomechanical roles of anatomic sophistication and material properties in quantification of vibratory characteristics remain under-studied. This paper aimed to generate an anatomically accurate and structurally detailed 3D finite element (FE) maxilla model and explore the dynamic behaviors of human teeth through characterizing the natural frequencies (NFs) and mode shapes. The FE models with different levels of structural integrities and material properties were established to quantify the effects of modeling techniques on the computation of vibratory characteristics. The results showed that the integrity of computational model considerably influences the characterization of vibratory behaviors, as evidenced by declined NFs and perceptibly altered mode shapes resulting from the models with higher degrees of completeness and accuracy. A primary NF of 889 Hz and the corresponding mode shape featuring linguo-buccal vibration of maxillary right 2nd molar were obtained based on the complete maxilla model. It was found that the periodontal ligament (PDL), a connective soft tissue, plays an important role in quantifying NFs. It was also revealed that damping and heterogeneity of materials contribute to the quantification of vibratory characteristics. The study provided important biomechanical insights and clinical references for future studies on dynamic behaviors of dental and periodontal structures. PMID:26584964
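For context, the quantities being computed here, natural frequencies and mode shapes of an undamped FE model, come from the generalized eigenproblem K·φ = ω²·M·φ. A minimal sketch follows, with a toy 3-degree-of-freedom chain standing in for the assembled maxilla model; all values are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    def natural_frequencies(K, M, n_modes=5):
        """Natural frequencies (Hz) and mode shapes from the generalized
        eigenproblem K*phi = omega^2 * M*phi, with K the global stiffness
        matrix and M the global mass matrix of the FE model."""
        w2, phi = eigh(K, M)                 # eigenvalues are omega^2
        w2 = np.clip(w2, 0.0, None)          # guard tiny negative round-off
        freqs = np.sqrt(w2) / (2.0 * np.pi)
        return freqs[:n_modes], phi[:, :n_modes]

    # Toy 3-DOF mass-spring chain standing in for an assembled FE model
    m, k = 1e-4, 1e6
    M = np.diag([m, m, m])
    K = k * np.array([[ 2, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]], dtype=float)
    f, modes = natural_frequencies(K, M, n_modes=3)

Material damping, which the paper finds to matter, would shift these undamped frequencies and turn the problem into a quadratic eigenproblem.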
Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.
Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S
2015-07-21
The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays. PMID:26066997
A computational language approach to modeling prose recall in schizophrenia.
Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita
2014-06-01
Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall. PMID:24709122
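The sequential component of such automated scoring can be illustrated with a toy n-gram overlap measure; this is a generic stand-in, not the authors' model, and the semantic component (e.g., a Latent Semantic Analysis similarity) is omitted.

    def ngrams(tokens, n):
        """All contiguous n-grams of a token sequence, as a set."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def ngram_overlap_score(recall, source, n=2):
        """Fraction of source n-grams reproduced in a recall: a crude
        proxy for the sequential fidelity of prose recall."""
        r, s = recall.lower().split(), source.lower().split()
        src = ngrams(s, n)
        if not src:
            return 0.0
        return len(ngrams(r, n) & src) / len(src)

    source = "the old farmer owned a stubborn gray mule"
    recall = "the farmer had a stubborn gray mule"
    score = ngram_overlap_score(recall, source, n=2)   # 3 of 7 bigrams, ~0.43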
Using computational modeling to predict arrhythmogenesis and antiarrhythmic therapy
Moreno, Jonathan D.; Clancy, Colleen E.
2010-01-01
The use of computational modeling to predict arrhythmia and arrhythmogenesis is a relatively new field, but has nonetheless dramatically enhanced our understanding of the physiological and pathophysiological mechanisms that lead to arrhythmia. This review summarizes recent advances in the field of computational modeling approaches with a brief review of the evolution of cellular action potential models, and the incorporation of genetic mutations to understand fundamental arrhythmia mechanisms, including how simulations have revealed situation specific mechanisms leading to multiple phenotypes for the same genotype. The review then focuses on modeling drug blockade to understand the less-than-intuitive ways in which some drugs either ameliorate or paradoxically exacerbate arrhythmia. Quantification of specific arrhythmia indices is discussed at each spatial scale, from channel to tissue. The utility of hERG modeling to assess altered repolarization in response to drug blockade is also briefly discussed. Finally, insights gained from Ca2+ dynamical modeling and EC coupling, neurohumoral regulation of cardiac dynamics, and cell signaling pathways are also reviewed. PMID:20652086
Hydrogen program combustion research: Three dimensional computational modeling
Johnson, N.L.; Amsden, A.A.; Butler, T.D.
1995-05-01
We have significantly increased our computational modeling capability by the addition of a vertical valve model in KIVA-3, a code used internationally for engine design. In this report the implementation and application of the valve model is described. The model is shown to reproduce the experimentally verified intake flow problem examined by Hessel. Furthermore, the sensitivity and performance of the model is examined for the geometry and conditions of the hydrogen-fueled Onan engine in development at Sandia National Laboratory. Overall the valve model is shown to have accuracy comparable to the general flow simulation capability in KIVA-3, which has been well validated by past comparisons to experiments. In the exploratory simulations of the Onan engine, the standard use of the single kinetic reaction for hydrogen oxidation was found to be inadequate for modeling the hydrogen combustion because of its inability to describe both the observed laminar flame speed and the absence of autoignition in the Onan engine. We propose a temporary solution that inhibits the autoignition without sacrificing the ability to model spark ignition. In the absence of experimental data on the Onan engine, a computational investigation was undertaken to evaluate the importance of modeling the intake flow on the combustion and NO{sub x} emissions. A simulation that began with the compression of a quiescent hydrogen-air mixture was compared to a simulation of the full induction process with resolved opening and closing of the intake valve. Although minor differences were observed in the cylinder-averaged pressure, temperature, bulk-flow kinetic energy and turbulent kinetic energy, large differences were observed in the hydrogen combustion rate and NO{sub x} emissions. The flow state at combustion is highly heterogeneous and sensitive to the details of the bulk and turbulent flow, so an accurate simulation of the Onan engine must include the modeling of the air-fuel induction.
Computer-Aided Process Model For Carbon/Phenolic Materials
NASA Technical Reports Server (NTRS)
Letson, Mischell A.; Bunker, Robert C.
1996-01-01
Computer program implements thermochemical model of processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving fabrication of rocket-engine-nozzle parts, it can also be used to optimize fabrication of other structural components, and its material-property parameters can be changed to apply to other materials. Reduces costs by reducing amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.
Computationally efficient, rotational nonequilibrium CW chemical laser model
Sentman, L.H.; Rushmore, W.
1981-10-01
The essential fluid dynamic and kinetic phenomena required for a quantitative, computationally efficient, rotational nonequilibrium model of a CW HF chemical laser are identified. It is shown that, in addition to the pumping, collisional deactivation, and rotational relaxation reactions, F-atom wall recombination, the hot pumping reaction, and multiquantum deactivation reactions play a significant role in determining laser performance. Several problems with the HF kinetics package are identified. The effect of various parameters on run time is discussed.
Modeling and analysis of the spread of computer virus
NASA Astrophysics Data System (ADS)
Zhu, Qingyi; Yang, Xiaofan; Ren, Jianguo
2012-12-01
Based on a set of reasonable assumptions, we propose a novel dynamical model describing the spread of computer viruses. Through qualitative analysis, we give a threshold and prove that (1) the infection-free equilibrium is globally asymptotically stable if the threshold is less than one, implying that the virus would eventually die out, and (2) the infection equilibrium is globally asymptotically stable if the threshold is greater than one. Two numerical examples are presented to demonstrate the analytical results.
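Although the paper's specific model is not reproduced here, the flavor of such threshold results can be illustrated with a generic SIS-type sketch, where the threshold is R0 = β/γ and the endemic infection level is 1 − 1/R0 whenever R0 > 1; parameter values are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    def sis(t, y, beta, gamma):
        """Generic SIS-type spread: i is the infected fraction of hosts,
        beta the infection rate, gamma the cure (patching) rate."""
        i = y[0]
        return [beta * i * (1.0 - i) - gamma * i]

    beta, gamma = 0.4, 0.25
    R0 = beta / gamma                        # threshold: dies out iff R0 < 1
    sol = solve_ivp(sis, (0.0, 100.0), [0.01], args=(beta, gamma))
    i_star = max(0.0, 1.0 - 1.0 / R0)        # endemic level when R0 > 1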
97. View of International Business Machine (IBM) digital computer model ...
97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, international telephone and telegraph (ITT) Artic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
98. View of IBM digital computer model 7090 magnetic core ...
98. View of IBM digital computer model 7090 magnetic core installation. ITT Artic Services, Inc., Official photograph BMEWS Site II, Clear, AK, by unknown photographer, 17 September 1965. BMEWS, clear as negative no. A-6606. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK
An overview of recent applications of computational modelling in neonatology
Wrobel, Luiz C.; Ginalski, Maciej K.; Nowak, Andrzej J.; Ingham, Derek B.; Fic, Anna M.
2010-01-01
This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass-transfer mechanisms taking place in medical devices, such as incubators, radiant warmers and oxygen hoods. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and improving the design of medical devices. PMID:20439275
Computer tomography of flows external to test models
NASA Technical Reports Server (NTRS)
Prikryl, I.; Vest, C. M.
1982-01-01
Computer tomographic techniques for reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object such as a test model in a wind tunnel obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and analysis of real experimental data in the form of aerodynamic interferograms are discussed.
Modeling of supersonic combustor flows using parallel computing
NASA Technical Reports Server (NTRS)
Riggins, D.; Underwood, M.; Mcmillin, B.; Reeves, L.; Lu, E. J.-L.
1992-01-01
While current 3D CFD codes and modeling techniques have been shown capable of furnishing engineering data for complex scramjet flowfields, the usefulness of such efforts is primarily limited by solutions' CPU time requirements, and secondarily by memory requirements. Attention is presently given to the use of parallel computing capabilities for engineering CFD tools for the analysis of supersonic reacting flows, and to an illustrative incompressible CFD problem using up to 16 iPSC/2 processors with single-domain decomposition.
Computational modeling of epilepsy for an experimental neurologist
Holt, Abbey B.; Netoff, Theoden I.
2013-01-01
Computational modeling can be a powerful tool for an experimentalist, providing a rigorous mathematical model of the system you are studying. This can be valuable in testing your hypotheses and developing experimental protocols prior to experimenting. This paper reviews models of seizures and epilepsy at different scales, including cellular, network, cortical region, and brain scales by looking at how they have been used in conjunction with experimental data. At each scale, models with different levels of abstraction (that is, with different degrees of physiological detail retained) are presented. Varying levels of detail are necessary in different situations. Physiologically realistic models are valuable surrogates for experimental systems because, unlike in an experiment, every parameter can be changed and every variable can be observed. Abstract models are useful in determining essential parameters of a system, allowing the experimentalist to extract principles that explain the relationship between mechanisms and the behavior of the system. Modeling is becoming easier with the emergence of platforms dedicated to neuronal modeling and databases of models that can be downloaded. Modeling will never be a replacement for animal and clinical experiments, but it should be a starting point in designing experiments and understanding their results. PMID:22617489
A new computational growth model for sea urchin skeletons.
Zachos, Louis G
2009-08-01
A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era. PMID:19376133
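The Bertalanffy component of the model is simple to state: plate size approaches an asymptote at a fixed rate. A hedged sketch with illustrative parameters follows; the model's Voronoi plate topology and membrane deformation are not reproduced here.

    import numpy as np

    def bertalanffy(t, L_inf, k, L0=0.0):
        """von Bertalanffy growth curve: size approaches the asymptote
        L_inf at rate k, starting from L0. Applied per plate, older
        plates sit closer to their asymptotic size."""
        return L_inf - (L_inf - L0) * np.exp(-k * t)

    # Hypothetical plate cohort: plates added at regular intervals, so
    # plate i has age (n - 1 - i) * dt at observation time.
    ages = np.arange(10)[::-1] * 0.5               # plate ages in years
    plate_sizes = bertalanffy(ages, L_inf=8.0, k=0.6)   # sizes in mm
    somatic_size = plate_sizes.sum()               # crude proxy for test size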
Parallel computer processing and modeling: applications for the ICU
NASA Astrophysics Data System (ADS)
Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.
2003-07-01
Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extraneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software has been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.
Verification of computational models of cardiac electro-physiology.
Pathmanathan, Pras; Gray, Richard A
2014-05-01
For computational models of cardiac activity to be used in safety-critical clinical decision-making, thorough and rigorous testing of the accuracy of predictions is required. The field of 'verification, validation and uncertainty quantification' has been developed to evaluate the credibility of computational predictions. The first stage, verification, is the evaluation of how well computational software correctly solves the underlying mathematical equations. The aim of this paper is to introduce novel methods for verifying multi-cellular electro-physiological solvers, a crucial first stage for solvers to be used with confidence in clinical applications. We define 1D-3D model problems with exact solutions for each of the monodomain, bidomain, and bidomain-with-perfusing-bath formulations of cardiac electro-physiology, which allow for the first time the testing of cardiac solvers against exact errors on fully coupled problems in all dimensions. These problems are carefully constructed so that they can be easily run using a general solver and can be used to greatly increase confidence that an implementation is correct, which we illustrate by testing one major solver, 'Chaste', on the problems. We then perform case studies on calculation verification (also known as solution verification) for two specific applications. We conclude by making several recommendations regarding verification in cardiac modelling. PMID:24259465
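A standard ingredient of such code verification, usable with exact solutions like those the paper provides, is the observed order of accuracy computed from errors on two grids. A minimal sketch, with illustrative error values rather than numbers from the paper:

    import math

    def observed_order(e_coarse, e_fine, r=2.0):
        """Observed order of accuracy from errors against an exact
        solution on two grids whose spacing differs by refinement
        ratio r: p = log(e_coarse / e_fine) / log(r)."""
        return math.log(e_coarse / e_fine) / math.log(r)

    # e.g. solver errors on grids with spacing h and h/2:
    p = observed_order(4.1e-3, 1.05e-3)   # ~1.96, consistent with 2nd order

Agreement between the observed order and the scheme's theoretical order is the usual evidence that the equations are being solved correctly.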
Computational Modeling of Ultrafast Pulse Propagation in Nonlinear Optical Materials
NASA Technical Reports Server (NTRS)
Goorjian, Peter M.; Agrawal, Govind P.; Kwak, Dochan (Technical Monitor)
1996-01-01
There is an emerging technology of photonic (or optoelectronic) integrated circuits (PICs or OEICs). In PICs, optical and electronic components are grown together on the same chip. To build such devices and subsystems, one needs to model the entire chip. Accurate computer modeling of electromagnetic wave propagation in semiconductors is necessary for the successful development of PICs. More specifically, these computer codes would enable the modeling of such devices, including their subsystems, such as semiconductor lasers and semiconductor amplifiers in which there is femtosecond pulse propagation. Here, the computer simulations are made by solving the full vector, nonlinear, Maxwell's equations, coupled with the semiconductor Bloch equations, without any approximations. The carrier is retained in the description of the optical pulse (i.e., the envelope approximation is not made in Maxwell's equations), and the rotating wave approximation is not made in the Bloch equations. These coupled equations are solved to simulate the propagation of femtosecond optical pulses in semiconductor materials. The simulations describe the dynamics of the optical pulses, as well as the interband and intraband dynamics.
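As an illustration of solving the full Maxwell equations without the envelope approximation, here is a minimal 1D FDTD sketch with an instantaneous Kerr nonlinearity. It is not the authors' solver, it omits the semiconductor Bloch equations entirely, and the material and source values are illustrative.

    import numpy as np

    c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
    nx, dx = 2000, 10e-9                  # 20-micron 1D domain
    dt = 0.5 * dx / c0                    # Courant-stable time step
    eps_r, chi3 = 12.0, 1e-18             # illustrative semiconductor values
    amp = 1e6                             # source field amplitude, V/m

    ez = np.zeros(nx)                     # electric field
    dz = np.zeros(nx)                     # displacement field
    hy = np.zeros(nx - 1)                 # magnetic field on the dual grid

    for n in range(4000):
        hy += dt / (mu0 * dx) * (ez[1:] - ez[:-1])       # Faraday's law
        dz[1:-1] += dt / dx * (hy[1:] - hy[:-1])         # Ampere's law
        dz[200] += eps0 * eps_r * amp * np.exp(-0.5 * ((n - 120) / 30) ** 2)
        # Instantaneous Kerr medium, D = eps0*(eps_r + chi3*E^2)*E,
        # inverted for E by a few fixed-point iterations; the untouched
        # endpoints act as reflecting walls.
        for _ in range(3):
            ez = dz / (eps0 * (eps_r + chi3 * ez ** 2))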
ERIC Educational Resources Information Center
Nee, John G.; Kare, Audhut P.
1987-01-01
Explores several concepts in computer-aided design/computer-aided manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)
Benchmarking ocean circulation models on massively parallel computers
Poling, D.A.
1997-08-01
General circulation models are becoming the premier theoretical tools for studying the complex structure of the global climate. GEONET was envisioned as exercising the resources developed for the nuclear weapons program to address environmental problems. The similarity of circulation models to weapons codes made them an attractive field in which to develop expertise. The author hoped to become an active player in mainline climate research through computer simulation. This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The intention of this research was to establish the Laboratory in mainstream climate research in conjunction with the GEONET project.
Computational Materials: Modeling and Simulation of Nanostructured Materials and Systems
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Hinkley, Jeffrey A.
2003-01-01
The paper provides details on the structure and implementation of the Computational Materials program at the NASA Langley Research Center. Examples are given that illustrate the suggested approaches to predicting the behavior and influencing the design of nanostructured materials such as high-performance polymers, composites, and nanotube-reinforced polymers. Primary simulation and measurement methods applicable to multi-scale modeling are outlined. Key challenges including verification and validation of models are highlighted and discussed within the context of NASA's broad mission objectives.
HMcode: Halo-model matter power spectrum computation
NASA Astrophysics Data System (ADS)
Mead, Alexander
2015-08-01
HMcode computes the halo-model matter power spectrum. It is written in Fortran90 and has been designed to quickly (~0.5s for 200 k-values across 16 redshifts on a single core) produce matter spectra for a wide range of cosmological models. In testing it was shown to match spectra produced by the 'Coyote Emulator' to an accuracy of 5 per cent for k less than 10h Mpc^-1. However, it can also produce spectra well outside of the parameter space of the emulator.
Fundamentals of Modeling, Data Assimilation, and High-performance Computing
NASA Technical Reports Server (NTRS)
Rood, Richard B.
2005-01-01
This lecture will introduce the concepts of modeling, data assimilation and high-performance computing as it relates to the study of atmospheric composition. The lecture will work from basic definitions and will strive to provide a framework for thinking about development and application of models and data assimilation systems. It will not provide technical or algorithmic information, leaving that to textbooks, technical reports, and ultimately scientific journals. References to a number of textbooks and papers will be provided as a gateway to the literature.
Computer modelling of turbulent recirculating flows in engineering applications
NASA Astrophysics Data System (ADS)
Khalil, E. E.; Assaf, H. M. W.
A numerical computation procedure for solving the partial differential equations governing turbulent flows is presented, with an emphasis on swirling flows. The conservation equations for mass and momentum are defined, noting the inclusion of turbulence characteristics in Reynolds stress terms. A two-dimensional turbulence model is used, based on an eddy viscosity concept, with the Reynolds stress described in terms of the mean velocity gradient and the eddy viscosity. The model is used for the flow in a rotary air garbage classifier and the flow in a vortex tube. The flexibility of the technique is demonstrated through variations of the initial flow parameters.
Computational modeling of the obstructive lung diseases asthma and COPD.
Burrowes, Kelly Suzanne; Doel, Tom; Brightling, Chris
2014-11-28
Asthma and chronic obstructive pulmonary disease (COPD) are characterized by airway obstruction and airflow limitation and pose a huge burden to society. These obstructive lung diseases impact the lung physiology across multiple biological scales. Environmental stimuli are introduced via inhalation at the organ scale, and consequently impact upon the tissue, cellular and sub-cellular scale by triggering signaling pathways. These changes are propagated upwards to the organ level again and vice versa. In order to understand the pathophysiology behind these diseases we need to integrate and understand changes occurring across these scales and this is the driving force for multiscale computational modeling. There is an urgent need for improved diagnosis and assessment of obstructive lung diseases. Standard clinical measures are based on global function tests which ignore the highly heterogeneous regional changes that are characteristic of obstructive lung disease pathophysiology. Advances in scanning technology such as hyperpolarized gas MRI has led to new regional measurements of ventilation, perfusion and gas diffusion in the lungs, while new image processing techniques allow these measures to be combined with information from structural imaging such as Computed Tomography (CT). However, it is not yet known how to derive clinical measures for obstructive diseases from this wealth of new data. Computational modeling offers a powerful approach for investigating this relationship between imaging measurements and disease severity, and understanding the effects of different disease subtypes, which is key to developing improved diagnostic methods. Gaining an understanding of a system as complex as the respiratory system is difficult if not impossible via experimental methods alone. Computational models offer a complementary method to unravel the structure-function relationships occurring within a multiscale, multiphysics system such as this. Here we review the current state of computational modeling of these obstructive lung diseases.
Computational modeling of RNA 3D structures and interactions.
Dawson, Wayne K; Bujnicki, Janusz M
2016-04-01
RNA molecules have key functions in cellular processes beyond being carriers of protein-coding information. These functions are often dependent on the ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is difficult, which has prompted the development of computational methods for structure prediction from sequence. Recent progress in 3D structure modeling of RNA and emerging approaches for predicting RNA interactions with ions, ligands and proteins have been stimulated by successes in protein 3D structure modeling. PMID:26689764
Computational Models of Ventilator Induced Lung Injury and Surfactant Dysfunction
Bates, Jason H.T.; Smith, Bradford J.; Allen, Gilman B.
2014-01-01
Managing acute respiratory distress syndrome (ARDS) invariably involves the administration of mechanical ventilation, the challenge being to avoid the iatrogenic sequela known as ventilator-induced lung injury (VILI). Devising individualized ventilation strategies in ARDS requires that patient-specific lung physiology be taken into account, and this is greatly aided by the use of computational models of lung mechanical function that can be matched to physiological measurements made in a given patient. In this review, we discuss recent models that have the potential to serve as the basis for devising minimally injurious modes of mechanical ventilation in ARDS patients. PMID:26904138
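As background to the kind of model matching described here, the classic single-compartment equation of motion, Paw = R*flow + E*volume + P0, can be fit to ventilator waveforms by linear least squares. This is a minimal sketch with synthetic data, not any of the specific models in the review:

    import numpy as np

    def fit_single_compartment(paw, flow, volume):
        """Least-squares estimates of resistance R, elastance E, and
        offset pressure P0 from airway pressure, flow, and volume."""
        A = np.column_stack([flow, volume, np.ones_like(paw)])
        (R, E, P0), *_ = np.linalg.lstsq(A, paw, rcond=None)
        return R, E, P0

    # Synthetic breath with R = 10 cmH2O.s/L, E = 25 cmH2O/L, P0 = 5 cmH2O
    t = np.linspace(0.0, 3.0, 300)
    flow = 0.5 * np.sin(2 * np.pi * t / 3.0)          # L/s
    volume = np.cumsum(flow) * (t[1] - t[0])          # L, integrated flow
    paw = 10.0 * flow + 25.0 * volume + 5.0           # cmH2O
    print(fit_single_compartment(paw, flow, volume))  # recovers ~(10, 25, 5)

Patient-specific R and E recovered this way are the entry point for richer models that add recruitment dynamics and surfactant effects.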
COMPUTATIONAL AND EXPERIMENTAL MODELING OF SLURRY BUBBLE COLUMN REACTORS
Paul Lam; Dimitri Gidaspow
2000-09-01
The objective of this study was to develop a predictive, experimentally verified computational fluid dynamics (CFD) model for gas-liquid-solid flow. A three-dimensional transient computer code for the coupled Navier-Stokes equations for each phase was developed. The principal input into the model is the viscosity of the particulate phase, which was determined from a measurement of the random kinetic energy of the 800 micron glass beads and a Brookfield viscometer. The computed time-averaged particle velocities and concentrations agree with PIV measurements of velocities and with concentrations obtained using a combination of gamma-ray and X-ray densitometers in a slurry bubble column operated in the bubbly-coalesced fluidization regime with continuous flow of water. Both the experiment and the simulation show a down-flow of particles in the center of the column, up-flow near the walls, and nearly uniform particle concentration. Normal and shear Reynolds stresses were constructed from the computed instantaneous particle velocities. The PIV measurement and the simulation produced similar, nearly flat horizontal profiles of turbulent kinetic energy of particles. This phase of the work was presented at Chemical Reaction Engineering VIII: Computational Fluid Dynamics, August 6-11, 2000, in Quebec City, Canada. To understand turbulence in risers, measurements were made in the IIT riser with 530 micron glass beads using a PIV technique. The results, together with simulations, will be presented at the annual meeting of AIChE in November 2000.
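For reference, the two-fluid (Eulerian-Eulerian) momentum balance underlying codes of this type is conventionally written, for phase k (sketched from standard multiphase CFD practice, not transcribed from this code):

    \frac{\partial}{\partial t}(\varepsilon_k \rho_k \mathbf{v}_k) + \nabla \cdot (\varepsilon_k \rho_k \mathbf{v}_k \mathbf{v}_k) = -\varepsilon_k \nabla p + \nabla \cdot (\varepsilon_k \boldsymbol{\tau}_k) + \varepsilon_k \rho_k \mathbf{g} + \sum_{l \neq k} \beta_{kl} (\mathbf{v}_l - \mathbf{v}_k)

where \varepsilon_k is the phase volume fraction, \boldsymbol{\tau}_k the phase stress tensor (this is where the measured particulate viscosity enters for the solids phase), and \beta_{kl} the interphase drag coefficient.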
A computational model predicting disruption of blood vessel development.
Kleinstreuer, Nicole; Dix, David; Rountree, Michael; Baker, Nancy; Sipes, Nisha; Reif, David; Spencer, Richard; Knudsen, Thomas
2013-04-01
Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a morphogenetic
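A full cellular Potts model is beyond a short sketch, but the basic agent-based ingredient, cells executing a random walk biased up a chemoattractant gradient, fits in a few lines. This is a hypothetical toy, not the authors' CompuCell3D model:

    import random

    SIZE = 40  # lattice dimension (hypothetical)
    # VEGF-like chemoattractant increasing along the first axis
    vegf = [[i / SIZE for _ in range(SIZE)] for i in range(SIZE)]
    cells = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(50)]

    def step(cells):
        """Move each endothelial-cell agent to a neighboring lattice site
        chosen with probability proportional to local VEGF."""
        moved = []
        for i, j in cells:
            nbrs = [(i + di, j + dj)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < SIZE and 0 <= j + dj < SIZE]
            weights = [vegf[a][b] + 0.01 for a, b in nbrs]  # floor avoids zero weights
            moved.append(random.choices(nbrs, weights=weights)[0])
        return moved

    for _ in range(100):
        cells = step(cells)

Real models add cell-cell adhesion, proliferation, ECM proteolysis, and the chemokine fields named above, which is what produces cords and plexus structure rather than simple drift.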
Computational modeling of pedunculopontine nucleus deep brain stimulation
NASA Astrophysics Data System (ADS)
Zitella, Laura M.; Mohsenian, Kevin; Pahwa, Mrinal; Gloeckner, Cory; Johnson, Matthew D.
2013-08-01
Objective. Deep brain stimulation (DBS) near the pedunculopontine nucleus (PPN) has been posited to improve medication-intractable gait and balance problems in patients with Parkinson's disease. However, clinical studies evaluating this DBS target have not demonstrated consistent therapeutic effects, with several studies reporting the emergence of paresthesia and oculomotor side effects. The spatial and pathway-specific extent to which brainstem regions are modulated during PPN-DBS is not well understood. Approach. Here, we describe two computational models that estimate the direct effects of DBS in the PPN region for human and translational non-human primate (NHP) studies. The three-dimensional models were constructed from segmented histological images from each species, multi-compartment neuron models and inhomogeneous finite element models of the voltage distribution in the brainstem during DBS. Main Results. The computational models predicted that: (1) the majority of PPN neurons are activated with -3 V monopolar cathodic stimulation; (2) surgical targeting errors of as little as 1 mm in both species decrement activation selectivity; (3) specifically, monopolar stimulation in caudal, medial, or anterior PPN activates a significant proportion of the superior cerebellar peduncle (up to 60% in the human model and 90% in the NHP model at -3 V); (4) monopolar stimulation in rostral, lateral or anterior PPN activates a large percentage of medial lemniscus fibers (up to 33% in the human model and 40% in the NHP model at -3 V); and (5) the current clinical cylindrical electrode design is suboptimal for isolating the modulatory effects to PPN neurons. Significance. We show that a DBS lead design with radially-segmented electrodes may yield improved functional outcome for PPN-DBS.
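The field side of such models reduces, in the simplest homogeneous and isotropic case, to the point-source potential V(r) = I / (4*pi*sigma*r); the discrete second difference of that potential along a fiber (the activating function) flags likely sites of axonal activation. A minimal sketch with hypothetical geometry, standing in for the finite element and multi-compartment machinery the paper actually uses:

    import numpy as np

    I = -3e-3              # stimulus amplitude (A), cathodic, illustrative
    sigma = 0.2            # tissue conductivity (S/m), typical estimate
    node_spacing = 0.5e-3  # distance between fiber nodes (m), illustrative

    # Nodes of a straight fiber passing 1 mm lateral to the electrode
    z = np.arange(-20, 21) * node_spacing
    r = np.sqrt(z**2 + (1e-3)**2)
    V = I / (4.0 * np.pi * sigma * r)  # extracellular potential at each node

    activating = np.diff(V, 2)         # second spatial difference
    print("peak activating function at node", int(activating.argmax()))

Because the cathodic source makes V most negative at the closest node, the depolarizing drive peaks there, which is the geometric intuition behind the targeting-error sensitivity reported above.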
Dynamic modeling of Tampa Bay urban development using parallel computing
Xian, G.; Crane, M.; Steinwand, D.
2005-01-01
Urban land use and land cover have changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillshade), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model was recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires; the SLEUTH calibration process for the Tampa Bay urban growth simulation takes only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of declines of 58% and 80% in agricultural and forested lands, respectively.
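The calibration step parallelizes naturally because it is a sweep over growth-coefficient combinations, each scored independently against historical urban maps. An illustrative sketch of that structure (the actual SLEUTH code is written in C, and the fitness function below is a dummy stand-in):

    from itertools import product
    from multiprocessing import Pool

    def score(coeffs):
        """Hypothetical stand-in: run one growth simulation with the given
        (diffusion, breed, spread, slope, road) coefficients and return a
        goodness-of-fit against observed urban extents."""
        diffusion, breed, spread, slope, road = coeffs
        return -abs(spread - 50) - abs(road - 30)  # dummy fitness surface

    if __name__ == "__main__":
        grid = product(range(0, 101, 25), repeat=5)  # coarse 5-coefficient grid
        with Pool() as pool:
            results = pool.map(score, grid)
        print("best score:", max(results))

On a cluster, the Pool would be replaced by MPI ranks, but the independence of the evaluations is the same.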
Strategy generalization across orientation tasks: testing a computational cognitive model.
Gunzelmann, Glenn
2008-07-01
Humans use their spatial information processing abilities flexibly to facilitate problem solving and decision making in a variety of tasks. This article explores the question of whether a general strategy can be adapted for performing two different spatial orientation tasks by testing the predictions of a computational cognitive model. Human performance was measured on an orientation task requiring participants to identify the location of a target either on a map (find-on-map) or within an egocentric view of a space (find-in-scene). A general strategy instantiated in a computational cognitive model of the find-on-map task, based on the results from Gunzelmann and Anderson (2006), was adapted to perform both tasks and used to generate performance predictions for a new study. The qualitative fit of the model to the human data supports the view that participants were able to tailor a general strategy to the requirements of particular spatial tasks. The quantitative differences between the predictions of the model and the performance of human participants in the new experiment expose individual differences in sample populations. The model provides a means of accounting for those differences and a framework for understanding how human spatial abilities are applied to naturalistic spatial tasks that involve reasoning with maps. PMID:21635355