Science.gov

Sample records for de-fg03-01er54617 computer modeling

  1. Final Report DOE Grant No. DE-FG03-01ER54617 Computer Modeling of Microturbulence and Macrostability Properties of Magnetically Confined Plasmas

    SciTech Connect

    Jean-Noel Leboeuf

    2004-03-04

    OAK-B135 We have made significant progress during the past grant period in several key areas of the UCLA and national Fusion Theory Program. This body of work includes both fundamental and applied contributions to MHD and turbulence in DIII-D and Electric Tokamak plasmas, and also to Z-pinches, particularly with respect to the effect of flows on these phenomena. We have successfully carried out interpretive and predictive global gyrokinetic particle-in-cell calculations of DIII-D discharges. We have cemented our participation in the gyrokinetic PIC effort of the SciDAC Plasma Microturbulence Project through working membership in the Summit Gyrokinetic PIC Team. We have continued to teach advanced courses at UCLA pertaining to computational plasma physics and to foster interaction with students and junior researchers; in fact, two Ph.D. students graduated during the past grant period. The research carried out during that time has resulted in many publications in the premier plasma physics and fusion energy sciences journals and in several invited oral communications at major conferences such as Sherwood, the Transport Task Force (TTF), the annual meetings of the Division of Plasma Physics of the American Physical Society and of the European Physical Society, and the 2002 IAEA Fusion Energy Conference (FEC 2002). Many of these have been authored or co-authored with experimentalists at DIII-D.

  2. Computational Modeling of Tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)

    1995-01-01

    This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.

  3. Computer Models of Proteins

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze X-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.

  4. Computer Modeling and Simulation

    SciTech Connect

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed when the model fails. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimentation with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviating the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. Computer Model Documentation Guide.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    These guidelines for communicating effectively the details of computer model design and operation to persons with varying interests in a model recommend the development of four different types of manuals to meet the needs of managers, users, analysts and programmers. The guidelines for preparing a management summary manual suggest a broad spectrum…

  6. Understanding student computational thinking with computational modeling

    NASA Astrophysics Data System (ADS)

    Aiken, John M.; Caballero, Marcos D.; Douglas, Scott S.; Burk, John B.; Scanlon, Erin M.; Thoms, Brian D.; Schatz, Michael F.

    2013-01-01

    Recently, the National Research Council's framework for next-generation science standards highlighted "computational thinking" as one of its "fundamental practices". Ninth-grade students taking a physics course that employed Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, a written essay, and a series of think-aloud interviews, in which the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than observational) terms tended to have more success in the programming exercise.
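
    The record does not include the students' code, but a minimal sketch of the kind of iterative force-motion model such an assignment calls for (plain Python here rather than VPython, with purely illustrative initial conditions) might look like this:

```python
# Hypothetical sketch of a student-style computational model of a thrown
# baseball: repeatedly update velocity from the net force, then position
# from the new velocity (Euler-Cromer integration). Gravity only; all
# numbers are illustrative, not taken from the study.

dt = 0.01            # time step (s)
g = -9.8             # gravitational acceleration (m/s^2)
y, vy = 1.0, 12.0    # initial height (m) and vertical velocity (m/s)
t = 0.0

while y > 0:                  # step until the ball returns to the ground
    Fnet_per_mass = g         # net force per unit mass: gravity alone
    vy += Fnet_per_mass * dt  # velocity update from the net force
    y += vy * dt              # position update from the new velocity
    t += dt

print(f"time of flight ~ {t:.2f} s")
```

    The causal view of force and motion noted above maps directly onto the two update lines in the loop: the force changes the velocity, and the velocity changes the position.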

  7. Computer modeling of polymers

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1988-01-01

    A Polymer Molecular Analysis Display System (p-MADS) was developed for computer modeling of polymers. This method of modeling allows for the theoretical calculation of molecular properties such as equilibrium geometries, conformational energies, heats of formation, crystal packing arrangements, and other properties. Furthermore, p-MADS has the following capabilities: constructing molecules from internal coordinates (bond lengths, angles, and dihedral angles), Cartesian coordinates (such as X-ray structures), or stick drawings; manipulating molecules using graphics and making hard-copy representations of the molecules on a graphics printer; and performing geometry optimization calculations on molecules using the methods of molecular mechanics or molecular orbital theory.

  8. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).

  9. Computer modeling of photodegradation

    NASA Technical Reports Server (NTRS)

    Guillet, J.

    1986-01-01

    A computer program to simulate the photodegradation of materials exposed to terrestrial weathering environments is being developed. Input parameters include the solar spectrum, the daily levels and variations of temperature and relative humidity, and material properties such as those of EVA. The presentation first gives a brief description of the program, its operating principles, and how it works, and then focuses on recent work simulating aging in a normal terrestrial day-night cycle. This is significant because almost all accelerated aging schemes maintain constant illumination without a dark cycle, which may be a critical factor missing from such schemes. For outdoor aging, the computer model indicates that the nightly dark cycle has a dramatic influence on the chemistry of photothermal degradation, and hints that a dark cycle may be needed in an accelerated aging scheme.

  10. Computer cast blast modelling

    SciTech Connect

    Chung, S.; McGill, M.; Preece, D.S.

    1994-07-01

    Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. The more overburden removed by explosives, the less blasted material there is left to be transported with mechanical equipment, such as draglines and trucks. In order to optimize the percentage of rock that is cast, a higher powder factor than normal is required, plus an initiation technique designed to produce a much greater degree of horizontal muck movement. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements that interact with the flow of explosive gases and uses explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties and both loading geometry and explosives equation-of-state parameters. It gives the user a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, in-flight collisions of blocks are not considered, and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, DMC can perform a blast simulation in 0.5 hours on a SUN SPARCstation 10-41, while the new SABREX 3.5 produces results for a cast blast in ten seconds on a 486 PC. Predicted percentages of cast and face velocities from both computer codes compare well with measured results from a full-scale cast blast.

  11. Computer modeling of Epilepsy

    PubMed Central

    Lytton, William W.

    2009-01-01

    Epilepsy is a complex set of disorders that can involve many areas of cortex as well as underlying deep brain systems. The myriad manifestations of seizures, as varied as déjà vu and olfactory hallucination, can thereby give researchers insights into regional functions and relations. Epilepsy is also complex genetically and pathophysiologically, involving microscopic (ion channels, synaptic proteins), macroscopic (brain trauma and rewiring) and intermediate changes in a complex interplay of causality. It has long been recognized that computer modeling will be required to disentangle causality, to better understand seizure spread, and to understand and eventually predict treatment efficacy. Over the past few years, substantial progress has been made modeling epilepsy at levels ranging from the molecular to the socioeconomic. We review these efforts and connect them to the medical goals of understanding and treating the disorder. PMID:18594562

  12. Computational Modeling Program

    NASA Technical Reports Server (NTRS)

    Govindan, T. R.; Davis, Robert J.

    1998-01-01

    An Integrated Product Team (IPT) has been formed at NASA Ames Research Center with the objective of investigating devices and processes suitable for meeting NASA requirements on ultrahigh-performance computers, fast and low-power devices, and high-temperature wide-bandgap materials. These devices may ultimately be sub-100 nm in feature size. Processes and equipment must meet the stringent demands posed by the fabrication of such small devices. Until now, reactors for Chemical Vapor Deposition (CVD) and plasma processes have been designed by trial-and-error procedures. Further, once a reactor is in place, optimum processing parameters are found through expensive and time-consuming experimentation. If reliable models were available to describe the processes and the operation of the reactors, that chore would be reduced to a routine, cost-effective task. The goal is to develop such a design tool, validate it using available data from current-generation processes and reactors, and then use it to explore avenues for meeting NASA needs for ultrasmall device fabrication. Under the present grant, ARL/Penn State, along with other IPT members, has been developing models and computer code to meet IPT goals. Some of the accomplishments achieved during the first year of the grant are described in this report.

  13. Computational modelling of polymers

    NASA Technical Reports Server (NTRS)

    Celarier, Edward A.

    1991-01-01

    Polymeric materials and polymer/graphite composites show a very diverse range of material properties, many of which make them attractive candidates for a variety of high-performance engineering applications. Their properties are ultimately determined largely by their chemical structure and the conditions under which they are processed. It is the aim of computational chemistry to be able to simulate candidate polymers on a computer and determine what their likely material properties will be. A number of commercially available software packages purport to predict the material properties of samples, given the chemical structures of their constituent molecules. One such system, Cerius, has been in use at LaRC. It comprises a number of modules, each of which performs a different kind of calculation on a molecule in the program's workspace. Of particular interest is evaluating the suitability of this program to aid in the study of microcrystalline polymeric materials. One of the first model systems examined was benzophenone. The results of this investigation are discussed.

  14. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This document contains presentations given at the Workshop on Computational Turbulence Modeling, held 15-16 September 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.

  15. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...
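
    The record is truncated, but the founding technique it alludes to is classical. As a hedged illustration (not EPANET's actual solver), here is the Hardy Cross loop-balancing method that those early mainframe programs automated, for a single loop of two parallel pipes with illustrative resistances:

```python
# Hardy Cross method, single loop: two parallel pipes carry a total flow
# from node A to node B; head loss in each pipe is h = r * Q * |Q|.
# Iteratively correct the assumed flow split until the loop head losses
# balance. Resistances and total flow are illustrative values.

r1, r2 = 4.0, 1.0        # pipe resistance coefficients (assumed)
Q_total = 10.0           # total flow entering the loop
Q1 = Q_total / 2.0       # initial guess: even split
Q2 = Q_total - Q1

for _ in range(50):
    imbalance = r1 * Q1 * abs(Q1) - r2 * Q2 * abs(Q2)   # net loop head loss
    derivative = 2.0 * (r1 * abs(Q1) + r2 * abs(Q2))
    dQ = -imbalance / derivative                        # flow correction
    Q1 += dQ
    Q2 -= dQ
    if abs(dQ) < 1e-9:
        break

print(f"Q1 = {Q1:.3f}, Q2 = {Q2:.3f}")   # converges to r1*Q1^2 = r2*Q2^2
```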

  16. Computational models of discourse

    SciTech Connect

    Brady, M.; Berwick, R.C.

    1983-01-01

    This book presents papers on artificial intelligence and natural language. Topics considered include recognizing intentions from natural language utterances, cooperative responses from a portable natural language database query system, natural language generation as a computational problem, focusing in the comprehension of definite anaphora, and factors in forming discourse-dependent descriptions.

  17. Computational Models for Neuromuscular Function

    PubMed Central

    Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.

    2011-01-01

    Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779

  18. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  19. Computational models of syntactic acquisition.

    PubMed

    Yang, Charles

    2012-03-01

    The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics for overcoming the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154 For further resources related to this article, please visit the WIREs website.

  20. Sierra Toolkit computational mesh conceptual model.

    SciTech Connect

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  1. Component Breakout Computer Model

    DTIC Science & Technology

    1987-04-29

    [OCR-garbled excerpt; only reference fragments are recoverable:] "Weapon Systems: A Policy Analysis," The Rand Graduate Institute, November 1983; Boger, D., "Statistical Models for Estimating Overhead Costs," M.S. ... [the remainder is unreadable residue of a scanned BASIC program listing].

  2. Computational modeling of properties

    NASA Technical Reports Server (NTRS)

    Franz, Judy R.

    1994-01-01

    A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strongly scattered regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as a function of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors, in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.

  3. Efficient Computational Model of Hysteresis

    NASA Technical Reports Server (NTRS)

    Shields, Joel

    2005-01-01

    A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
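
    A minimal sketch of the structure described above, with invented deadband widths and gains (the real values are fit to displacement-versus-voltage data):

```python
# Parallel combination of backlash (play) elements: each element's state
# follows the input only after the input leaves a deadband around it, and
# the transducer output is the gain-weighted sum of all element states.
# The zeroth element has zero deadband width, giving the linear component.

def play_step(u, z_prev, width):
    """Advance one backlash element: clamp the previous state into the
    band [u - width, u + width] around the current input u."""
    return max(u - width, min(u + width, z_prev))

widths = [0.0, 0.5, 1.0, 1.5]   # deadband half-widths (illustrative)
gains  = [1.0, 0.4, 0.3, 0.2]   # output gains (illustrative)

def displacement(voltages):
    states = [0.0] * len(widths)        # per-element memory
    out = []
    for u in voltages:
        states = [play_step(u, z, w) for z, w in zip(states, widths)]
        out.append(sum(g * z for g, z in zip(gains, states)))
    return out

# A triangle-wave voltage sweep traces a piecewise-linear hysteresis loop:
# the output at the end of the sweep differs from the output at the start
# even though the voltage has returned to zero.
up = [i / 10.0 for i in range(0, 31)]
down = [i / 10.0 for i in range(30, -1, -1)]
loop = displacement(up + down)
print(loop[30], loop[-1])   # peak output vs. residual output at zero volts
```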

  4. Ch. 33 Modeling: Computational Thermodynamics

    SciTech Connect

    Besmann, Theodore M

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  5. Computational Modeling of Multiphase Reactors.

    PubMed

    Joshi, J B; Nandakumar, K

    2015-01-01

    Multiphase reactors are very common in the chemical industry, and numerous review articles exist that are focused on types of reactors, such as bubble columns, trickle beds, fluid catalytic beds, etc. Currently, there is a high degree of empiricism in the design process of such reactors owing to the complexity of coupled flow and reaction mechanisms. Hence, we focus on synthesizing recent advances in computational and experimental techniques that will enable future designs of such reactors in a more rational manner, by exploring a large design space with high-fidelity models (computational fluid dynamics and computational chemistry models) that are validated with high-fidelity measurements (tomography and other detailed spatial measurements) to provide a high degree of rigor. Understanding the spatial distributions of dispersed phases and their interactions during scale-up is a key challenge that was traditionally addressed through pilot-scale experiments but can now be addressed through advanced modeling.

  6. Computational models of adult neurogenesis

    NASA Astrophysics Data System (ADS)

    Cecchi, Guillermo A.; Magnasco, Marcelo O.

    2005-10-01

    Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models where new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas such as the olfactory bulb and the dentate gyrus.

  7. Computational Modeling Method for Superalloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

    1997-01-01

    Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations of the methods or the computing resources associated with such calculations, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys, which, together with experimental results from previous and current research that validate its use for large-scale simulations, provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing experimental work at great cost and time savings.

  8. Computational modelling approaches to vaccinology.

    PubMed

    Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo

    2015-02-01

    Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, that is, from the molecular to the organismal and even the population level.

  9. Neurometric Modeling: Computational Modeling of Individual Brains

    DTIC Science & Technology

    2011-05-16

    Subject terms: neural networks, computational neuroscience, fMRI. [Report-form excerpt; fragments:] ...obtained using functional MRI. Algorithmic processing of these measurements can exploit a variety of statistical machine learning methods to synthesize a new kind of neuro-cognitive model, which we call neurometric models. These executable models could be...

  10. Computational Modeling for Bedside Application

    PubMed Central

    Kerckhoffs, Roy C.P.; Narayan, Sanjiv M.; Omens, Jeffrey H.; Mulligan, Lawrence J.; McCulloch, Andrew D.

    2008-01-01

    With growing computer power, novel diagnostic and therapeutic medical technologies, coupled with an increasing knowledge of pathophysiology from gene to organ systems, it is increasingly feasible to apply multi-scale patient-specific modeling based on proven disease mechanisms to guide and predict the response to therapy in many aspects of medicine. This is an exciting and relatively new approach, for which efficient methods and computational tools are of the utmost importance. Already, investigators have designed patient-specific models in almost all areas of human physiology. Not only will these models be useful on a large scale in the clinic to predict and optimize the outcome from surgery and non-interventional therapy, but they will also provide pathophysiologic insights from cell to tissue to organ system, and therefore help to understand why specific interventions succeed or fail. PMID:18598988

  11. Computational Model for Cell Morphodynamics

    NASA Astrophysics Data System (ADS)

    Shao, Danying; Rappel, Wouter-Jan; Levine, Herbert

    2010-09-01

    We develop a computational model, based on the phase-field method, for cell morphodynamics and apply it to fish keratocytes. Our model incorporates the membrane bending force and the surface tension and enforces a constant area. Furthermore, it implements a cross-linked actin filament field and an actin bundle field that are responsible for the protrusion and retraction forces, respectively. We show that our model predicts steady state cell shapes with a wide range of aspect ratios, depending on system parameters. Furthermore, we find that the dependence of the cell speed on this aspect ratio matches experimentally observed data.

  12. Visualizing ultrasound through computational modeling

    NASA Technical Reports Server (NTRS)

    Guo, Theresa W.

    2004-01-01

    The Doppler Ultrasound Hematocrit Project (DHP) hopes to find non-invasive methods of determining a person's blood characteristics. Because of the limits of microgravity and the space travel environment, it is important to find non-invasive methods of evaluating the health of persons in space. Presently, there is no well-developed method of determining blood composition non-invasively. This project hopes to use ultrasound and Doppler signals to evaluate hematocrit, the percentage by volume of red blood cells within whole blood. These non-invasive techniques may also be developed for use on Earth for trauma patients where invasive measures might be detrimental. Computational modeling is a useful tool for collecting preliminary information and predictions for the laboratory research. We hope to find and develop a computer program that will be able to simulate the ultrasound signals the project will work with. Simulated models of test conditions will more easily show what might be expected from laboratory results, thus helping the research group make informed decisions before and during experimentation. There are several existing Matlab-based computer programs available, designed to interpret and simulate ultrasound signals. These programs will be evaluated to find which is best suited to the project's needs. The criteria of evaluation are: 1) the program must be able to specify transducer properties and specify transmitting and receiving signals; 2) the program must be able to simulate ultrasound signals through different attenuating mediums; 3) the program must be able to process moving targets in order to simulate the Doppler effects that are associated with blood flow; 4) the program should be user friendly and adaptable to various models. After a computer program is chosen, two simulation models will be constructed. These models will simulate and interpret an RF data signal and a Doppler signal.

  13. Parallel computing in enterprise modeling.

    SciTech Connect

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  14. Cosmic logic: a computational model

    SciTech Connect

    Vanchurin, Vitaly

    2016-02-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as a description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  15. Minimal Models of Multidimensional Computations

    PubMed Central

    Fitzgerald, Jeffrey D.; Sincich, Lawrence C.; Sharpee, Tatyana O.

    2011-01-01

    The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs. PMID:21455284
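
    To make the functional form concrete, here is a hedged sketch (illustrative parameters, two stimulus dimensions; not fitted values from the paper) of the second-order maximum noise entropy response function the paper identifies:

```python
# Second-order maximum noise entropy model for a binary output (spike or
# no spike): P(spike | s) is a logistic function of a constant plus a
# linear plus a quadratic form in the two relevant stimulus dimensions.

import math

def p_spike(s, a, h, J):
    lin = sum(h[i] * s[i] for i in range(2))            # first-order term
    quad = sum(J[i][j] * s[i] * s[j]
               for i in range(2) for j in range(2))     # second-order term
    return 1.0 / (1.0 + math.exp(-(a + lin + quad)))

a = -1.0                        # constant: sets the mean firing rate
h = [1.2, -0.4]                 # linear weights (first-moment constraints)
J = [[0.5, 0.1], [0.1, -0.3]]   # quadratic weights (second-moment constraints)

print(p_spike([0.8, -0.2], a, h, J))
```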

  16. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Shabbir, A. (Compiler); Shih, T.-H. (Compiler); Povinelli, L. A. (Compiler)

    1994-01-01

    The purpose of this meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Various turbulence models have been developed and applied to different turbulent flows over the past several decades and it is becoming more and more urgent to assess their performance in various complex situations. In order to help users in selecting and implementing appropriate models in their engineering calculations, it is important to identify the capabilities as well as the deficiencies of these models. This also benefits turbulence modelers by permitting them to further improve upon the existing models. This workshop was designed for exchanging ideas and enhancing collaboration between different groups in the Lewis community who are using turbulence models in propulsion related CFD. In this respect this workshop will help the Lewis goal of excelling in propulsion related research. This meeting had seven sessions for presentations and one panel discussion over a period of two days. Each presentation session was assigned to one or two branches (or groups) to present their turbulence related research work. Each group was asked to address at least the following points: current status of turbulence model applications and developments in the research; progress and existing problems; and requests about turbulence modeling. The panel discussion session was designed for organizing committee members to answer management and technical questions from the audience and to make concluding remarks.

  17. MODEL IDENTIFICATION AND COMPUTER ALGEBRA.

    PubMed

    Bollen, Kenneth A; Bauldry, Shawn

    2010-10-07

    Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods.
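
    As a hedged illustration of the symbolic approach (using the open-source SymPy rather than whatever CAS the paper employs), one can check local identification of a toy one-factor, two-indicator model by testing the rank of the Jacobian of the implied moments:

```python
# Toy measurement model: one latent factor f with Var(f) = phi, measured
# by x1 = f + e1 and x2 = lambda2*f + e2 with error variances theta1,
# theta2 (lambda1 is fixed to 1 to scale the factor). Derive the implied
# covariance moments symbolically, then test the classic local
# identification condition: full column rank of the moment Jacobian.

import sympy as sp

lam2, phi, th1, th2 = sp.symbols('lambda2 phi theta1 theta2', positive=True)

sigma11 = phi + th1             # Var(x1)
sigma22 = lam2**2 * phi + th2   # Var(x2)
sigma12 = lam2 * phi            # Cov(x1, x2)

moments = sp.Matrix([sigma11, sigma22, sigma12])
params = sp.Matrix([lam2, phi, th1, th2])
J = moments.jacobian(params)

# Three moment equations but four free parameters: the rank is at most 3,
# so this model is locally under-identified without further constraints.
print("Jacobian rank:", J.rank(), "free parameters:", len(params))
```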

  18. Computing Models for FPGA-Based Accelerators

    PubMed Central

    Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt

    2011-01-01

    Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152

  19. Computational modeling of epithelial tissues.

    PubMed

    Smallwood, Rod

    2009-01-01

    There is an extensive literature on the computational modeling of epithelial tissues at all levels, from subcellular to whole tissue. This review concentrates on behavior at the individual-cell to whole-tissue level, particularly on organizational aspects, and provides an indication of where information from other areas, such as the modeling of angiogenesis, is relevant. The skin and the lining of all of the body cavities (lung, gut, cervix, bladder, etc.) are epithelial tissues, which in a topological sense are the boundary between inside and outside the body. They are thin sheets of cells (usually of the order of 0.5 mm thick) without extracellular matrix, have a relatively simple structure, and contain few types of cells. They have important barrier, secretory and transport functions, which are essential for the maintenance of life, so homeostasis and wound healing are important aspects of the behavior of epithelial tissues. Carcinomas originate in epithelial tissues. There are essentially two approaches to modeling tissues: to start at the level of the tissue (i.e., a length scale of the order of 1 mm) and develop generalized equations for behavior (a continuum approach), or to start at the level of the cell (i.e., a length scale of the order of 10 µm) and develop tissue behavior as an emergent property of cellular behavior (an individual-based approach). As will be seen, these are not mutually exclusive approaches, and they come in a variety of flavors.

  20. Computational model for chromosomal instability

    NASA Astrophysics Data System (ADS)

    Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina

    2015-03-01

    Faithful segregation of genetic material during cell division requires alignment of the chromosomes between the spindle poles and attachment of their kinetochores to each of the poles. Failure of these complex dynamical processes leads to chromosomal instability (CIN), a characteristic feature of several diseases, including cancer. While a multitude of biological factors regulating chromosome congression and bi-orientation have been identified, it is still unclear how they are integrated into a coherent picture. Here we address this issue with a three-dimensional computational model of motor-driven chromosome congression and bi-orientation. Our model reveals that successful cell division requires control of the total number of microtubules: if this number is too small, bi-orientation fails, while if it is too large, not all the chromosomes are able to congress. The optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles. Our results shed new light on the origin of several pathological conditions related to chromosomal instability.

  1. Computational modeling of membrane proteins

    PubMed Central

    Leman, Julia Koehler; Ulmschneider, Martin B.; Gray, Jeffrey J.

    2014-01-01

    The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the Protein Data Bank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, which only highlights how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or the precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs and β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase in MP structures has (1) facilitated comparative modeling due to the availability of more and better templates and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefitted from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade. PMID:25355688

  2. Cupola Furnace Computer Process Model

    SciTech Connect

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  3. Disciplines, models, and computers: the path to computational quantum chemistry.

    PubMed

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market is much bigger than the community of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), arguably the pivotal theory in the turn to computational quantum chemistry around 1990.

  4. Computer Modeling of a Fusion Plasma

    SciTech Connect

    Cohen, B I

    2000-12-15

    Progress in the study of plasma physics and controlled fusion has been profoundly influenced by dramatic increases in computing capability. Computational plasma physics has become an equal partner with experiment and traditional theory. This presentation illustrates some of the progress in computer modeling of plasma physics and controlled fusion.

  5. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
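
    A minimal sketch of the firing rule that defines data-flow execution (a generic single-token-per-arc formulation with an invented graph, not the authors' formal model): a node fires as soon as tokens are present on all of its input arcs, so execution order emerges from data availability rather than a program counter.

```python
# Data-flow firing rule: a node is enabled when every input arc holds a
# token; firing consumes one token per input arc and produces one token
# per output arc. Each node fires at most once here (single-execution
# semantics). Nodes B and C below are independently enabled after A, so
# a parallel machine could run them concurrently.

nodes = {
    'A': {'ins': [],           'outs': ['a1', 'a2']},   # source node
    'B': {'ins': ['a1'],       'outs': ['b1']},
    'C': {'ins': ['a2'],       'outs': ['c1']},
    'D': {'ins': ['b1', 'c1'], 'outs': []},             # join of B and C
}
tokens = {arc: 0 for n in nodes.values() for arc in n['outs']}

def enabled(name, fired):
    return name not in fired and all(tokens[a] > 0 for a in nodes[name]['ins'])

fired = []
while True:
    ready = [n for n in nodes if enabled(n, fired)]   # all firable nodes
    if not ready:
        break
    n = ready[0]                   # a scheduler could pick any ready node
    for a in nodes[n]['ins']:
        tokens[a] -= 1             # consume input tokens
    for a in nodes[n]['outs']:
        tokens[a] += 1             # produce output tokens
    fired.append(n)

print(fired)   # e.g. ['A', 'B', 'C', 'D']
```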

  6. Predictive Models and Computational Toxicology

    EPA Science Inventory

    Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was l...

  7. Connectionist Models for Intelligent Computation.

    DTIC Science & Technology

    1988-08-31

    [OCR-garbled excerpt; recoverable fragments:] ...Sun, Y.C. Lee and H.H. Chen, Department of Physics and Astronomy and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. ABSTRACT: ...distributed in the network. II. TRAINING OF THE NETWORK: The stereo vision is achieved by detecting the binocular disparity of the two images observed by...

  8. Applications of computer modeling to fusion research

    SciTech Connect

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.

  9. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

    We will present our recent studies on the computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to the environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, by Dr. David Clark at Los Alamos, and by Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the resulting free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed the deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO2(H2O)5^2+, NpO2(H2O)6^+, and PuO2(H2O)5^2+ complexes. Our computed Gibbs free energy in solution (7.27 kcal/mol) for the first time agrees with the experiment (7.1 kcal

  10. Leverage points in a computer model

    NASA Astrophysics Data System (ADS)

    Janošek, Michal

    2016-06-01

    This article focuses on the analysis of leverage points (developed by D. Meadows) in a computer model. The goal is to find out whether these leverage points can be identified in a computer model (using the example of a predator-prey model) and to determine how the particular parameters, their ranges, and the monitored variables of the model are associated with the concept of leverage points.
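
    A hedged sketch of the kind of experiment this suggests, using a standard Lotka-Volterra predator-prey model with illustrative parameters (not the article's actual model): sweep a candidate parameter and watch how strongly a monitored variable responds.

```python
# Lotka-Volterra predator-prey model: dx/dt = a*x - b*x*y (prey),
# dy/dt = c*b*x*y - d*y (predator). Sweep the predation-rate parameter b
# and monitor the peak prey population; a large output change per unit
# parameter change marks a high-leverage point in Meadows' sense.
# All parameter values are illustrative.

def peak_prey(a=1.0, b=0.5, c=0.5, d=1.0, x0=2.0, y0=1.0, dt=0.001, T=20.0):
    """Integrate the ODEs with simple Euler steps and return the peak
    prey population x over the run."""
    x, y, peak = x0, y0, x0
    for _ in range(int(T / dt)):
        dx = (a * x - b * x * y) * dt
        dy = (c * b * x * y - d * y) * dt
        x, y = x + dx, y + dy
        peak = max(peak, x)
    return peak

for b in (0.25, 0.5, 1.0, 2.0):
    print(f"b = {b:4.2f} -> peak prey = {peak_prey(b=b):6.2f}")
```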

  11. Model Railroading and Computer Fundamentals

    ERIC Educational Resources Information Center

    McCormick, John W.

    2007-01-01

    Less than one half of one percent of all processors manufactured today end up in computers. The rest are embedded in other devices such as automobiles, airplanes, trains, satellites, and nearly every modern electronic device. Developing software for embedded systems requires a greater knowledge of hardware than developing for a typical desktop…

  12. Computational modeling of peripheral pain: a commentary.

    PubMed

    Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S

    2015-06-11

    This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.

  13. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  14. Geometric Modeling for Computer Vision

    DTIC Science & Technology

    1974-10-01

    Vision and Artificial Intelligence could lead to robots, androids and cyborgs which will be able to see, to think and to feel conscious 10.4...the construction of computer representations of physical objects, cameras, images and light for the sake of simulating their behavior. In Artificial ...specifically, I wish to exclude the connotation that the theory is a natural theory of vision. Perhaps there can be such a thing as an artificial theory

  15. Computational Model for Armor Penetration

    DTIC Science & Technology

    1987-10-01

    the penetration calculation with a slide line in the target, the impact velocity was artificially raised to avoid impact of the projectile sides onto...Lagrangian equations governing motion of a continuous medium. The solution technique is called the method of artificial viscosity because of the...fronts, although no discontinuities occur in the computed flow field. With this artificial viscosity method, the equations of continuous flow can be

  16. Ranked retrieval of Computational Biology models

    PubMed Central

    2010-01-01

    Background The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Results Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. Conclusions The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models. PMID:20701772
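
    As a toy illustration of the retrieval style described (not the BioModels Database implementation; the three "models" and their annotations are invented), a TF-IDF index over model meta-information with cosine-similarity ranking looks like this:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # toy "model database": id -> meta-information text
        models = {
            "BIOMD001": "glycolysis kinetic model yeast ODE",
            "BIOMD002": "cell cycle regulation cyclin CDK model",
            "BIOMD003": "calcium oscillation signaling hepatocyte model",
        }
        ids = list(models)
        vectorizer = TfidfVectorizer()
        index = vectorizer.fit_transform(models[i] for i in ids)

        def search(query, k=3):
            # rank all models by cosine similarity to the query vector
            scores = cosine_similarity(vectorizer.transform([query]), index)[0]
            return sorted(zip(ids, scores), key=lambda p: -p[1])[:k]

        print(search("cell cycle model"))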

  17. Computational Model for Corneal Transplantation

    NASA Astrophysics Data System (ADS)

    Cabrera, Delia

    2003-10-01

    We evaluated the refractive consequences of corneal transplants using a biomechanical model with homogeneous and inhomogeneous Young's modulus distributions within the cornea, taking into account ablation of some stromal tissue. A finite element model was used to simulate corneal transplants in a diseased cornea. The diseased cornea was modeled as an axisymmetric structure using a nonlinearly elastic, isotropic formulation. The model simulating the penetrating keratoplasty procedure produces a greater change in postoperative corneal curvature than the models simulating the anterior and posterior lamellar graft procedures. When a lenticle-shaped piece of tissue was ablated from the graft during anterior and posterior keratoplasty, the models provided an additional correction of about -3.85 and -4.45 diopters, respectively. Despite the controversy around treating corneal thinning disorders with volume-removal procedures, the results indicate that significant changes in corneal refractive power could be introduced by corneal transplantation combined with myopic laser ablation.

  18. Computational Model Optimization for Enzyme Design Applications

    DTIC Science & Technology

    2007-11-02

    naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.

  19. Computer modeling of human decision making

    NASA Technical Reports Server (NTRS)

    Gevarter, William B.

    1991-01-01

    Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included, as well as models which include motivation. Both models that have associated computer programs and those that do not are considered. Since flow diagrams that assist in constructing computer simulations of such models were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in the construction of more realistic future simulations of human decision making.

  20. A new epidemic model of computer viruses

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-06-01

    This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincaré-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended for curbing virus prevalence.
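
    The paper's exact equations are not reproduced in this abstract; the sketch below is a generic two-compartment stand-in that captures the key structural point, namely that a constant infection influx via removable media removes the virus-free equilibrium, leaving a single endemic one. All rates are illustrative.

        import numpy as np
        from scipy.integrate import odeint

        def virus_model(y, t, mu=0.1, beta=0.5, gamma=0.2, eps=0.01):
            S, I = y
            # eps models infection via removable media: even at I = 0,
            # dI/dt = eps*S > 0, so no virus-free equilibrium exists
            dS = mu - beta * S * I - mu * S + gamma * I - eps * S
            dI = beta * S * I + eps * S - gamma * I - mu * I
            return [dS, dI]

        t = np.linspace(0.0, 200.0, 1000)
        S, I = odeint(virus_model, [0.99, 0.01], t).T
        print("steady infected fraction ~", round(I[-1], 4))  # positive endemic level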

  1. Applications of Computational Modeling in Cardiac Surgery

    PubMed Central

    Lee, Lik Chuan; Genet, Martin; Dang, Alan B.; Ge, Liang; Guccione, Julius M.; Ratcliffe, Mark B.

    2014-01-01

    Although computational modeling is common in many areas of science and engineering, only recently have advances in experimental techniques and medical imaging allowed this tool to be applied in cardiac surgery. Despite its infancy in cardiac surgery, computational modeling has been useful in calculating the effects of clinical devices and surgical procedures. In this review, we present several examples that demonstrate the capabilities of computational cardiac modeling in cardiac surgery. Specifically, we demonstrate its ability to simulate surgery, predict myofiber stress and pump function, and quantify changes to regional myocardial material properties. In addition, issues that would need to be resolved in order for computational modeling to play a greater role in cardiac surgery are discussed. PMID:24708036

  2. COSP - A computer model of cyclic oxidation

    NASA Technical Reports Server (NTRS)

    Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.

    1991-01-01

    A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
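
    A schematic of the uniform-spall logic described above (not the actual COSP code; the rate constant, spall fraction, and oxide stoichiometry are invented) can be written in a few lines:

        # parabolic scale growth while hot, fixed-fraction oxide loss on each cooldown
        kp = 0.05        # parabolic rate constant (mg^2 cm^-4 h^-1), illustrative
        q = 0.02         # fraction of oxide spalled per cooling cycle, illustrative
        dt = 1.0         # hot time per cycle (h)
        stoich = 0.3     # oxygen mass fraction of the oxide, illustrative

        oxide, weight = 0.0, 0.0
        for cycle in range(500):
            growth = (oxide**2 + kp * dt) ** 0.5 - oxide   # parabolic kinetics
            oxide += growth
            weight += stoich * growth                      # oxygen uptake while hot
            spall = q * oxide                              # uniform spall on cooling
            oxide -= spall
            weight -= spall                                # lost oxide = lost weight
        print("net specimen weight change:", round(weight, 3), "mg/cm^2")

    As in COSP-type models, the net weight change first rises with oxygen uptake and then turns negative once cumulative spallation dominates, which is the behavior this loop reproduces.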

  3. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  4. Enhanced absorption cycle computer model

    NASA Astrophysics Data System (ADS)

    Grossman, G.; Wilk, M.

    1993-09-01

    Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
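
    The modular unit-subroutine idea can be sketched in miniature (a single stand-in "absorber" unit with made-up coefficients, not the report's working-fluid property routines): each unit contributes a residual equation that a generic solver drives to zero.

        from scipy.optimize import fsolve

        def absorber(T_out, T_in=90.0, UA=0.8, T_coolant=30.0, mcp=1.0):
            # residual of a steady-state heat balance: duty in = duty rejected
            return mcp * (T_in - T_out) - UA * (T_out - T_coolant)

        T_solution = fsolve(absorber, x0=60.0)[0]
        duty = 1.0 * (90.0 - T_solution)
        print(f"absorber outlet {T_solution:.1f} C, duty {duty:.2f} kW")

    A full cycle simulation chains many such residuals (one set per unit, coupled through shared state points) and solves them simultaneously, which is essentially what the modular code described above does.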

  5. Computer Model Locates Environmental Hazards

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.

  6. Computational Viscoplasticity Based on Overstress (CVBO) Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zheng; Ruggles-Wrenn, Marina; Fish, Jacob

    2014-03-01

    This article presents an efficient computational viscoplasticity based on overstress (CVBO) model, including its three-dimensional formulation, implicit stress-update procedures, consistent tangent, and systematic calibration of the model parameters to experimental data. The model has been validated for PMR-15 neat resin, including temperature and aging dependence.

  7. Computer Modeling of Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Hashim, Rauzah

    This chapter outlines the methodologies and models which are commonly used in the simulation of liquid crystals. The approach in the simulation of liquid crystals has always been to understand the nature of the phase and to relate this to fundamental molecular features such as geometry and intermolecular forces, before important properties related to certain applications are elucidated. Hence, preceding the description of the main "molecular-based" models for liquid crystals, a general but brief outline of the nature of liquid crystals and their historical development is given. Three main model classes, namely the coarse-grained single-site lattice and Gay-Berne models and the full atomistic model, are described here; for each, a brief review is given, followed by an assessment of its application in describing phase phenomena, with an emphasis on understanding the molecular organization in liquid crystal phases and the prediction of their bulk properties. Variants and hybrid models derived from these classes and their applications are also covered.

  8. ESPC Computational Efficiency of Earth System Models

    DTIC Science & Technology

    2014-09-30

    ESPC Computational Efficiency of Earth System Models...optimization in this system. Figure 1 – Plot showing seconds per forecast day wallclock time for a T639L64 (~21 km at the equator) NAVGEM

  9. Comprehensive silicon solar-cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

    A comprehensive silicon solar cell computer modeling scheme was developed to perform the following tasks: (1) modeling and analysis of the net charge distribution in quasineutral regions; (2) analysis of the experimentally determined temperature behavior of Spire Corp. n+pp+ solar cells, where the n+-emitter is formed by ion implantation of 75As or 31P; and (3) initial validation of the computer simulation program using Spire Corp. n+pp+ cells.

  10. Parallel computing in atmospheric chemistry models

    SciTech Connect

    Rotman, D.

    1996-02-01

    Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.

  11. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  12. Computer Modeling of Direct Metal Laser Sintering

    NASA Technical Reports Server (NTRS)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is determining the temperature history of parts fabricated using DMLS, in order to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.

  13. Computational study of lattice models

    NASA Astrophysics Data System (ADS)

    Zujev, Aleksander

    This dissertation is composed of descriptions of a few projects undertaken to complete my doctorate at the University of California, Davis. Different as they are, their common feature is that they all deal with simulations of lattice models, and with physics which results from interparticle interactions. As an example, both the Feynman-Kikuchi model (Chapter 3) and the Bose-Fermi mixture (Chapter 4) deal with the conditions under which superfluid transitions occur. The dissertation is divided into two parts. Part I (Chapters 1-2) is theoretical. It describes the systems we study - superfluidity and particularly superfluid helium, and optical lattices. The numerical methods for working with them are described. The use of Monte Carlo methods is another unifying theme of the different projects in this thesis. Part II (Chapters 3-6) deals with applications. It consists of 4 chapters describing different projects. Two of them, the Feynman-Kikuchi model and the Bose-Fermi mixture, are finished and published. The work done on the t-J model, described in Chapter 5, is more preliminary, and the project is far from complete. A preliminary report on it was given at the 2009 APS March Meeting. The Isentropic project, described in the last chapter, is finished. A report on it was given at the 2010 APS March Meeting, and a paper is in preparation. The quantum simulation program used for the Bose-Fermi mixture project was written by our collaborators Valery Rousseau and Peter Denteneer. I wrote my own code for the other projects.

  14. Computational modeling of peptide-aptamer binding.

    PubMed

    Rhinehardt, Kristen L; Mohan, Ram V; Srinivas, Goundla

    2015-01-01

    Evolution is the progressive process that holds each living creature in its grasp. From strands of DNA, evolution shapes life in response to our ever-changing environment and time. It is the continued study of this most primitive process that has led to the advancement of modern biology. The success and failure in the reading, processing, replication, and expression of genetic code and its resulting biomolecules keep the delicate balance of life. Investigations into these fundamental processes continue to make headlines as science explores smaller-scale interactions with increasing complexity. New applications and advanced understanding of DNA, RNA, peptides, and proteins are pushing technology and science forward together. Today the addition of computers and advances in science have led to the fields of computational biology and chemistry. Through these computational advances it is now possible not only to quantify the end results but also to visualize, analyze, and fully understand mechanisms by gaining deeper insights. The biomolecular motion governing physical and chemical phenomena can now be analyzed with the advent of computational modeling. Ever-increasing computational power combined with efficient algorithms and components is further expanding the fidelity and scope of such modeling and simulations. This chapter discusses computational methods applied to biological processes, in particular the computational modeling of peptide-aptamer binding.

  15. Evaluation and Comparison of Computational Models

    PubMed Central

    Myung, Jay; Tang, Yun; Pitt, Mark A.

    2009-01-01

    Computational models are powerful tools that can enhance the understanding of scientific phenomena. The enterprise of modeling is most productive when the reasons underlying the adequacy of a model, and possibly its superiority to other models, are understood. This chapter begins with an overview of the main criteria that must be considered in model evaluation and selection, in particular explaining why generalizability is the preferred criterion for model selection. This is followed by a review of measures of generalizability. The final section demonstrates the use of five versatile and easy-to-use selection methods for choosing between two mathematical models of protein folding. PMID:19216931

  16. Mechanistic models in computational social science

    NASA Astrophysics Data System (ADS)

    Holme, Petter; Liljeros, Fredrik

    2015-09-01

    Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have a history of over 60 years. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.

  17. A computational model of the cerebellum

    SciTech Connect

    Travis, B.J.

    1990-01-01

    The need for realistic computational models of neural microarchitecture is growing increasingly apparent. While traditional neural networks have made inroads on understanding cognitive functions, more realism (in the form of structural and connectivity constraints) is required to explain processes such as vision or motor control. A highly detailed computational model of mammalian cerebellum has been developed. It is being compared to physiological recordings for validation purposes. The model is also being used to study the relative contributions of each component to cerebellar processing. 28 refs., 4 figs.

  18. A Computational Model of Selection by Consequences

    ERIC Educational Resources Information Center

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  19. Modeling User Behavior in Computer Learning Tasks.

    ERIC Educational Resources Information Center

    Mantei, Marilyn M.

    Model building techniques from Artificial Intelligence and Information-Processing Psychology are applied to human-computer interface tasks to evaluate existing interfaces and suggest new and better ones. The model is in the form of an augmented transition network (ATN) grammar which is built by applying grammar induction heuristics on a sequential…

  20. Computer Modeling and Visualization in Design Technology: An Instructional Model.

    ERIC Educational Resources Information Center

    Guidera, Stan

    2002-01-01

    Design visualization can increase awareness of issues related to perceptual and psychological aspects of design that computer-assisted design and computer modeling may not allow. A pilot university course developed core skills in modeling and simulation using visualization. Students were consistently able to meet course objectives. (Contains 16…

  1. Do's and Don'ts of Computer Models for Planning

    ERIC Educational Resources Information Center

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  2. EWE: A computer model for ultrasonic inspection

    NASA Astrophysics Data System (ADS)

    Douglas, S. R.; Chaplin, K. R.

    1991-11-01

    The computer program EWE simulates the propagation of elastic waves in solids and liquids. It was applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues.
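
    For flavor, here is a minimal 1-D velocity-stress finite-difference scheme of the kind used for elastic wave propagation (a generic textbook scheme, not EWE's actual algorithm; the material constants are illustrative):

        import numpy as np

        nx, nt = 400, 800
        dx, dt = 1.0, 2e-4
        rho, mu = 2500.0, 3e10                   # density, shear modulus (illustrative)
        # CFL check: wave speed sqrt(mu/rho) ~ 3464 m/s, so c*dt/dx ~ 0.69 < 1

        v = np.zeros(nx)                         # particle velocity
        s = np.zeros(nx)                         # stress
        v[nx // 2] = 1.0                         # impulsive source in the middle

        for n in range(nt):
            s[:-1] += dt * mu * np.diff(v) / dx       # update stress from velocity
            v[1:] += dt * np.diff(s) / (rho * dx)     # update velocity from stress

        print("peak |v| after propagation:", round(float(np.abs(v).max()), 4))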

  3. CDF computing and event data models

    SciTech Connect

    Snider, F.D.; /Fermilab

    2005-12-01

    The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.

  4. Computational disease modeling – fact or fiction?

    PubMed Central

    Tegnér, Jesper N; Compte, Albert; Auffray, Charles; An, Gary; Cedersund, Gunnar; Clermont, Gilles; Gutkin, Boris; Oltvai, Zoltán N; Stephan, Klaas Enno; Thomas, Randy; Villoslada, Pablo

    2009-01-01

    Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational-modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems. PMID:19497118

  5. An improved computational constitutive model for glass

    NASA Astrophysics Data System (ADS)

    Holmquist, Timothy J.; Johnson, Gordon R.; Gerlach, Charles A.

    2017-01-01

    In 2011, Holmquist and Johnson presented a model for glass subjected to large strains, high strain rates and high pressures. It was later shown that this model produced solutions that were severely mesh dependent, converging to a solution that was much too strong. This article presents an improved model for glass that uses a new approach to represent the interior and surface strength that is significantly less mesh dependent. This new formulation allows for the laboratory data to be accurately represented (including the high tensile strength observed in plate-impact spall experiments) and produces converged solutions that are in good agreement with ballistic data. The model also includes two new features: one that decouples the damage model from the strength model, providing more flexibility in defining the onset of permanent deformation; the other provides for a variable shear modulus that is dependent on the pressure. This article presents a review of the original model, a description of the improved model and a comparison of computed and experimental results for several sets of ballistic data. Of special interest are computed and experimental results for two impacts onto a single target, and the ability to compute the damage velocity in agreement with experiment data. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.

  6. A computational model of selection by consequences.

    PubMed Central

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512
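
    The hyperbolic relation referred to is Herrnstein's, R = kr/(r + r{sub e}); fitting its two parameters to response- and reinforcement-rate data takes only a few lines (the data points below are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        def hyperbola(r, k, re):
            # R = k*r/(r + re): response rate as a function of reinforcement rate
            return k * r / (r + re)

        r = np.array([5.0, 10, 20, 40, 80, 160])    # reinforcements/hour (toy data)
        R = np.array([18.0, 30, 46, 60, 72, 78])    # responses/minute (toy data)
        (k, re), _ = curve_fit(hyperbola, r, R, p0=(80.0, 20.0))
        print(f"k = {k:.1f} responses/min, re = {re:.1f} reinforcers/h")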

  7. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

    Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l{sub 1} penalty term on the parameter vector of the traditional l{sub 2} minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
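
    A minimal sketch of LASSO-based structure detection (a generic illustration, not the paper's aeroelastic model; the true system and the candidate-term library here are invented):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        x1, x2 = rng.normal(size=(2, 500))
        y = 2.0 * x1 - 0.5 * x1 * x2 + 0.01 * rng.normal(size=500)  # true system

        # overcomplete library of candidate regressors
        library = np.column_stack([x1, x2, x1 * x2, x1**2, x2**2])
        names = ["x1", "x2", "x1*x2", "x1^2", "x2^2"]

        model = Lasso(alpha=0.01).fit(library, y)
        for name, coef in zip(names, model.coef_):
            if abs(coef) > 1e-3:
                print(f"{name}: {coef:+.3f}")   # the l1 penalty zeroes the rest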

  8. Efficient Calibration of Computationally Intensive Hydrological Models

    NASA Astrophysics Data System (ADS)

    Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.

    2015-12-01

    A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
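
    The DDS half of the hybrid is compact enough to sketch. Below is a stripped-down version of the published DDS heuristic (bound handling simplified to clipping; the objective is a toy stand-in for an expensive hydrological model run):

        import numpy as np

        def dds(objective, lo, hi, budget=2000, r=0.2, seed=1):
            rng = np.random.default_rng(seed)
            best = lo + rng.random(lo.size) * (hi - lo)
            fbest = objective(best)
            for i in range(1, budget):
                # perturb a shrinking random subset of parameters
                p = 1.0 - np.log(i) / np.log(budget)
                mask = rng.random(lo.size) < p
                if not mask.any():
                    mask[rng.integers(lo.size)] = True   # perturb at least one
                trial = best.copy()
                step = r * (hi - lo) * rng.standard_normal(lo.size)
                trial[mask] = np.clip(best[mask] + step[mask], lo[mask], hi[mask])
                ftrial = objective(trial)
                if ftrial < fbest:                       # greedy acceptance
                    best, fbest = trial, ftrial
            return best, fbest

        sphere = lambda x: float(np.sum((x - 3.0) ** 2))  # toy objective
        best, fbest = dds(sphere, np.zeros(10), np.full(10, 10.0))
        print(np.round(best, 2), round(fbest, 4))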

  9. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both, for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly the analysis is performed using Groebner basis, Hilbert dimension and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed by the changes in the algebraic structure of R0, the changes in Groebner basis, the changes in Hilbert dimension, and the changes in Hilbert polynomials. It is hoped that the results obtained in this paper become of importance for designing control measures against the epidemic diseases described. For future researches it is proposed the use of algebraic epidemiology to analyze models for airborne and waterborne diseases.
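
    The same style of computation can be reproduced outside Maple; as an illustration (a toy SIR model with vital dynamics, not one of the paper's Schistosomiasis or Dengue systems), sympy's groebner gives:

        import sympy as sp

        S, I, beta, gamma, mu = sp.symbols("S I beta gamma mu", positive=True)
        steady_state = [mu - beta * S * I - mu * S,       # dS/dt = 0
                        beta * S * I - (gamma + mu) * I]  # dI/dt = 0

        G = sp.groebner(steady_state, S, I, order="lex")
        for g in G.exprs:
            print(g)
        # The lex basis eliminates S; its univariate generator in I factors as I
        # times a linear term whose root is positive exactly when
        # R0 = beta/(gamma + mu) > 1, the threshold this kind of analysis reads off.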

  10. Empirical Movement Models for Brain Computer Interfaces.

    PubMed

    Matlack, Charles; Chizeck, Howard; Moritz, Chet T

    2016-06-30

    For brain-computer interfaces (BCIs) which provide the user continuous position control, there is little standardization of performance metrics or evaluative tasks. One candidate metric is Fitts's law, which has been used to describe aimed movements across a range of computer interfaces, and has recently been applied to BCI tasks. Reviewing selected studies, we identify two basic problems with Fitts's law: its predictive performance is fragile, and the estimation of 'information transfer rate' from the model is unsupported. Our main contribution is the adaptation and validation of an alternative model to Fitts's law in the BCI context. We show that the Shannon-Welford model outperforms Fitts's law, showing robust predictive power when target distance and width have disproportionate effects on difficulty. Building on a prior study of the Shannon-Welford model, we show that the identified model parameters offer a novel approach to quantitatively assessing the role of control-display gain in speed/accuracy performance tradeoffs during brain control.
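
    For reference, one common statement of the two competing forms (notation assumed here rather than taken from the paper: MT is movement time, D target distance, W target width, and a, b, b{sub 1}, b{sub 2} fitted constants):

        % Fitts's law, Shannon formulation: a single index of difficulty couples D and W
        MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

        % Shannon-Welford form: distance and width are weighted separately, which is
        % what lets the model absorb disproportionate effects of D and W
        MT = a + b_1 \log_2\!\left(D + W\right) - b_2 \log_2 W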

  11. Computational Spectrum of Agent Model Simulation

    SciTech Connect

    Perumalla, Kalyan S

    2010-01-01

    The study of human social behavioral systems is finding renewed interest in military, homeland security and other applications. Simulation is the most generally applied approach to studying complex scenarios in such systems. Here, we outline some of the important considerations that underlie the computational aspects of simulation-based study of human social systems. The fundamental imprecision underlying questions and answers in social science makes it necessary to carefully distinguish among different simulation problem classes and to identify the most pertinent set of computational dimensions associated with those classes. We identify a few such classes and present their computational implications. The focus is then shifted to the most challenging combinations in the computational spectrum, namely, large-scale entity counts at moderate to high levels of fidelity. Recent developments in furthering the state-of-the-art in these challenging cases are outlined. A case study of large-scale agent simulation is provided in simulating large numbers (millions) of social entities at real-time speeds on inexpensive hardware. Recent computational results are identified that highlight the potential of modern high-end computing platforms to push the envelope with respect to speed, scale and fidelity of social system simulations. Finally, the problem of shielding the modeler or domain expert from the complex computational aspects is discussed and a few potential solution approaches are identified.

  12. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder bed additive manufacturing of IN 718 is presented. The goals are to optimize material build parameters with reduced time and cost through modeling, increase understanding of build properties, increase the reliability of builds, decrease the time to adoption of the process for critical hardware, and potentially decrease post-build heat treatments. The approach: conduct single-track and coupon builds at various build parameters; record build parameter information and QM meltpool data; refine the Applied Optimization powder bed AM process model using these data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report the STK modeling results; and validate the modeling with an additional build. Initial findings: photodiode intensity measurements are highly linear with power input; melt pool intensity is highly correlated with melt pool size; and melt pool size and intensity increase with power. Applied Optimization will use the data to develop the powder bed additive manufacturing process model.

  13. Computational Modeling of Inflammation and Wound Healing

    PubMed Central

    Ziraldo, Cordelia; Mi, Qi; An, Gary; Vodovotz, Yoram

    2013-01-01

    Objective Inflammation is both central to proper wound healing and a key driver of chronic tissue injury via a positive-feedback loop incited by incidental cell damage. We seek to derive actionable insights into the role of inflammation in wound healing in order to improve outcomes for individual patients. Approach To date, dynamic computational models have been used to study the time evolution of inflammation in wound healing. Emerging clinical data on histo-pathological and macroscopic images of evolving wounds, as well as noninvasive measures of blood flow, suggested the need for tissue-realistic, agent-based, and hybrid mechanistic computational simulations of inflammation and wound healing. Innovation We developed a computational modeling system, Simple Platform for Agent-based Representation of Knowledge, to facilitate the construction of tissue-realistic models. Results A hybrid equation–agent-based model (ABM) of pressure ulcer formation in both spinal cord-injured and -uninjured patients was used to identify control points that reduce stress caused by tissue ischemia/reperfusion. An ABM of arterial restenosis revealed new dynamics of cell migration during neointimal hyperplasia that match histological features, but contradict the currently prevailing mechanistic hypothesis. ABMs of vocal fold inflammation were used to predict inflammatory trajectories in individuals, possibly allowing for personalized treatment. Conclusions The intertwined inflammatory and wound healing responses can be modeled computationally to make predictions in individuals, simulate therapies, and gain mechanistic insights. PMID:24527362

  14. Utilizing computer models for optimizing classroom acoustics

    NASA Astrophysics Data System (ADS)

    Hinckley, Jennifer M.; Rosenberg, Carl J.

    2002-05-01

    The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.
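
    A back-of-envelope companion to such studies is Sabine's formula for reverberation time, RT60 = 0.161 V / A; the room dimensions and absorption coefficients below are illustrative, not values from the paper:

        V = 7.0 * 9.0 * 3.0                     # classroom volume, m^3
        surfaces = [                            # (area m^2, absorption coefficient)
            (63.0, 0.05),                       # floor: tile
            (63.0, 0.70),                       # ceiling: acoustic panels
            (96.0, 0.10),                       # walls: painted drywall
        ]
        A = sum(s * a for s, a in surfaces)     # total absorption, metric sabins
        rt60 = 0.161 * V / A
        print(f"RT60 = {rt60:.2f} s")           # classroom guidance: about 0.6 s or less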

  15. Global detailed geoid computation and model analysis

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Vincent, S.

    1974-01-01

    Comparisons and analyses were carried out through the use of detailed gravimetric geoids which we have computed by combining models with a set of 26,000 1 deg x 1 deg mean free air gravity anomalies. The accuracy of the detailed gravimetric geoid computed using the most recent Goddard earth model (GEM-6) in conjunction with the set of 1 deg x 1 deg mean free air gravity anomalies is assessed at + or - 2 meters on the continents of North America, Europe, and Australia, 2 to 5 meters in the Northeast Pacific and North Atlantic areas, and 5 to 10 meters in other areas where surface gravity data are sparse. The R.M.S. differences between this detailed geoid and the detailed geoids computed using the other satellite gravity fields in conjunction with the same set of surface data range from 3 to 7 meters.

  16. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  17. Computational models for synthetic marine infrared clutter

    NASA Astrophysics Data System (ADS)

    Constantikes, Kim T.; Zysnarski, Adam H.

    1996-06-01

    The next generation of ship defense missiles will need to engage stealthy, passive, sea-skimming missiles. Detection and guidance will occur against a background of sea surface and horizon which can present significant clutter problems for infrared seekers, particularly when targets are comparatively dim. We need a variety of sea clutter models: statistical image models for signal processing algorithm design, clutter occurrence models for systems effectiveness assessment, and constructive image models for synthesizing very large field-of-view (FOV) images with high spatial and temporal resolution. We have implemented and tested such a constructive model. First principle models of water waves and light transport provide a computationally intensive clutter model implemented as a raytracer. Our models include sea, sky, and solar radiance; reflectance; attenuating atmospheres; constructive solid geometry targets; target and water wave dynamics; and simple sensor image formation.

  18. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  19. Optical Computing Based on Neuronal Models

    DTIC Science & Technology

    1988-05-01

    walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled...collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is...with architectures, implementations, and programming; and material research is called for. Our future research in neurodynamics will continue to

  20. Computer Modelling of Photochemical Smog Formation

    ERIC Educational Resources Information Center

    Huebert, Barry J.

    1974-01-01

    Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)

  1. Applications of computational modeling in ballistics

    NASA Technical Reports Server (NTRS)

    Sturek, Walter B.

    1987-01-01

    The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas was traditionally highly dependent on experimental testing. Considerable emphasis was placed on the development of computational modeling to augment the experimental testing in the development cycle; however, the impact of the computational modeling to date is modest. With the availability of supercomputer computational resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved. An attempt was made to give some information as to the degree of success achieved and to indicate the areas of greatest need.

  2. Informing Mechanistic Toxicology with Computational Molecular Models

    EPA Science Inventory

    Computational molecular models of chemicals interacting with biomolecular targets provides toxicologists a valuable, affordable, and sustainable source of in silico molecular level information that augments, enriches, and complements in vitro and in vivo effo...

  3. Evaluating computational models of cholesterol metabolism.

    PubMed

    Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K

    2015-10-01

    Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered. Yet important gaps in our knowledge remain. For example, knowledge on intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism that make use of ordinary differential equations, and we assess whether they use assumptions and make predictions in line with current knowledge on cholesterol homeostasis. Having studied the results described by the authors, we have also tested their models. This was done primarily by testing the effect of statin treatment in each model. Ten out of eleven models tested made assumptions in line with current knowledge of cholesterol metabolism. Three out of the ten remaining models made correct predictions, i.e. predicting a decrease in plasma total and LDL cholesterol, or increased uptake of LDL, upon statin treatment. In conclusion, few models of cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles and welcome the increased understanding of cholesterol metabolism these are likely to bring.
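
    A toy version of that statin test (a deliberately minimal one-pool ODE with invented rates, far simpler than the reviewed models) looks like this: a statin lowers the synthesis rate, and a sound model should then predict a lower steady-state plasma LDL.

        import numpy as np
        from scipy.integrate import odeint

        def ldl_model(y, t, synthesis, clearance=0.1):
            # hepatic synthesis feeds plasma LDL; clearance removes it
            return [synthesis - clearance * y[0]]

        t = np.linspace(0.0, 200.0, 500)
        baseline = odeint(ldl_model, [10.0], t, args=(1.0,))[-1, 0]
        on_statin = odeint(ldl_model, [10.0], t, args=(0.6,))[-1, 0]  # 40% less synthesis
        print(f"steady-state LDL: {baseline:.1f} -> {on_statin:.1f} (expect a decrease)")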

  4. Research and Development Project Prioritization - Computer Model

    DTIC Science & Technology

    1980-04-01

    ...aggregation of multiple criteria and rank-ordered requirements for product priorities...reduced-length lists (down to...Quantities of 50 and 51 respectively were reduced one each, without loss of generalization, to permit model computation...contrived examples from the literature. The model generally failed to find aggregation methods that...demonstrated for an extensive R & D

  5. Computer Model Of Fragmentation Of Atomic Nuclei

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.

    1995-01-01

    The High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program was developed to be a computationally efficient, user-friendly, physics-based program for generating data bases on the fragmentation of atomic nuclei. The data bases generated are used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. The program provides cross sections for the production of individual elements and isotopes in breakups of high-energy heavy ions by the combined nuclear and Coulomb fields of interacting nuclei. It is written in ANSI FORTRAN 77.

  6. Queuing theory models for computer networks

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because the models do not require fine detail of the network traffic rates, traffic patterns, or the hardware used to implement the networks, the impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed quickly. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
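
    The core of such a spreadsheet model is the textbook M/M/1 result T = 1/(mu - lambda); a few lines reproduce the kind of estimate described (the link capacity and loads below are illustrative):

        def mm1_response(arrival_rate, service_rate):
            # mean response time of an M/M/1 queue; valid only below saturation
            if arrival_rate >= service_rate:
                raise ValueError("queue is unstable: utilization >= 1")
            rho = arrival_rate / service_rate          # channel utilization
            return rho, 1.0 / (service_rate - arrival_rate)

        for load in (100.0, 400.0, 700.0):             # messages/s on an 800 msg/s link
            rho, t = mm1_response(load, 800.0)
            print(f"load {load:4.0f}/s  utilization {rho:.2f}  mean response {t*1e3:.1f} ms")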

  7. A computational model of bleb formation

    PubMed Central

    Strychalski, Wanda; Guy, Robert D.

    2013-01-01

    Blebbing occurs when the cytoskeleton detaches from the cell membrane, resulting in the pressure-driven flow of cytosol towards the area of detachment and the local expansion of the cell membrane. Recent interest has focused on cells that use blebbing for migrating through 3D fibrous matrices. In particular, metastatic cancer cells have been shown to use blebs for motility. A dynamic computational model of the cell is presented that includes mechanics of and the interactions between the intracellular fluid, the actin cortex and the cell membrane. The computational model is used to explore the relative roles in bleb formation time of cytoplasmic viscosity and drag between the cortex and the cytosol. A regime of values for the drag coefficient and cytoplasmic viscosity values that match bleb formation timescales is presented. The model results are then used to predict the Darcy permeability and the volume fraction of the cortex. PMID:22294562

  8. Computational Modeling of Vortex Generators for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, R. V.

    2002-01-01

    In this work computational models were developed and used to investigate applications of vortex generators (VGs) to turbomachinery. The work was aimed at increasing the efficiency of compressor components designed for the NASA Ultra Efficient Engine Technology (UEET) program. Initial calculations were used to investigate the physical behavior of VGs. A parametric study of the effects of VG height was done using 3-D calculations of isolated VGs. A body force model was developed to simulate the effects of VGs without requiring complicated grids. The model was calibrated using 2-D calculations of the VG vanes and was validated using the 3-D results. Then three applications of VGs to a compressor rotor and stator were investigated: 1) The results of the 3-D calculations were used to simulate the use of small casing VGs used to generate rotor preswirl or counterswirl. Computed performance maps were used to evaluate the effects of VGs. 2) The body force model was used to simulate large part-span splitters on the casing ahead of the stator. Computed loss buckets showed the effects of the VGs. 3) The body force model was also used to investigate the use of tiny VGs on the stator suction surface for controlling secondary flows. Near-surface particle traces and exit loss profiles were used to evaluate the effects of the VGs.

  9. Concepts to accelerate water balance model computation

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Casper, Markus; Gemmar, Peter

    2010-05-01

    Computation time of water balance models has decreased with the increasing performance of CPUs over the last decades. Often, these gains have been used to enhance the models, e.g. by increasing spatial resolution or by using smaller simulation time steps. During the last few years, CPU development has tended toward strong multi-core designs rather than simply becoming generally faster. Additionally, computer clusters and even computer clouds have become much more commonly available. All of this further extends our degrees of freedom in simulating water balance models, provided the models can use the computing infrastructure efficiently. In the following, we present concepts to optimize repeated runs in particular, and we discuss opportunities for parallel computing in general. Surveyed model: In our examinations, we focused on the water balance model LARSIM. In this model, the catchment is subdivided into elements, each of which represents a certain section of a river and its contributing area. Each element is further subdivided into compartments of homogeneous land use. During the simulation, the relevant hydrological processes are simulated individually for each compartment. The simulated runoff of all compartments feeds into the river channel of the corresponding element. Finally, channel routing is simulated for all elements. Optimizing repeated runs: During a typical simulation, several input files have to be read before the simulation starts: the model structure, the initial model state, and the meteorological input files. Furthermore, some calculations have to be performed, such as interpolating meteorological values. Thus, an application of Monte Carlo methods will typically use the following algorithm: 1) choose parameters, 2) set parameters in control files, 3) run the model, 4) save the result, 5) repeat from step 1. Obviously, the third step always repeats the previously mentioned reading and preprocessing steps. Consequently, the model can be
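    The optimization described here, hoisting the read-and-preprocess work of step 3 out of the Monte Carlo loop, can be sketched as follows; the function names are placeholders for illustration, not LARSIM's actual interface:

    ```python
    # Hypothetical sketch: hoist input reading/preprocessing out of a Monte Carlo
    # loop. load_inputs() and run_model() stand in for the model's real routines.
    import random

    def load_inputs():
        # read model structure, initial state, meteorological series once
        return {"structure": ..., "state0": ..., "meteo": ...}

    def run_model(inputs, params):
        # simulate all compartments and channel routing for one parameter set
        return {"runoff": ...}

    inputs = load_inputs()                  # done once, not once per run
    results = []
    for _ in range(10_000):                 # Monte Carlo repetitions
        params = {"k_soil": random.uniform(0.1, 2.0)}
        results.append(run_model(inputs, params))
    ```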

  10. Computational modeling of foveal target detection.

    PubMed

    Witus, Gary; Ellis, R Darin

    2003-01-01

    This paper presents the VDM2000, a computational model of target detection designed for use in military developmental test and evaluation settings. The model integrates research results from the fields of early vision, object recognition, and psychophysics. The VDM2000 is image based and provides a criterion-independent measure of target conspicuity, referred to as the vehicle metric (VM). A large data set of human responses to photographs of military vehicles in a field setting was used to validate the model. The VM adjusted by a single calibration parameter accounts for approximately 80% of the variance in the validation data. The primary application of this model is to predict detection of military targets in daylight with the unaided eye. The model also has application to target detection prediction using infrared night vision systems. The model has potential as a tool to evaluate the visual properties of more general task settings.

  11. Molecular Sieve Bench Testing and Computer Modeling

    NASA Technical Reports Server (NTRS)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.

    1995-01-01

    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to changes in the parameters that influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with linear driving force for 5X sorbent and pore diffusion for silica gel are then applied to test data. A more complex model, a non-Darcian (two-dimensional) model, has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
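    A minimal sketch of the modeling approach named in the abstract: a one-dimensional isothermal plug-flow column with a linear-driving-force (LDF) uptake term, solved by explicit finite differences. All parameter values are illustrative, and the report's nonisothermal, axially dispersed formulation is considerably richer than this:

    ```python
    import numpy as np

    nz, nt = 100, 20000
    L, v, dt = 0.5, 0.05, 0.01      # column length (m), velocity (m/s), time step (s)
    dz = L / nz
    k_ldf, K = 0.05, 500.0          # LDF coefficient (1/s), linear isotherm slope
    phase_ratio = 2.0               # lumped solid/gas phase capacity ratio

    c = np.zeros(nz)                # gas-phase concentration
    q = np.zeros(nz)                # adsorbed-phase loading
    c_in = 1.0                      # feed concentration
    breakthrough = []
    for step in range(nt):
        dqdt = k_ldf * (K * c - q)  # LDF uptake toward local equilibrium q* = K c
        q += dt * dqdt
        upwind = np.empty(nz)       # first-order upwind convection
        upwind[0] = (c[0] - c_in) / dz
        upwind[1:] = (c[1:] - c[:-1]) / dz
        c += dt * (-v * upwind - phase_ratio * dqdt)
        c = np.clip(c, 0.0, c_in)
        breakthrough.append(c[-1])  # outlet concentration vs. time
    ```

    Fitting the computed breakthrough curve to test data is what allows mass transfer coefficients such as k_ldf to be estimated.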

  12. Computational Modeling of Pollution Transmission in Rivers

    NASA Astrophysics Data System (ADS)

    Parsaie, Abbas; Haghiabi, Amir Hamzeh

    2015-08-01

    Modeling of river pollution contributes to better management of water quality, which in turn benefits human health. The advection-dispersion equation (ADE) is the governing equation for pollutant transmission in rivers. Modeling the pollution transmission involves numerical solution of the ADE and estimation of the longitudinal dispersion coefficient (LDC). In this paper, a novel approach is proposed for numerical modeling of pollution transmission in rivers, using the finite volume method as the numerical solver together with an artificial neural network (ANN) as a soft computing technique. In this approach, the LDC predicted by the ANN is used as an input parameter for the numerical solution of the ADE. To validate the model's performance on a real engineering problem, pollutant transmission in the Severn River was simulated. Comparison of the final model results with measured data from the Severn River showed good performance. Predicting the LDC with the ANN significantly improved the accuracy of the computer simulation of pollution transmission in the river.
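    A sketch of the coupling idea: an LDC estimate feeds a finite-volume solution of the ADE. The trained ANN is replaced here by Fischer's empirical formula as a stand-in, and boundaries are handled crudely; everything below is illustrative rather than the paper's implementation:

    ```python
    import numpy as np

    def predict_ldc(width, depth, velocity, shear_velocity):
        # Stand-in for the paper's trained ANN: Fischer's empirical formula,
        # D = 0.011 u^2 W^2 / (H u*), in m^2/s.
        return 0.011 * velocity**2 * width**2 / (depth * shear_velocity)

    nx, dx, dt = 200, 10.0, 1.0
    u = 0.5                                  # mean flow velocity (m/s)
    D = predict_ldc(width=20.0, depth=1.0, velocity=u, shear_velocity=0.05)

    c = np.zeros(nx)
    c[0:5] = 100.0                           # initial pollutant slug
    for _ in range(1000):
        adv = -u * (c - np.roll(c, 1)) / dx              # upwind advection
        dif = D * (np.roll(c, -1) - 2*c + np.roll(c, 1)) / dx**2  # dispersion
        c = c + dt * (adv + dif)
        c[0] = 0.0                           # clean upstream boundary (crude)
    ```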

  13. Computational continuum modeling of solder interconnects: Applications

    SciTech Connect

    Burchett, S.N.; Neilsen, M.K.; Frear, D.R.

    1997-04-01

    The most commonly used solder for electrical interconnections in electronic packages is the near-eutectic 60Sn-40Pb alloy. This alloy has a number of processing advantages (a suitable melting point of 183°C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes the prediction of solder joint lifetime complex. A viscoplastic, microstructure-dependent constitutive model for solder, which is currently under development, was implemented into a finite element code. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of a mini ball grid array solder interconnect. In this paper, the constitutive model is first briefly discussed. The results of computational studies to determine the thermomechanical response of mini ball grid array solder interconnects are then presented.

  14. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1993-01-01

    Over the past several years, it has been the primary goal of this grant to design and implement software to be used in the conceptual design of aerospace vehicles. The work carried out under this grant was performed jointly with members of the Vehicle Analysis Branch (VAB) of NASA LaRC, Computer Sciences Corp., and Vigyan Corp. This has resulted in the development of several packages and design studies. Primary among these are the interactive geometric modeling tool, the Solid Modeling Aerospace Research Tool (smart), and the integration and execution tools provided by the Environment for Application Software Integration and Execution (EASIE). In addition, it is the purpose of the personnel of this grant to provide consultation in the areas of structural design, algorithm development, and software development and implementation, particularly in the areas of computer aided design, geometric surface representation, and parallel algorithms.

  15. Computational modeling of material aging effects

    SciTech Connect

    Fang, H.E.

    1996-07-01

    Progress is being made in our efforts to develop computational models for predicting material property changes in weapon components due to aging. The first version of a two-dimensional lattice code for modeling thermomechanical fatigue, such as has been observed in solder joints on electronic components removed from the stockpile, has been written and tested. The code does a good qualitative job of reproducing intergranular and/or transgranular cracking in a polycrystalline material under thermomechanical deformation. This progress is an encouraging start for our long-term effort to develop multi-level simulation capabilities, with high performance computing technology, for predicting age-related effects on the reliability of weapons.

  16. A computer model of auditory stream segregation.

    PubMed

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
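    Two of the model's core ingredients, leaky integration of noisy channel excitation and attentional selection of the most active channel, can be sketched as follows (illustrative parameters, not the published ones):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_steps = 4, 500
    dt, tau = 0.001, 0.05               # time step (s), integration time constant (s)

    drive = np.zeros((n_steps, n_channels))
    drive[:, 1] = 1.0                   # a tone exciting mostly channel 1
    drive[:, 2] = 0.6                   # a weaker tone in a neighboring channel

    x = np.zeros(n_channels)            # accumulated channel excitation
    attended = []
    for t in range(n_steps):
        noisy = drive[t] + 0.2 * rng.standard_normal(n_channels)  # "noisy" output
        x += dt / tau * (noisy - x)     # leaky integration: accumulate and dissipate
        attended.append(int(np.argmax(x)))  # attend to the most active channel
    ```

    Switches in the attended channel over time are the kind of emergent behavior the model relates to perceptual stream segregation.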

  17. Computer modeling and simulation of human movement.

    PubMed

    Pandy, M G

    2001-01-01

    Recent interest in using modeling and simulation to study movement is driven by the belief that this approach can provide insight into how the nervous system and muscles interact to produce coordinated motion of the body parts. With the computational resources available today, large-scale models of the body can be used to produce realistic simulations of movement that are an order of magnitude more complex than those produced just 10 years ago. This chapter reviews how the structure of the neuromusculoskeletal system is commonly represented in a multijoint model of movement, how modeling may be combined with optimization theory to simulate the dynamics of a motor task, and how model output can be analyzed to describe and explain muscle function. Some results obtained from simulations of jumping, pedaling, and walking are also reviewed to illustrate the approach.

  18. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  19. Wild Fire Computer Model Helps Firefighters

    SciTech Connect

    Canfield, Jesse

    2012-09-04

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  20. Wild Fire Computer Model Helps Firefighters

    ScienceCinema

    Canfield, Jesse

    2016-07-12

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  1. Computational Biology: Modeling Chronic Renal Allograft Injury

    PubMed Central

    Stegall, Mark D.; Borrows, Richard

    2015-01-01

    New approaches are needed to develop more effective interventions to prevent long-term rejection of organ allografts. Computational biology provides a powerful tool to assess the large amount of complex data that is generated in longitudinal studies in this area. This manuscript outlines how our two groups are using mathematical modeling to analyze predictors of graft loss using both clinical and experimental data and how we plan to expand this approach to investigate specific mechanisms of chronic renal allograft injury. PMID:26284070

  2. AMAR: A Computational Model of Autosegmental Phonology

    DTIC Science & Technology

    1993-10-01

    the 8th International Joint Conference on Artificial Intelligence. 683-5. Koskenniemi, K. 1984. A general computational model for word-form recognition... Massachusetts Institute of Technology, Artificial Intelligence Laboratory, AI-TR 1450, 545 Technology Square, Cambridge, Massachusetts 02139... To give the reader a feel for the workings of AMAR, this chapter will begin with a very simple example based on an artificial tone language with only t

  3. Computed structures of polyimides model compounds

    NASA Technical Reports Server (NTRS)

    Tai, H.; Phillips, D. H.

    1990-01-01

    Using a semi-empirical approach, a computer study was made of 8 model compounds of polyimides. The compounds represent subunits from which NASA Langley Research Center has successfully synthesized polymers for aerospace high performance material application, including one of the most promising, LARC-TPI polymer. Three-dimensional graphic display as well as important molecular structure data pertaining to these 8 compounds are obtained.

  4. Computational fluid dynamics modelling in cardiovascular medicine

    PubMed Central

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
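    As a toy example of a quantity that CFD yields but direct measurement cannot, wall shear stress for an idealized steady laminar (Poiseuille) vessel segment follows from flow rate and radius alone; the numbers below are illustrative, not from the paper:

    ```python
    import math

    def poiseuille_wss(flow_rate_m3s, radius_m, viscosity_pa_s=3.5e-3):
        """Wall shear stress tau = 4*mu*Q / (pi*R^3) for steady laminar pipe flow."""
        return 4.0 * viscosity_pa_s * flow_rate_m3s / (math.pi * radius_m ** 3)

    q = 1.0e-6          # 1 mL/s expressed in m^3/s
    r = 1.5e-3          # 1.5 mm vessel radius
    print(f"WSS ~ {poiseuille_wss(q, r):.2f} Pa")
    ```

    Full CFD replaces this idealization with patient-specific geometry and pulsatile flow, which is what makes the computed fields clinically useful.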

  5. Computational fluid dynamics modelling in cardiovascular medicine.

    PubMed

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.

  6. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    SciTech Connect

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of Circulating Fluidized Bed Reactor (CFBR) using Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consist of inlet, exit, connecting elbows and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are exploited in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computation meshes, and solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  7. Computational fire modeling for aircraft fire research

    SciTech Connect

    Nicolette, V.F.

    1996-11-01

    This report summarizes work performed by Sandia National Laboratories for the Federal Aviation Administration. The technical issues involved in fire modeling for aircraft fire research are identified, as well as computational fire tools for addressing those issues, and the research which is needed to advance those tools in order to address long-range needs. Fire field models are briefly reviewed, and the VULCAN model is selected for further evaluation. Calculations are performed with VULCAN to demonstrate its applicability to aircraft fire problems, and also to gain insight into the complex problem of fires involving aircraft. Simulations are conducted to investigate the influence of fire on an aircraft in a cross-wind. The interaction of the fuselage, wind, fire, and ground plane is investigated. Calculations are also performed utilizing a large eddy simulation (LES) capability to describe the large-scale turbulence instead of the more common k-{epsilon} turbulence model. Additional simulations are performed to investigate the static pressure and velocity distributions around a fuselage in a cross-wind, with and without fire. The results of these simulations provide qualitative insight into the complex interaction of a fuselage, fire, wind, and ground plane. Reasonable quantitative agreement is obtained in the few cases for which data or other modeling results exist. Finally, VULCAN is used to quantify the impact of simplifying assumptions inherent in a risk assessment compatible fire model developed for open pool fire environments. The assumptions are seen to be of minor importance for the particular problem analyzed. This work demonstrates the utility of using a fire field model for assessing the limitations of simplified fire models. In conclusion, the application of computational fire modeling tools herein provides both qualitative and quantitative insights into the complex problem of aircraft in fires.

  8. Computational acoustic modeling of cetacean vocalizations

    NASA Astrophysics Data System (ADS)

    Gurevich, Michael Dixon

    A framework for computational acoustic modeling of hypothetical vocal production mechanisms in cetaceans is presented. As a specific example, a model of a proposed source in the larynx of odontocetes is developed. Whales and dolphins generate a broad range of vocal sounds, but the exact mechanisms they use are not conclusively understood. In the fifty years since it has become widely accepted that whales can and do make sound, how they do so has remained particularly confounding. Cetaceans' highly divergent respiratory anatomy, along with the difficulty of internal observation during vocalization have contributed to this uncertainty. A variety of acoustical, morphological, ethological and physiological evidence has led to conflicting and often disputed theories of the locations and mechanisms of cetaceans' sound sources. Computational acoustic modeling has been used to create real-time parametric models of musical instruments and the human voice. These techniques can be applied to cetacean vocalizations to help better understand the nature and function of these sounds. Extensive studies of odontocete laryngeal morphology have revealed vocal folds that are consistently similar to a known but poorly understood acoustic source, the ribbon reed. A parametric computational model of the ribbon reed is developed, based on simplified geometrical, mechanical and fluid models drawn from the human voice literature. The physical parameters of the ribbon reed model are then adapted to those of the odontocete larynx. With reasonable estimates of real physical parameters, both the ribbon reed and odontocete larynx models produce sounds that are perceptually similar to their real-world counterparts, and both respond realistically under varying control conditions. Comparisons of acoustic features of the real-world and synthetic systems show a number of consistencies. While this does not on its own prove that either model is conclusively an accurate description of the source, it

  9. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in aerospace, automotive, machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, inside cavities and on complex objects, and infiltration within preforms called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC nitride, and Boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  10. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    EPA Pesticide Factsheets

    Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predicted for the rabbit.

  11. Stochastic Computations in Cortical Microcircuit Models

    PubMed Central

    Maass, Wolfgang

    2013-01-01

    Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126

  12. Computer model of in situ leaching hydrology

    SciTech Connect

    Not Available

    1981-05-01

    A computer program developed by the US Bureau of Mines simulates the hydrologic activity associated with in situ mining. Its purpose is to determine the site-specific flow behavior of leachants and groundwater during the development, production, and restoration phases of an in situ leaching operation. Model capabilities include arbitrary well patterns and pumping schedules, partially penetrating well screens, directionally anisotropic permeability, and natural groundwater flow, in either leaky or nonleaky confined aquifers and under steady-state or time-dependent flow conditions. In addition to extensive laboratory testing, the Twin Cities Research Center has closely monitored the application of this model at three different mine sites; at each site, the solution breakthrough time and the hydraulic head at observation wells were used to tune the model. The model was then used satisfactorily to assess the suitability of various well configurations and pumping schedules, in terms of fluid dispersion within the ore pod and fluid excursions into the surrounding aquifer. (JMT)

  13. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720
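    A minimal sketch of the ERGM ingredients the review discusses: sufficient statistics s(y) (here, edge and triangle counts) and the unnormalized probability weight exp(θ·s(y)) of one network. Computing the normalizing constant over all possible graphs is the intractable part that motivates the computational methods reviewed:

    ```python
    import itertools, math

    edges = {(0, 1), (1, 2), (0, 2), (2, 3)}      # a tiny undirected graph
    n = 4

    def stat_edges(e):
        return len(e)

    def stat_triangles(e):
        # count node triples whose three pairwise edges are all present
        return sum(1 for a, b, c in itertools.combinations(range(n), 3)
                   if {(a, b), (a, c), (b, c)} <= e)

    theta = (-1.0, 0.8)                            # edge and triangle parameters
    s = (stat_edges(edges), stat_triangles(edges))
    weight = math.exp(sum(t * x for t, x in zip(theta, s)))  # unnormalized P(y)
    ```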

  14. A neural computational model of incentive salience.

    PubMed

    Zhang, Jun; Berridge, Kent C; Tindell, Amy J; Smith, Kyle S; Aldridge, J Wayne

    2009-07-01

    Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by incorporating
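    The core proposal, that cue-triggered 'wanting' is computed on the fly from a cached learned value and a current physiological gain factor rather than by relearning, can be caricatured as below. The multiplicative form is one simple variant for illustration, not necessarily the authors' exact equation:

    ```python
    # Hedged sketch: incentive salience as cached value modulated by a
    # physiological gain kappa (~1 in a neutral state, >1 in appetite or
    # sensitization, <1 in satiety).
    def incentive_salience(cached_value, kappa):
        return cached_value * kappa

    cached = 0.4                                   # value learned for a salt-paired cue
    print(incentive_salience(cached, kappa=1.0))   # normal state
    print(incentive_salience(cached, kappa=4.0))   # novel salt-appetite state
    ```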

  15. A Computational Model of Cerebral Cortex Folding

    PubMed Central

    Nie, Jingxin; Guo, Lei; Li, Gang; Faraco, Carlos; Miller, L Stephen; Liu, Tianming

    2010-01-01

    The geometric complexity and variability of the human cerebral cortex has long intrigued the scientific community. As a result, quantitative description of cortical folding patterns and the understanding of underlying folding mechanisms have emerged as important research goals. This paper presents a computational 3-dimensional geometric model of cerebral cortex folding initialized by MRI data of a human fetal brain and deformed under the governance of a partial differential equation modeling cortical growth. By applying different simulation parameters, our model is able to generate folding convolutions and shape dynamics of the cerebral cortex. The simulations of this 3D geometric model provide computational experimental support to the following hypotheses: 1) Mechanical constraints of the skull regulate the cortical folding process. 2) The cortical folding pattern is dependent on the global cell growth rate of the whole cortex. 3) The cortical folding pattern is dependent on relative rates of cell growth in different cortical areas. 4) The cortical folding pattern is dependent on the initial geometry of the cortex. PMID:20167224

  16. Computer model of tetrahedral amorphous diamond

    NASA Astrophysics Data System (ADS)

    Djordjević, B. R.; Thorpe, M. F.; Wooten, F.

    1995-08-01

    We computer generate a model of amorphous diamond using the Wooten-Weaire method, with fourfold coordination everywhere. We investigate two models: one where four-membered rings are allowed and the other where the four-membered rings are forbidden; each model consisting of 4096 atoms. Starting from the perfect diamond crystalline structure, we first randomize the structure by introducing disorder through random bond switches at a sufficiently high temperature. Subsequently, the temperature is reduced in stages, and the topological and geometrical relaxation of the structure takes place using the Keating potential. After a long annealing process, a random network of comparatively low energy is obtained. We calculate the pair distribution function, mean bond angle, rms angular deviation, rms bond length, rms bond-length deviation, and ring statistics for the final relaxed structures. We minimize the total strain energy by adjusting the density of the sample. We compare our results with similar computer-generated models for amorphous silicon, and with experimental measurement of the structure factor for (predominantly tetrahedral) amorphous carbon.

  17. Computational Model of Fluorine-20 Experiment

    NASA Astrophysics Data System (ADS)

    Chuna, Thomas; Voytas, Paul; George, Elizabeth; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Weisshaar, Dirk; Paulauskas, Stanley; Ban, Gilles; Flechard, Xavier; Lienard, Etienne

    2015-10-01

    The Conserved Vector Current (CVC) hypothesis of the standard model of the electroweak interaction predicts there is a contribution to the shape of the spectrum in the beta-minus decay of 20F related to a property of the analogous gamma decay of excited 20Ne. To provide a strong test of the CVC hypothesis, a precise measurement of the 20F beta decay spectrum will be taken at the National Superconducting Cyclotron Laboratory. This measurement uses unconventional measurement techniques in that 20F will be implanted directly into a scintillator. As the emitted electrons interact with the detector material, bremsstrahlung interactions occur and the escape of the resultant photons will distort the measured spectrum. Thus, a Monte Carlo simulation has been constructed using EGSnrc radiation transport software. This computational model's intended use is to quantify and correct for distortion in the observed beta spectrum due, primarily, to the aforementioned bremsstrahlung. The focus of this presentation is twofold: the analysis of the computational model itself and the results produced by the model.
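    A toy Monte Carlo (not the EGSnrc model) illustrating why the correction matters: if a small fraction of events lose energy to an escaping photon, the measured spectrum shifts relative to the true one, and the ratio of the two is what a transport simulation must quantify:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    Q = 5.39                                  # approximate 20F beta endpoint (MeV)
    E = rng.uniform(0.0, Q, size=200_000)
    shape = E**2 * (Q - E)**2                 # crude allowed-spectrum weight
    keep = rng.uniform(0, shape.max(), E.size) < shape   # rejection sampling
    E_true = E[keep]

    escape = rng.random(E_true.size) < 0.05   # assume 5% of events lose a photon
    loss = rng.uniform(0.0, 0.5, E_true.size) * E_true   # escaping photon energy
    E_meas = np.where(escape, E_true - loss, E_true)

    true_hist, edges = np.histogram(E_true, bins=50, range=(0, Q))
    meas_hist, _ = np.histogram(E_meas, bins=50, range=(0, Q))
    # meas_hist/true_hist approximates the distortion to be corrected
    ```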

  18. Computational Modeling of Distal Protection Filters

    PubMed Central

    Siewiorek, Gail M.; Finol, Ender A.

    2010-01-01

    Purpose: To quantify the relationship between velocity and pressure gradient in a distal protection filter (DPF) and to determine the feasibility of modeling a DPF as a permeable surface using computational fluid dynamics (CFD). Methods: Four DPFs (Spider RX, FilterWire EZ, RX Accunet, and Emboshield) were deployed in a single tube representing the internal carotid artery (ICA) in an in vitro flow apparatus. Steady flow of a blood-like solution was circulated with a peristaltic pump and compliance chamber. The flow rate through each DPF was measured at physiological pressure gradients, and permeability was calculated using Darcy's equation. Two computational models representing the RX Accunet were created: an actual representation of the filter geometry and a circular permeable surface. The permeability of RX Accunet was assigned to the surface, and CFD simulations were conducted with both models using experimentally derived boundary conditions. Results: Spider RX had the largest permeability while RX Accunet was the least permeable filter. CFD modeling of RX Accunet and the permeable surface resulted in excellent agreement with the experimental measurements of velocity and pressure gradient. However, the permeable surface model did not accurately reproduce local flow patterns near the DPF deployment site. Conclusion: CFD can be used to model DPFs, yielding global flow parameters measured with bench-top experiments. CFD models of the detailed DPF geometry could be used for “virtual testing” of device designs under simulated flow conditions, which would have potential benefits in decreasing the number of design iterations leading up to in vivo testing. PMID:21142490
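    The permeability estimate described in Methods follows directly from Darcy's law, k = QμL/(AΔP); the values below are illustrative, not the paper's measurements:

    ```python
    import math

    def darcy_permeability(flow_m3s, visc_pa_s, thickness_m, area_m2, dp_pa):
        """Darcy's law solved for permeability: k = Q*mu*L / (A*dP)."""
        return flow_m3s * visc_pa_s * thickness_m / (area_m2 * dp_pa)

    q = 5.0e-6                       # measured flow through the filter (m^3/s)
    mu = 3.5e-3                      # blood-analog viscosity (Pa.s)
    L = 100e-6                       # effective membrane thickness (m)
    d = 5.0e-3                       # deployed filter diameter (m)
    dp = 40.0                        # measured pressure gradient (Pa)
    k = darcy_permeability(q, mu, L, math.pi * (d / 2) ** 2, dp)
    print(f"permeability ~ {k:.3e} m^2")
    ```

    Assigning such a permeability to a simple surface is what lets the CFD model replace the full filter geometry for global flow quantities.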

  19. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  20. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  1. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  2. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  3. Computer model for analyzing sodium cold traps

    SciTech Connect

    McPheeters, C C; Raue, D J

    1983-05-01

    A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions. The calculated pressure drop as a function of impurity mass content determines the capacity of the cold trap. The accuracy of the model was checked by comparing calculated mass distributions with experimentally determined mass distributions from literature publications and with results from our own cold trap experiments. The comparisons were excellent in all cases. A parametric study was performed to determine which design variables are most important in maximizing cold trap capacity.

  4. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  5. Computational modeling of Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Grazioli, D.; Magri, M.; Salvadori, A.

    2016-12-01

    This review focuses on energy storage materials modeling, with particular emphasis on Li-ion batteries. Theoretical and computational analyses not only provide a better understanding of the intimate behavior of actual batteries under operational and extreme conditions, but they may tailor new materials and shape new architectures in a complementary way to experimental approaches. Modeling can therefore play a very valuable role in the design and lifetime prediction of energy storage materials and devices. Batteries are inherently multi-scale, in space and time. The macro-structural characteristic lengths (the thickness of a single cell, for instance) are orders of magnitude larger than the particles that form the microstructure of the porous electrodes, which in turn are scale-separated from interface layers at which atomistic intercalations occur. Multi-physics modeling concepts, methodologies, and simulations at different scales, as well as scale transition strategies proposed in the recent literature, are reviewed here. Finally, computational challenges toward the next generation of Li-ion batteries are discussed.

  6. Computer Model Used to Help Customize Medicine

    NASA Technical Reports Server (NTRS)

    Stauber, Laurel J.; Veris, Jenise

    2001-01-01

    Dr. Radhakrishnan, a researcher at the NASA Glenn Research Center, in collaboration with biomedical researchers at the Case Western Reserve University School of Medicine and Rainbow Babies and Children's Hospital, is developing computational models of human physiology that quantitate metabolism and its regulation, in both healthy and pathological states. These models can help predict the effects of stresses or interventions, such as drug therapies, and contribute to the development of customized medicine. Customized medical treatment protocols can give more comprehensive evaluations and lead to more specific and effective treatments for patients, reducing treatment time and cost. Commercial applications of this research may help the pharmaceutical industry identify therapeutic needs and predict drug-drug interactions. Researchers will be able to study human metabolic reactions to particular treatments while in different environments as well as establish more definite blood metabolite concentration ranges in normal and pathological states. These computational models may help NASA provide the background for developing strategies to monitor and safeguard the health of astronauts and civilians in space stations and colonies. They may also help to develop countermeasures that ameliorate the effects of both acute and chronic space exposure.

  7. Dual-code quantum computation model

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.

  8. Model-based neuroimaging for cognitive computing.

    PubMed

    Poznanski, Roman R

    2009-09-01

    The continuity of the mind is suggested to mean the continuous spatiotemporal dynamics arising from the electrochemical signature of the neocortex: (i) globally through volume transmission in the gray matter as fields of neural activity, and (ii) locally through extrasynaptic signaling between fine distal dendrites of cortical neurons. If the continuity of dynamical systems across spatiotemporal scales defines a stream of consciousness then intentional metarepresentations as templates of dynamic continuity allow qualia to be semantically mapped during neuroimaging of specific cognitive tasks. When interfaced with a computer, such model-based neuroimaging requiring new mathematics of the brain will begin to decipher higher cognitive operations not possible with existing brain-machine interfaces.

  9. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
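    For closed product-form networks of this kind, the normalization constant G can be built one service center at a time (Buzen's convolution algorithm); the row-by-row versus column-by-column choice mentioned above is exactly this loop-ordering decision. A minimal sketch with illustrative service demands:

    ```python
    def buzen_G(demands, n_jobs):
        """Normalization constants G[0..n_jobs] for a closed queuing network
        with fixed-rate servers; demands[i] is the relative demand at center i."""
        G = [1.0] + [0.0] * n_jobs
        for d in demands:                      # fold in one center at a time
            for n in range(1, n_jobs + 1):
                G[n] += d * G[n - 1]
        return G

    G = buzen_G(demands=[0.4, 0.3, 0.2], n_jobs=5)
    throughput = G[4] / G[5]    # system throughput with 5 jobs in the network
    ```

    The algorithm needs only O(centers × jobs) multiply-adds, which is why it fit on a programmable calculator.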

  10. Computational modeling of the amphibian thyroid axis ...

    EPA Pesticide Factsheets

    In vitro screening of chemicals for bioactivity together with computational modeling are beginning to replace animal toxicity testing in support of chemical risk assessment. To facilitate this transition, an amphibian thyroid axis model has been developed to describe thyroid homeostasis during Xenopus laevis pro-metamorphosis. The model simulates the dynamic relationships of normal thyroid biology throughout this critical period of amphibian development and includes molecular initiating events (MIEs) for thyroid axis disruption to allow in silico simulations of hormone levels following chemical perturbations. One MIE that has been formally described using the adverse outcome pathway (AOP) framework is thyroperoxidase (TPO) inhibition. The goal of this study was to refine the model parameters and validate model predictions by generating dose-response and time-course biochemical data following exposure to three TPO inhibitors, methimazole, 6-propylthiouracil and 2-mercaptobenzothiazole. Key model variables including gland and blood thyroid hormone (TH) levels were compared to empirical values measured in biological samples at 2, 4, 7 and 10 days following initiation of exposure at Nieuwkoop and Faber (NF) stage 54 (onset of pro-metamorphosis). The secondary objective of these studies was to relate depleted blood TH levels to delayed metamorphosis, the adverse apical outcome. Delayed metamorphosis was evaluated by continuing exposure with a subset of larvae until a

  11. Computational social dynamic modeling of group recruitment.

    SciTech Connect

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (a gang) or institutional concepts (a school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents and abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.

  12. Teaching 1H NMR Spectrometry Using Computer Modeling.

    ERIC Educational Resources Information Center

    Habata, Yoichi; Akabori, Sadatoshi

    2001-01-01

    Molecular modeling by computer is used to display stereochemistry, molecular orbitals, structure of transition states, and progress of reactions. Describes new ideas for teaching 1H NMR spectroscopy using computer modeling. (Contains 12 references.) (ASK)

  13. Computational Model Builder for Multi-Dimensional Models

    DTIC Science & Technology

    2015-08-12

    Tools were developed to support both ERDC's Environmental Simulation (ES) effort and its Engineered Resilient Systems (ERS) effort, including various ParaView extensions for the Computational Model Builder (CMB) Suite. Distribution Statement A: Approved for public release; distribution unlimited.

  14. Computational models of intergroup competition and warfare.

    SciTech Connect

    Letendre, Kenneth; Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  15. Computer modeling of thermoelectric generator performance

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating, the cold junction can be adjusted for solar radiation, and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
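    A single-couple steady-state calculation combining the Seebeck, Peltier, conduction, and Joule terms mentioned above might look like this; constant properties and illustrative parameter values are assumed, not DEGRA 2's actual formulation:

    ```python
    def couple_performance(alpha, R_int, K, T_hot, T_cold, R_load):
        """Power and efficiency of one thermoelectric couple (constant properties).
        alpha: Seebeck coefficient (V/K), R_int: internal resistance (ohm),
        K: thermal conductance (W/K)."""
        dT = T_hot - T_cold
        I = alpha * dT / (R_int + R_load)            # Seebeck-driven current
        P_out = I ** 2 * R_load                      # electrical power delivered
        # heat drawn from the source: Peltier + conduction - half of Joule heating
        Q_hot = alpha * T_hot * I + K * dT - 0.5 * I ** 2 * R_int
        return P_out, P_out / Q_hot

    p, eta = couple_performance(alpha=400e-6, R_int=0.01, K=0.05,
                                T_hot=1300.0, T_cold=500.0, R_load=0.01)
    print(f"power {p:.2f} W per couple, efficiency {eta:.1%}")
    ```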

  16. Electromagnetic physics models for parallel computing architectures

    SciTech Connect

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.
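
    The vectorization idea can be shown in miniature: evaluating a standard electromagnetic-physics quantity, the Klein-Nishina differential cross section for Compton scattering, over whole arrays of angles in one data-parallel call. This numpy sketch only illustrates the SIMD-friendly style of computation; it is not GeantV code.

        import numpy as np

        R_E = 2.8179403262e-15          # classical electron radius, m
        MEC2 = 0.51099895               # electron rest energy, MeV

        def klein_nishina(E_mev, theta):
            """dSigma/dOmega (m^2/sr) for photons of E_mev at angles theta (rad)."""
            ratio = 1.0 / (1.0 + (E_mev / MEC2) * (1.0 - np.cos(theta)))  # E'/E
            return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)

        theta = np.linspace(0.0, np.pi, 1_000_000)   # a million angles in one call
        dsigma = klein_nishina(1.0, theta)           # 1 MeV photons
        print(dsigma[:3])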

  17. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  18. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  19. Computational continuum modeling of solder interconnects

    SciTech Connect

    Burchett, S.N.; Neilsen, M.K.; Frear, D.R.; Stephens, J.J.

    1997-03-01

    The most commonly used solder for electrical interconnections in electronic packages is the near eutectic 60Sn-40Pb alloy. This alloy has a number of processing advantages (suitable melting point of 183 C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes prediction of solder joint lifetime complex. A viscoplastic, microstructure-dependent constitutive model for solder, currently under development, was implemented into a finite element code. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of various leadless chip carrier solder interconnects to determine the effects of variations in geometry and loading. In this paper, the constitutive model is first briefly discussed. The results of computational studies to determine the effect of geometry and loading variations on leadless chip carrier solder interconnects will then be presented.

  20. Radiative cooling computed for model atmospheres

    NASA Astrophysics Data System (ADS)

    Eriksson, T. S.; Granqvist, C. G.

    1982-12-01

    The radiative cooling power and temperature drop of horizontal surfaces are evaluated on the basis of calculations of spectral radiance from model atmospheres representative of various climatic conditions. Calculations of atmospheric radiance from the zenith and from off-zenith angles were performed with the LOWTRAN 5 atmospheric transmittance/radiance computer code (Kneizys et al., 1980) for model atmospheres corresponding to the tropics, midlatitude summer, midlatitude winter, subarctic summer, subarctic winter and the 1962 U.S. standard atmosphere. Comparison of the computed spectral radiance curves with the radiative fluxes from blackbody surfaces and ideal infrared-selective surfaces (having zero reflectance, i.e., unity emittance, in the 8-13 micron range and unity reflectance elsewhere) at various ambient-surface temperature differences shows cooling powers to lie between 58 and 113 W/sq m at ambient temperature for a freely radiating surface, with maximum temperature differences of 11-21 C for a blackbody and 18-33 C for an infrared-selective surface. Both cooling powers and temperature differences were higher for surfaces exposed only to atmospheric zenith radiance. In addition, water vapor content is found to affect the radiative cooling strongly, while ozone and aerosol contents have little effect.
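
    A quick upper-bound estimate consistent with the numbers above: integrate the Planck emissive power of a 300 K surface over the 8-13 micron band, assuming the atmospheric window is perfectly transparent (so this bounds the LOWTRAN-based cooling powers from above).

        import numpy as np

        H = 6.62607015e-34   # Planck constant, J s
        C = 2.99792458e8     # speed of light, m/s
        KB = 1.380649e-23    # Boltzmann constant, J/K

        def planck(lam, T):
            """Spectral radiance B(lambda, T), W m^-3 sr^-1."""
            return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

        T_amb = 300.0
        lam = np.linspace(8e-6, 13e-6, 2000)
        y = np.pi * planck(lam, T_amb)        # hemispherical emissive power density
        P_window = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam))  # trapezoid rule
        print(f"~{P_window:.0f} W/m^2 through a transparent 8-13 um window at 300 K")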

  1. Multiscale computational modelling of the heart

    NASA Astrophysics Data System (ADS)

    Smith, N. P.; Nickerson, D. P.; Crampin, E. J.; Hunter, P. J.

    A computational framework is presented for integrating the electrical, mechanical and biochemical functions of the heart. Finite element techniques are used to solve the large-deformation soft tissue mechanics using orthotropic constitutive laws based on the measured fibre-sheet structure of myocardial (heart muscle) tissue. The reaction-diffusion equations governing electrical current flow in the heart are solved on a grid of deforming material points which access systems of ODEs representing the cellular processes underlying the cardiac action potential. Navier-Stokes equations are solved for coronary blood flow in a system of branching blood vessels embedded in the deforming myocardium and the delivery of oxygen and metabolites is coupled to the energy-dependent cellular processes. The framework presented here for modelling coupled physical conservation laws at the tissue and organ levels is also appropriate for other organ systems in the body and we briefly discuss applications to the lungs and the musculo-skeletal system. The computational framework is also designed to reach down to subcellular processes, including signal transduction cascades and metabolic pathways as well as ion channel electrophysiology, and we discuss the development of ontologies and markup language standards that will help link the tissue and organ level models to the vast array of gene and protein data that are now available in web-accessible databases.
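
    A toy stand-in for the electrophysiology component of such a framework: a 1D monodomain reaction-diffusion cable whose cellular ODE system is the two-variable FitzHugh-Nagumo model rather than a detailed ionic model. Grid size, kinetics, and boundary handling are illustrative simplifications.

        import numpy as np

        # 1D cable with FitzHugh-Nagumo kinetics (standard FHN parameters,
        # not cardiac data); explicit Euler in time, central differences in space.
        nx, dx, dt = 200, 0.5, 0.02
        D, eps, beta, gamma = 1.0, 0.08, 0.7, 0.8
        v = -1.2 * np.ones(nx)          # membrane potential (dimensionless)
        w = -0.6 * np.ones(nx)          # recovery variable
        v[:10] = 1.5                    # stimulate the left end

        for _ in range(3000):           # integrate to t = 60 (dimensionless)
            lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
            lap[0] = lap[-1] = 0.0      # crude no-flux ends
            v, w = (v + dt * (D * lap + v - v**3 / 3 - w),
                    w + dt * eps * (v + beta - gamma * w))

        print("leading edge of the propagating wave near cell", int(np.argmax(v > 0)))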

  2. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, CFD becomes more or less a direct modeling of the flow physics at the mesh-size and time-step scales.

  3. Computational Modeling and Simulation of Genital Tubercle ...

    EPA Pesticide Factsheets

    Hypospadias is a developmental defect of urethral tube closure that has a complex etiology. Here, we describe a multicellular agent-based model of genital tubercle development that simulates urethrogenesis from the urethral plate stage to urethral tube closure in differentiating male embryos. The model, constructed in CompuCell3D, implemented spatially dynamic signals from SHH, FGF10, and androgen signaling pathways. These signals modulated stochastic cell behaviors, such as differential adhesion, cell motility, proliferation, and apoptosis. Urethral tube closure was an emergent property of the model that was quantitatively dependent on SHH and FGF10 induced effects on mesenchymal proliferation and endodermal apoptosis, ultimately linked to androgen signaling. In the absence of androgenization, simulated genital tubercle development defaulted to the female condition. Intermediate phenotypes associated with partial androgen deficiency resulted in incomplete closure. Using this computer model, complex relationships between urethral tube closure defects and disruption of underlying signaling pathways could be probed theoretically in multiplex disturbance scenarios and modeled into probabilistic predictions for individual risk for hypospadias and potentially other developmental defects of the male genital tubercle. We identify the minimal molecular network that determines the outcome of male genital tubercle development in mice.

  4. Computer Modeling of Non-Isothermal Crystallization

    NASA Technical Reports Server (NTRS)

    Kelton, K. F.; Narayan, K. Lakshmi; Levine, L. E.; Cull, T. C.; Ray, C. S.

    1996-01-01

    A realistic computer model for simulating isothermal and non-isothermal phase transformations proceeding by homogeneous and heterogeneous nucleation and interface-limited growth is presented. A new treatment for particle size effects on the crystallization kinetics is developed and is incorporated into the numerical model. Time-dependent nucleation rates, size-dependent growth rates, and surface crystallization are also included. Model predictions are compared with experimental measurements of DSC/DTA peak parameters for the crystallization of lithium disilicate glass as a function of particle size, Pt doping levels, and water content. The quantitative agreement that is demonstrated indicates that the numerical model can be used to extract key kinetic data from easily obtained calorimetric data. The model can also be used to probe nucleation and growth behavior in regimes that are otherwise inaccessible. Based on a fit to data, an earlier prediction that the time-dependent nucleation rate in a DSC/DTA scan can rise above the steady-state value at a temperature higher than the peak in the steady-state rate is demonstrated.
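
    The flavor of such a calculation can be sketched with a far simpler kinetic law than the paper's explicit nucleation-and-growth model: the differential Avrami (JMAK) form integrated under a constant DSC heating ramp, locating the exotherm peak. The Arrhenius parameters and Avrami exponent below are illustrative assumptions.

        import numpy as np

        # dX/dt = n k(T) (1-X) [-ln(1-X)]^((n-1)/n), k(T) Arrhenius, T ramped.
        R = 8.314
        k0, E, n = 1e12, 2.0e5, 3.0          # prefactor (1/s), J/mol, Avrami exponent
        T0, q, dt = 700.0, 10.0 / 60.0, 0.5  # start T (K), 10 K/min ramp, step (s)

        X, t, rate_peak, T_peak = 1e-9, 0.0, 0.0, T0
        while X < 0.999 and t < 4e4:
            T = T0 + q * t
            k = k0 * np.exp(-E / (R * T))
            rate = n * k * (1.0 - X) * (-np.log(1.0 - X)) ** ((n - 1.0) / n)
            if rate > rate_peak:
                rate_peak, T_peak = rate, T
            X = min(X + rate * dt, 0.999999)
            t += dt

        print(f"simulated DSC peak near {T_peak:.0f} K (max rate {rate_peak:.2e} 1/s)")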

  5. Statistics, Computation, and Modeling in Cosmology

    NASA Astrophysics Data System (ADS)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground and space based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and…

  6. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for presenting modeling methodology in computer science lessons. The necessity of studying computer modeling lies in the fact that current trends toward strengthening the general-education and worldview functions of computer science call for additional research of the…

  7. Computational modeling of intraocular gas dynamics

    NASA Astrophysics Data System (ADS)

    Noohi, P.; Abdekhodaie, M. J.; Cheng, Y. L.

    2015-12-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, both pure and diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size. More bubble expansion and a smaller retinal tear allowed a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle of using pure SF6 is 1.4 times that of using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  8. Computational modeling of intraocular gas dynamics.

    PubMed

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-12-18

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, both pure and diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size. More bubble expansion and a smaller retinal tear allowed a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle of using pure SF6 is 1.4 times that of using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  9. Continuum and computational modeling of flexoelectricity

    NASA Astrophysics Data System (ADS)

    Mao, Sheng

    Flexoelectricity refers to the linear coupling of strain gradient and electric polarization. Early studies of this subject mostly looked at liquid crystals and biomembranes. Recently, the advent of nanotechnology has revealed its importance also in solid structures, such as flexible electronics, thin films, energy harvesters, etc. The energy storage function of a flexoelectric solid depends not only on polarization and strain, but also on strain gradient. This is our basis for formulating a consistent model of flexoelectric solids under small deformation. We derive a higher-order Navier equation for linear isotropic flexoelectric materials which resembles that of Mindlin in gradient elasticity. Closed-form solutions can be obtained for problems such as beam bending, a pressurized tube, etc. Flexoelectric coupling can be enhanced in the vicinity of defects due to strong gradients and decays away in the far field. We quantify this expectation by computing elastic and electric fields near different types of defects in flexoelectric solids. For point defects, we recover some well-known results of non-local theories. For dislocations, we make connections with experimental results on NaCl, ice, etc. For cracks, we perform a crack-tip asymptotic analysis and the results share features from gradient elasticity and piezoelectricity. We compute the J integral and use it for determining fracture criteria. Conventional finite element methods formulated solely on displacement are inadequate to treat flexoelectric solids due to the higher-order governing equations. Therefore, we introduce a mixed formulation which uses displacement and displacement gradient as separate variables. Their known relation is constrained in a weighted integral sense. We derive a variational formulation for boundary value problems for piezo- and/or flexoelectric solids. We validate this computational framework against exact solutions. With this method, more complex problems, including a plate with an elliptical hole, can be addressed.

  10. Preliminary Phase Field Computational Model Development

    SciTech Connect

    Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using the monocrystalline Fe (i.e., ferrite) film as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
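
    The Landau-Lifshitz-Gilbert equation named above, stripped to a single macrospin in a constant field (no microstructure, no exchange), integrates in a few lines. Parameter values are illustrative, not those of the Fe film models.

        import numpy as np

        GAMMA = 1.76e11                 # gyromagnetic ratio, rad/(s T)
        ALPHA = 0.1                     # Gilbert damping (assumed)
        H = np.array([0.0, 0.0, 0.1])   # applied field, tesla
        m = np.array([1.0, 0.0, 0.0])   # initial moment direction
        dt = 1e-13

        for _ in range(200_000):
            mxH = np.cross(m, H)
            # LLG in the Landau-Lifshitz form: precession plus damping torque.
            dmdt = -GAMMA / (1 + ALPHA**2) * (mxH + ALPHA * np.cross(m, mxH))
            m = m + dt * dmdt
            m /= np.linalg.norm(m)      # re-normalize |m| = 1 after the Euler step

        print("final m:", np.round(m, 4))   # damps into alignment with +z (the field)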

  11. Final technical report for DOE Computational Nanoscience Project: Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Cummings, P. T.

    2010-02-08

    This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.

  12. A computer model of the earth's magnetosphere

    NASA Astrophysics Data System (ADS)

    Ogino, Tatsuki; Walker, Raymond J.; Ashour-Abdalla, Maha

    1988-03-01

    The interaction of the solar wind with the earth magnetosphere is investigated theoretically by means of three-dimensional MHD simulations, with a focus on the effects of changes in the Bz component of the IMF. A high-resolution (0.5 earth radii) version of the model of Ogino et al. (1986) is employed, and the results are presented in a series of computer-generated maps and diagrams and characterized in detail. Bz of -5 nT is found to be associated with dipolar magnetic-field lines near the earth and very concave lines in the magnetotail, while Bz of +5 nT produces a narrow finger of closed field lines extending into the polar cap. Both IMF orientations have sunward convection near the noon-midnight meridian and region-1-type field-aligned currents on both sides of the plasma-sheet extension.

  13. A computer model of the earth's magnetosphere

    NASA Technical Reports Server (NTRS)

    Ogino, Tatsuki; Walker, Raymond J.; Ashour-Abdalla, Maha

    1988-01-01

    The interaction of the solar wind with the earth magnetosphere is investigated theoretically by means of three-dimensional MHD simulations, with a focus on the effects of changes in the Bz component of the IMF. A high-resolution (0.5 earth radii) version of the model of Ogino et al. (1986) is employed, and the results are presented in a series of computer-generated maps and diagrams and characterized in detail. Bz of -5 nT is found to be associated with dipolar magnetic-field lines near the earth and very concave lines in the magnetotail, while Bz of +5 nT produces a narrow finger of closed field lines extending into the polar cap. Both IMF orientations have sunward convection near the noon-midnight meridian and region-1-type field-aligned currents on both sides of the plasma-sheet extension.

  14. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore, the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed, object-centered encoding of early visual information. Specifically, the relative distances of objects to a set of referents are encoded in image-registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  15. Comprehensive silicon solar cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

    The development of an efficient, comprehensive Si solar cell modeling program capable of simulation accuracy to within 5 percent is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) the analytical method used to represent the physical system; (2) the phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) a coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) a simulation program modularized with respect to the structures that may be analyzed, the addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.

  16. Modeling groundwater flow on massively parallel computers

    SciTech Connect

    Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.

    1994-12-31

    The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They will also demonstrate the code's scalability.
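
    A minimal version of the first preconditioner listed, diagonal (Jacobi) scaling inside conjugate gradients, applied to a 5-point stencil standing in for a saturated-flow discretization. For this constant-coefficient stencil the diagonal is constant, so Jacobi amounts to a uniform scaling; it becomes a genuine preconditioner once heterogeneous permeabilities make the diagonal vary.

        import numpy as np

        n = 64                                   # n x n interior grid
        N = n * n

        def matvec(u):
            """Apply the 5-point Laplacian (Dirichlet boundaries) to flattened u."""
            U = u.reshape(n, n)
            out = 4.0 * U
            out[1:, :] -= U[:-1, :]; out[:-1, :] -= U[1:, :]
            out[:, 1:] -= U[:, :-1]; out[:, :-1] -= U[:, 1:]
            return out.ravel()

        b = np.ones(N)
        diag_inv = 1.0 / 4.0                     # Jacobi: M^-1 = diag(A)^-1
        x = np.zeros(N)
        r = b - matvec(x)
        z = diag_inv * r
        p = z.copy()
        rz = r @ z
        for it in range(1000):
            Ap = matvec(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
                break
            z = diag_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new

        print(f"converged in {it} iterations, residual {np.linalg.norm(r):.2e}")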

  17. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1992-01-01

    The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand alone analysis codes that result in the streamlining of the exchange of data between programs, reducing errors and improving efficiency.

  18. Computational model of heterogeneous heating in melanin

    NASA Astrophysics Data System (ADS)

    Kellicker, Jason; DiMarzio, Charles A.; Kowalski, Gregory J.

    2015-03-01

    Melanin particles often present as an aggregate of smaller melanin pigment granules and have a heterogeneous surface morphology. When irradiated with light within the absorption spectrum of melanin, these heterogeneities produce measurable concentrations of the electric field that result in temperature gradients from thermal effects that are not seen with spherical or ellipsoidal modeling of melanin. Modeling melanin without taking into consideration the heterogeneous surface morphology yields results that underestimate the strongest signals or overestimate their spatial extent. We present a new technique to image phase changes induced by heating using a computational model of melanin that exhibits these surface heterogeneities. From this analysis, we demonstrate the heterogeneous energy absorption and resulting heating that occurs at the surface of the melanin granule, consistent with three-photon absorption. Using the three-photon fluorescence as a beacon, we propose a method for detecting the extent of the melanin granule using photothermal microscopy to measure the phase changes resulting from the heating of the melanin.

  19. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can shorten the schedule and reduce cost: many experiments can be run quickly in a model that would take years and high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  20. Gravothermal Star Clusters - Theory and Computer Modelling

    NASA Astrophysics Data System (ADS)

    Spurzem, Rainer

    2010-11-01

    In the George Darwin lecture delivered to the British Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote of the evolution of stellar systems that it can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk will show how this physical concept has inspired theoretical modeling of star clusters in the following decades up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable state of equilibrium of a gravitating star cluster. The trend toward local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was discovered later. Here the state of the art of modeling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (Fokker-Planck approximation) will be reviewed. Recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also been used very successfully to study dense star clusters around massive black holes in galactic nuclei and even (in a few cases) relativistic supermassive dense objects in centres of galaxies (here again briefly touching one of the many research fields of V.A. Ambartsumian). For the present era of high-speed supercomputing, in which direct N-body simulations of star clusters are being tackled, we will show that such direct modeling supports and proves the concept of the statistical models based on the Fokker-Planck theory, and that both theoretical concepts and direct computer simulations are necessary to support each other and make scientific progress in the study of star cluster evolution.
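
    The basic clock behind such Fokker-Planck models is the two-body relaxation time. A sketch using Spitzer's standard half-mass relaxation time formula, with illustrative globular-cluster-like parameters:

        import numpy as np

        G = 4.30091e-3            # gravitational constant, pc (km/s)^2 / Msun

        def t_rh_myr(N, r_h_pc, m_mean=0.5):
            """Spitzer half-mass relaxation time in Myr, Coulomb log ln(0.4 N)."""
            t = 0.138 * np.sqrt(N * r_h_pc**3 / (G * m_mean)) / np.log(0.4 * N)
            return t * 0.9778     # 1 pc/(km/s) = 0.9778 Myr

        print(f"N=1e5 stars, r_h=3 pc: t_rh ~ {t_rh_myr(1e5, 3.0):.0f} Myr")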

  1. Climate Change Modeling: Computational Opportunities and Challenges

    SciTech Connect

    Wang, Dali; Post, Wilfred M; Wilson, Bruce E

    2011-01-01

    High-fidelity climate models are the workhorses of modern climate change science. In this article, the authors focus on several computational issues associated with climate change modeling, covering simulation methodologies, temporal and spatial modeling restrictions, the role of high-end computing, as well as the importance of data-driven regional climate impact modeling.

  2. Computational Modeling of Uranium Hydriding and Complexes

    SciTech Connect

    Balasubramanian, K; Siekhaus, W J; McLean, W

    2003-02-03

    et al. have studied U-hydriding in ultrahigh vacuum and obtained linear rate data over a wide range of temperatures and pressures. They found reversible hydrogen sorption on the UH{sub 3} reaction product from kinetic effects at 21 C. This demonstrates restarting of the hydriding process in the presence of the UH{sub 3} reaction product. DeMint and Leckey have shown that Si impurities dramatically accelerate U-hydriding rates. We report our recent results of relativistic computations, ranging from complete active space multi-configuration self-consistent field (CAS-MCSCF) to multi-reference configuration interaction (MRSDCI) calculations that included up to 50 million configurations, for the modeling of uranium hydriding with cluster models.

  3. Computer modeling of complete IC fabrication process

    NASA Astrophysics Data System (ADS)

    Dutton, Robert W.

    1987-05-01

    The development of fundamental algorithms for process and device modeling as well as novel integration of the tools for advanced Integrated Circuit (IC) technology design is discussed. The development of the first complete 2D process simulator, SUPREM 4, is reported. The algorithms are discussed as well as their application to local-oxidation and extrinsic diffusion conditions which occur in CMOS and BiCMOS technologies. The evolution of 1D (SEDAN) and 2D (PISCES) device analysis is discussed. The application of SEDAN to a variety of non-silicon technologies (GaAs and HgCdTe) is considered. A new multi-window analysis capability for PISCES which exploits Monte Carlo analysis of hot carriers has been demonstrated and used to characterize a variety of silicon MOSFET and GaAs MESFET effects. A parallel computer implementation of PISCES has been achieved using a Hypercube architecture. The PISCES program has been used for a range of important device studies including: latchup, analog switch analysis, MOSFET capacitance studies and bipolar transient devices for ECL gates. The program is broadly applicable to RAM and BiCMOS technology analysis and design. In the analog switch technology area, this research effort has produced a variety of important modeling advances.

  4. Computational modeling of acute myocardial infarction

    PubMed Central

    Sáez, P.; Kuhl, E.

    2015-01-01

    Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocyte death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step towards simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size. PMID:26583449

  5. Computational modeling of acute myocardial infarction.

    PubMed

    Sáez, P; Kuhl, E

    2016-01-01

    Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocyte death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step toward simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size.

  6. Computer modelling of metal - oxide interfaces

    NASA Astrophysics Data System (ADS)

    Purton, J.; Parker, S. C.; Bullett, D. W.

    1997-07-01

    We have used atomistic simulations to model oxide - metal interfaces. We have, for the first time, allowed the atoms on both sides of the interface to relax. The efficiency of the computational method means that calculations can be performed on complex interfaces containing several thousand atoms and do not require an arbitrary definition of the image plane to model the electrostatics across the dielectric discontinuity. We demonstrate the viability of the approach and the effect of relaxation on a range of MgO - Ag interfaces. Defective and faceted interfaces, as well as the ideal case, have been studied. The latter was chosen for comparison with previous theoretical calculations and experimental results. The wetting angle and the work of adhesion for MgO{100} - Ag{100} are in reasonable agreement with experiment. As with ab initio electronic structure calculations, the silver atoms have been shown to favour the position above the oxygen site.

  7. Random matrix model of adiabatic quantum computing

    SciTech Connect

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-05-15

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
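
    The regular-versus-chaotic spacing statistics invoked above can be reproduced in miniature: normalized eigenvalue spacings of 2x2 GOE matrices follow the Wigner surmise (the chaotic, Brody-parameter-one end of the family), while uncorrelated levels give the Poissonian form. A numpy sketch:

        import numpy as np

        rng = np.random.default_rng(0)

        def goe_spacings(n_samples):
            """Normalized eigenvalue spacings of random 2x2 GOE matrices."""
            a = rng.normal(size=(n_samples, 2, 2))
            h = (a + np.swapaxes(a, 1, 2)) / 2        # symmetrize
            ev = np.linalg.eigvalsh(h)
            s = ev[:, 1] - ev[:, 0]
            return s / s.mean()

        s = goe_spacings(200_000)
        grid = np.array([0.5, 1.0, 2.0])
        wigner = np.pi * grid / 2 * np.exp(-np.pi * grid**2 / 4)
        poisson = np.exp(-grid)
        hist, edges = np.histogram(s, bins=60, range=(0, 3), density=True)
        empirical = [hist[np.searchsorted(edges, g) - 1] for g in grid]
        print("s      :", grid)
        print("GOE sim:", np.round(empirical, 2))   # tracks the Wigner row
        print("Wigner :", np.round(wigner, 2))
        print("Poisson:", np.round(poisson, 2))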

  8. Computational modeling of composite material fires.

    SciTech Connect

    Brown, Alexander L.; Erickson, Kenneth L.; Hubbard, Joshua Allen; Dodd, Amanda B.

    2010-10-01

    condition is examined to study the propagation of decomposition fronts of the epoxy and carbon fiber and their dependence on ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small-scale experimental efforts to obtain adequate data for validating model predictions are ongoing. The goal of this paper is to demonstrate the progress of the capability for a typical composite material and to emphasize the path forward.

  9. Computational and Modeling Strategies for Cell Motility

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory

    A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback

  10. Computational modeling of solid oxide fuel cell

    NASA Astrophysics Data System (ADS)

    Penmetsa, Satish Kumar

    In the ongoing search for alternative and environmentally friendly power generation facilities, the solid oxide fuel cell (SOFC) is considered one of the prime candidates for the next generation of energy conversion devices due to its capability to provide environmentally friendly and highly efficient power generation. Moreover, SOFCs are less sensitive to the composition of the fuel than other types of fuel cells, and internal reforming of the hydrocarbon fuel can be performed because of the higher operating temperature range of 700°C-1000°C. This allows different types of hydrocarbon fuels to be used in SOFCs. The objective of this study is to develop a three-dimensional computational model for the simulation of a solid oxide fuel cell unit to analyze the complex internal transport mechanisms and the sensitivity of the cell to different operating conditions, and also to develop an SOFC with higher operating current density, more uniform gas distributions in the electrodes, and lower ohmic losses. This model includes mass transfer processes due to convection and diffusion in the gas flow channels based on the Navier-Stokes equations, as well as combined diffusion and advection in the electrodes using Brinkman's hydrodynamic equation, and the associated electrochemical reactions in the trilayer of the SOFC. Gas transport characteristics in terms of three-dimensional spatial distributions of reactant gases and their effects on electrochemical reactions at the electrode-electrolyte interface, and on the resulting polarizations, are evaluated for varying pressure conditions. Results show the significance of Brinkman's hydrodynamic model in the electrodes for achieving more uniform gas concentration distributions while using a higher operating pressure and over a higher range of operating current densities.
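
    A drastically reduced, zero-dimensional counterpart of such a model is the polarization curve: open-circuit voltage minus ohmic, activation (inverted Butler-Volmer), and concentration losses. All parameter values below are illustrative assumptions, not fitted SOFC data.

        import numpy as np

        R, F, T = 8.314, 96485.0, 1073.0   # gas constant, Faraday, 800 C in kelvin
        E_ocv = 1.0                        # open-circuit voltage, V
        asr = 0.15                         # area-specific ohmic resistance, ohm cm^2
        i0, iL = 0.3, 2.5                  # exchange / limiting current, A/cm^2

        def cell_voltage(i):
            eta_act = (2 * R * T / (2 * F)) * np.arcsinh(i / (2 * i0))
            eta_con = (R * T / (4 * F)) * np.log(1.0 / (1.0 - i / iL))
            return E_ocv - i * asr - eta_act - eta_con

        for i in (0.1, 0.5, 1.0, 1.5):
            v = cell_voltage(i)
            print(f"i = {i:.1f} A/cm^2: V = {v:.3f} V, P = {i * v:.3f} W/cm^2")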

  11. Computer formulations of aircraft models for simulation studies

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1979-01-01

    Recent developments in formula manipulation compilers and the design of several symbol manipulation languages enable computers to be used for symbolic mathematical computation. A computer system and language that can be used to perform symbolic manipulations in an interactive mode are used to formulate a mathematical model of an aeronautical system. The example demonstrates that once the procedure is established, the formulation and modification of models for simulation studies can be reduced to a series of routine computer operations.

  12. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  13. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...
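
    The parameter-uncertainty side of this task can be sketched with the simplest dynamic model available, one-compartment elimination dC/dt = -kC, propagating a lognormal uncertainty in k by Monte Carlo. PBPK models add compartments, but the propagation idea is the same; all values here are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        C0, t = 10.0, 24.0                      # initial concentration, time (h)
        # Lognormal uncertainty in the elimination rate k (1/h), assumed.
        k_samples = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=10_000)
        C24 = C0 * np.exp(-k_samples * t)       # analytic solution per sample

        lo, med, hi = np.percentile(C24, [2.5, 50, 97.5])
        print(f"C(24 h): median {med:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")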

  14. Modelling, abstraction, and computation in systems biology: A view from computer science.

    PubMed

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science, and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.

  15. Precise orbit computation and sea surface modeling

    NASA Technical Reports Server (NTRS)

    Wakker, Karel F.; Ambrosius, B. A. C.; Rummel, R.; Vermaat, E.; Deruijter, W. P. M.; Vandermade, J. W.; Zimmerman, J. T. F.

    1991-01-01

    The research project described below is part of a long-term program at Delft University of Technology aiming at the application of European Remote Sensing satellite (ERS-1) and TOPEX/POSEIDON altimeter measurements for geophysical purposes. This program started in 1980 with the processing of Seasat laser range and altimeter height measurements and concentrates today on the analysis of Geosat altimeter data. The objectives of the TOPEX/POSEIDON research project are the tracking of the satellite by the Dutch mobile laser tracking system MTLRS-2, the computation of precise TOPEX/POSEIDON orbits, the analysis of the spatial and temporal distribution of the orbit errors, the improvement of ERS-1 orbits through the information obtained from the altimeter crossover difference residuals for crossing ERS-1 and TOPEX/POSEIDON tracks, the combination of ERS-1 and TOPEX/POSEIDON altimeter data into a single high-precision data set, and the application of this data set to model the sea surface. The latter application will focus on the determination of detailed regional mean sea surfaces, sea surface variability, ocean topography, and ocean currents in the North Atlantic, the North Sea, the seas around Indonesia, the West Pacific, and the oceans around South Africa.

  16. Dynamical Properties of Polymers: Computational Modeling

    SciTech Connect

    CURRO, JOHN G.; ROTTACH, DANA; MCCOY, JOHN D.

    2001-01-01

    The free volume distribution has been a qualitatively useful concept by which dynamical properties of polymers, such as the penetrant diffusion constant, viscosity, and glass transition temperature, could be correlated with static properties. In an effort to put this on a more quantitative footing, we define the free volume distribution as the probability of finding a spherical cavity of radius R in a polymer liquid. This is identical to the insertion probability in scaled particle theory, and is related to the chemical potential of hard spheres of radius R in a polymer in the Henry's law limit. We used the Polymer Reference Interaction Site Model (PRISM) theory to compute the free volume distribution of semiflexible polymer melts as a function of chain stiffness. Good agreement was found with the corresponding free volume distributions obtained from MD simulations. Surprisingly, the free volume distribution was insensitive to the chain stiffness, even though the single chain structure and the intermolecular pair correlation functions showed a strong dependence on chain stiffness. We also calculated the free volume distributions of polyisobutylene (PIB) and polyethylene (PE) at 298K and at elevated temperatures from PRISM theory. We found that PIB has more of its free volume distributed in smaller size cavities than for PE at the same temperature.
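
    The insertion probability defined above is easy to estimate by Monte Carlo for a toy system: uncorrelated random "polymer sites" in a periodic box, a crude stand-in for a melt (PRISM supplies the real intermolecular correlations). Box size, site count, and radii below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(7)

        L, n_sites, site_radius = 10.0, 500, 0.5   # box edge, site count, site radius
        sites = rng.uniform(0, L, size=(n_sites, 3))

        def insertion_probability(R, n_trials=5000):
            """Fraction of random placements of a radius-R sphere with no overlap."""
            trials = rng.uniform(0, L, size=(n_trials, 3))
            d = trials[:, None, :] - sites[None, :, :]
            d -= L * np.round(d / L)               # minimum-image convention
            dist = np.sqrt((d**2).sum(axis=-1))
            return np.mean(dist.min(axis=1) > R + site_radius)

        for R in (0.1, 0.3, 0.5):
            print(f"R = {R}: P_insert ~ {insertion_probability(R):.3f}")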

  17. Computational modeling of ion transport through nanopores.

    PubMed

    Modi, Niraj; Winterhalter, Mathias; Kleinekathöfer, Ulrich

    2012-10-21

    Nanoscale pores are ubiquitous in biological systems while artificial nanopores are being fabricated for an increasing number of applications. Biological pores are responsible for the transport of various ions and substrates between the different compartments of biological systems separated by membranes while artificial pores are aimed at emulating such transport properties. As an experimental method, electrophysiology has proven to be an important nano-analytical tool for the study of substrate transport through nanopores utilizing ion current measurements as a probe for the detection. Independent of the pore type, i.e., biological or synthetic, and objective of the study, i.e., to model cellular processes of ion transport or electrophysiological experiments, it has become increasingly important to understand the dynamics of ions in nanoscale confinements. To this end, numerical simulations have established themselves as an indispensable tool to decipher ion transport processes through biological as well as artificial nanopores. This article provides an overview of different theoretical and computational methods to study ion transport in general and to calculate ion conductance in particular. Potential new improvements in the existing methods and their applications are highlighted wherever applicable. Moreover, representative examples are given describing the ion transport through biological and synthetic nanopores as well as the high selectivity of ion channels. Special emphasis is placed on the usage of molecular dynamics simulations which already have demonstrated their potential to unravel ion transport properties at an atomic level.
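
    At the continuum end of the methods surveyed here, ion current through a pore is often summarized by the Goldman-Hodgkin-Katz flux equation. A sketch for K+ with an assumed single-pore permeability:

        import numpy as np

        R, F, T = 8.314, 96485.0, 298.0
        P = 1e-19                  # single-pore permeability, m^3/s (assumed)
        z = 1                      # K+ valence
        c_in, c_out = 140.0, 5.0   # mol/m^3 (mM), intracellular / extracellular K+

        def ghk_current(V):
            """GHK current (A) at membrane voltage V (volts)."""
            u = z * V * F / (R * T)
            return P * z * F * u * (c_in - c_out * np.exp(-u)) / (1.0 - np.exp(-u))

        for mv in (-80, -40, 0.001, 40, 80):   # ~0 mV nudged to avoid 0/0
            V = mv * 1e-3
            print(f"V = {mv:>7} mV: I = {ghk_current(V) * 1e12:.2f} pA")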

  18. Computational modeling of epidural cortical stimulation

    NASA Astrophysics Data System (ADS)

    Wongsarnpigoon, Amorn; Grill, Warren M.

    2008-12-01

    Epidural cortical stimulation (ECS) is a developing therapy to treat neurological disorders. However, it is not clear how the cortical anatomy or the polarity and position of the electrode affects current flow and neural activation in the cortex. We developed a 3D computational model simulating ECS over the precentral gyrus. With the electrode placed directly above the gyrus, about half of the stimulus current flowed through the crown of the gyrus while current density was low along the banks deep in the sulci. Beneath the electrode, neurons oriented perpendicular to the cortical surface were depolarized by anodic stimulation, and neurons oriented parallel to the boundary were depolarized by cathodic stimulation. Activation was localized to the crown of the gyrus, and neurons on the banks deep in the sulci were not polarized. During regulated voltage stimulation, the magnitude of the activating function was inversely proportional to the thickness of the CSF and dura. During regulated current stimulation, the activating function was not sensitive to the thickness of the dura but was slightly more sensitive than during regulated voltage stimulation to the thickness of the CSF. Varying the width of the gyrus and the position of the electrode altered the distribution of the activating function due to changes in the orientation of the neurons beneath the electrode. Bipolar stimulation, although often used in clinical practice, reduced spatial selectivity as well as selectivity for neuron orientation.
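
    The polarity effects described above are commonly rationalized with the activating function, the second spatial derivative of the extracellular potential along a fiber. A point-source sketch (homogeneous conductivity, straight fiber, illustrative geometry, not the paper's 3D gyrus model):

        import numpy as np

        sigma = 0.3            # tissue conductivity, S/m (assumed homogeneous)
        I = -1e-3              # cathodic point-source current, A
        h = 2e-3               # electrode height above the fiber, m
        x = np.linspace(-10e-3, 10e-3, 401)    # positions along the fiber, m
        dx = x[1] - x[0]

        V = I / (4 * np.pi * sigma * np.sqrt(x**2 + h**2))   # extracellular potential
        f = (V[:-2] - 2 * V[1:-1] + V[2:]) / dx**2           # activating function

        i_pk = int(np.argmax(f))   # positive lobe predicts depolarization
        print(f"peak depolarization at x = {x[i_pk + 1] * 1e3:.2f} mm, "
              f"f = {f[i_pk]:.2e} V/m^2")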

  19. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.; Olariu, Stephen

    1995-01-01

    The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes. This streamlines the exchange of data between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.

  20. Review of computational thermal-hydraulic modeling

    SciTech Connect

    Keefer, R.H.; Keeton, L.W.

    1995-12-31

    Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix.

  1. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar past investigations. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not, however, assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  2. A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC

    NASA Astrophysics Data System (ADS)

    Ajiro, Takashi; Tsuchida, Kensei

    A concurrent model of computation, and a language based on it for bit-level operation, are useful for compositionally developing asynchronous and concurrent programs that make frequent use of bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and supports compositional development. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise, formal expression of the computation process, and a notion of primitive program elements for control and operation can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have at most three terminals and four ordered rules, as well as on bidirectional communication using vehicles called carriers. A novel feature is that a carrier moving between two terminals can concisely express certain kinds of computation, such as synchronization and bidirectional communication. The model's properties make it well suited to compositional bit-level computation, since the uniform computation elements suffice to build components with practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.

  3. Experiments and simulation models of a basic computation element of an autonomous molecular computing system.

    PubMed

    Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira

    2008-10-01

    Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.
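
    As a hedged illustration of the mathematical modeling step (the actual RTRACS kinetics involve reverse transcription and transcription reactions, and the rate constants below are invented, not fitted values from the paper), a minimal mass-action model already reproduces the defining AND behavior: output RNA accumulates only when both inputs are present.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_on = 1e-2    # input association rate, (nM s)^-1
        k_cat = 0.05   # output production from the A.B complex, 1/s
        k_deg = 1e-3   # output degradation, 1/s

        def and_gate(t, y):
            A, B, AB, out = y
            assoc = k_on * A * B                 # inputs A and B form a complex
            return [-assoc, -assoc, assoc, k_cat * AB - k_deg * out]

        for A0, B0 in [(100, 100), (100, 0), (0, 100), (0, 0)]:
            sol = solve_ivp(and_gate, (0.0, 3600.0), [A0, B0, 0.0, 0.0], rtol=1e-8)
            print(f"inputs ({A0:3d}, {B0:3d}) nM -> output {sol.y[3, -1]:8.2f} nM")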

  4. Learning Anatomy: Do New Computer Models Improve Spatial Understanding?

    ERIC Educational Resources Information Center

    Garg, Amit; Norman, Geoff; Spero, Lawrence; Taylor, Ian

    1999-01-01

    Assesses desktop-computer models that rotate in virtual three-dimensional space. Compares spatial learning with a computer carpal-bone model horizontally rotating at 10-degree views with the same model rotating at 90-degree views. (Author/CCM)

  5. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... codes are free of coding errors and produce stable solutions; (v) Conceptual models have undergone...

  6. Graph Partitioning Models for Parallel Computing

    SciTech Connect

    Hendrickson, B.; Kolda, T.G.

    1999-03-02

    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
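
    The "wrong metric" point is easy to make concrete: the standard edge cut counts cut edges once each, while the communication a parallel solver actually performs is closer to the communication volume, which counts, for every vertex, the number of distinct remote parts its value must be sent to. A small sketch with plain adjacency dictionaries (illustrative only, not the survey's formal definitions):

        def edge_cut(adj, part):
            """Number of edges whose endpoints land in different parts."""
            return sum(1 for u in adj for v in adj[u]
                       if u < v and part[u] != part[v])

        def communication_volume(adj, part):
            """Total number of (vertex, remote part) pairs that must communicate."""
            vol = 0
            for u in adj:
                vol += len({part[v] for v in adj[u] if part[v] != part[u]})
            return vol

        # Two triangles joined by one edge, split between two processors.
        adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4, 5], 4: [3, 5], 5: [3, 4]}
        part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
        print(edge_cut(adj, part), communication_volume(adj, part))   # -> 1 2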

  7. Performance Models for Split-execution Computing Systems

    SciTech Connect

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan; Seddiqi, Hadayat; Britt, Keith A; Imam, Neena

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
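
    A back-of-envelope version of such a behavioral model, with every constant invented for illustration: the wall time of one split-execution solve decomposes into translation (embedding) of the problem, QPU sampling, and classical post-processing, and a translation cost growing quadratically in problem size quickly dominates the fixed per-sample QPU time, consistent with the interface bottleneck concluded above.

        def split_execution_time(n_vars, n_samples,
                                 cpu_rate=1e8,       # classical ops/s (assumed)
                                 anneal_us=20.0,     # per-sample QPU time (assumed)
                                 readout_us=120.0):  # per-sample readout (assumed)
            t_translate = 50.0 * n_vars ** 2 / cpu_rate   # quadratic embedding cost
            t_qpu = n_samples * (anneal_us + readout_us) * 1e-6
            t_post = n_vars * n_samples / cpu_rate
            return {"translate": t_translate, "qpu": t_qpu, "post": t_post}

        for n in (100, 1_000, 10_000):
            t = split_execution_time(n, n_samples=1_000)
            print(n, {k: round(v, 4) for k, v in t.items()})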

  8. Computational Modeling of Magnetically Actuated Propellant Orientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.

    1996-01-01

    sufficient performance to support cryogenic propellant management tasks. In late 1992, NASA MSFC began a new investigation in this technology commencing with the design of the Magnetically-Actuated Propellant Orientation (MAPO) experiment. A mixture of ferrofluid and water is used to simulate the paramagnetic properties of LOX, and the experiment is being flown on the KC-135 aircraft to provide a reduced gravity environment. The influence of a 0.4 Tesla ring magnet on flow into and out of a subscale Plexiglas tank is being recorded on video tape. The most efficient approach to evaluating the feasibility of MAPO is to complement the experimental program with development of a computational tool to model the process of interest. The goal of the present research is to develop such a tool. Once confidence in its fidelity is established by comparison to data from the MAPO experiment, it can be used to assist in the design of future experiments and to study the parameter space of the process. Ultimately, it is hoped that the computational model can serve as a design tool for full-scale spacecraft applications.

  9. Model for personal computer system selection.

    PubMed

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Second, select software that is user friendly, well documented, and bug free, and that does what you want done. Next, select the computer, printer, and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility, allow for future networking, are easy to use, and are adaptable for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for rapid expansion and change in your computer usage as technology and your skills grow.

  10. Idealized Computational Models for Auditory Receptive Fields

    PubMed Central

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
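
    One concrete object recovered by this framework is the order-n Gammatone filter; its impulse response has the closed form g(t) = t^(n-1) exp(-2*pi*b*t) cos(2*pi*f_c*t). A short sketch with typical textbook parameter values (the center frequency, bandwidth, and sample rate below are not taken from the paper):

        import numpy as np

        def gammatone_ir(t, f_c, b, n=4):
            """Gammatone impulse response g(t) = t^(n-1) e^(-2 pi b t) cos(2 pi f_c t)."""
            return t ** (n - 1) * np.exp(-2.0 * np.pi * b * t) * np.cos(2.0 * np.pi * f_c * t)

        fs = 16_000                       # sample rate, Hz
        t = np.arange(0, 0.064, 1 / fs)   # 64 ms of support
        g = gammatone_ir(t, f_c=1000.0, b=125.0)
        g /= np.max(np.abs(g))            # normalize the peak for display
        print(g[:5])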

  11. Computer modeling of a convective steam superheater

    NASA Astrophysics Data System (ADS)

    Trojan, Marcin

    2015-03-01

    The superheater generates superheated steam from the saturated steam leaving the evaporator. In a pulverized-coal-fired boiler, even a relatively small amount of ash causes fouling of the heating surfaces, including the superheaters. In the convection pass of the boiler, the flue gas temperature is lower and ash deposits can be loose or sintered. Ash fouling not only reduces heat transfer from the flue gas to the steam, but also causes a higher pressure drop along the flue gas flow path; the greater the pressure drop, the more power the fan consumes. If the superheater surfaces are covered with ash, the steam temperature at the outlet of the superheater stages falls, and the flow rate of water injected into the attemperator must be reduced. There is also an increase in flue gas temperature after the different stages of the superheater. Consequently, this leads to a reduction in boiler efficiency. The paper presents the results of computational fluid dynamics simulations of the first-stage superheater of the OP-210M boiler using commercial software. The temperature distributions of the steam and flue gas along their flow paths are determined, together with the temperatures of the tube walls and ash deposits. The calculated steam temperature is compared with measurement results. Knowledge of these temperatures is of great practical importance because it allows the grade of steel for a given superheater stage to be chosen. By using the developed superheater model to determine the degree of ash fouling on-line, one can control the activation frequency of the steam sootblowers.

  12. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and increasing analysis times are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  13. The emerging role of cloud computing in molecular modelling.

    PubMed

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways.

  14. Evaluation of aerothermal modeling computer programs

    NASA Technical Reports Server (NTRS)

    Hsieh, K. C.; Yu, S. T.

    1987-01-01

    Various computer programs based upon the SIMPLE or SIMPLER algorithm were studied and compared for numerical accuracy, efficiency, and grid dependency. Four two-dimensional codes and one three-dimensional code, originally developed by a number of research groups, were considered. In general, the accuracy and computational efficiency of these TEACH-type programs were improved by modifying the differencing schemes and their solvers. A brief description of each program is given. Error reduction, spline flux, and second upwind differencing programs are covered.

  15. Dissemination of computer skills among physicians: the infectious process model.

    PubMed

    Quinn, F B; Hokanson, J A; McCracken, M M; Stiernberg, C M

    1984-08-01

    While the potential utility of computer technology to medicine is often acknowledged, little is known as to the best methods to actually teach physicians about computers. The current variability in physician computer fluency implies there is no accepted minimum required level of computer skills for physicians. Special techniques are needed to instill these skills in the physician and measure their effects within the medical profession. This hypothesis is suggested by the development of a specialized course for the new physician. In a population of physicians where medical computing usage was considered nonexistent, intense interest developed following exposure to a role model having strong credentials in both medicine and computer science. This produced an atmosphere in which there was a perceived benefit to being knowledgeable about medical computer usage. The subsequent increase in computer systems use was the result of the availability of resources and the development of computer skills that could be exchanged among students and faculty. This growth in computer use is described using the parameters of an infectious process model. While other approaches may also be useful, the infectious process model permits the growth of medical computer usage to be quantitatively described, evaluates specific determinants of use patterns, and allows the future growth of computer utilization in medicine to be predicted.
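
    The infectious-process analogy has a simple quantitative core: if adoption spreads in proportion to contacts between users ("infected") and non-users ("susceptible"), the user count grows logistically. A minimal sketch with invented parameters, not estimates from the study:

        import numpy as np

        def logistic_adoption(n0, N, beta, t):
            """Users at time t when dn/dt = beta * n * (1 - n/N)."""
            return N / (1.0 + (N / n0 - 1.0) * np.exp(-beta * t))

        months = np.arange(0, 25, 6)
        # 3 initial users in a population of 120, contact rate 0.4 per month.
        print(logistic_adoption(n0=3, N=120, beta=0.4, t=months).round(1))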

  16. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that a model such as DEM can only be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparative results of running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) are presented. The parallel version of DEM is based on a domain partitioning approach. Effective use of the cache and hierarchical memories of modern computers is discussed, along with the performance, speed-ups, and efficiency achieved. The parallel DEM code, written with the standard MPI library, is highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the model output are briefly presented.

  17. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Jerzy Bernholc

    2011-02-03

    will some day reach a miniaturization limit, forcing designers of Si-based electronics to pursue increased performance by other means. Any other alternative approach would have the unenviable task of matching the ability of Si technology to pack more than a billion interconnected and addressable devices on a chip the size of a thumbnail. Nevertheless, the prospects of developing alternative approaches to fabricate electronic devices have spurred an ever-increasing pace of fundamental research. One of the promising possibilities is molecular electronics (ME), self-assembled molecular-based electronic systems composed of single-molecule devices in ultra dense, ultra fast molecular-sized components. This project focused on developing accurate, reliable theoretical modeling capabilities for describing molecular electronics devices. The participants in the project are given in Table 1. The primary outcomes of this fundamental computational science grant are publications in the open scientific literature. As listed below, 62 papers have been published from this project. In addition, the research has also been the subject of more than 100 invited talks at conferences, including several plenary or keynote lectures. Many of the goals of the original proposal were completed. Specifically, the multi-disciplinary group developed a unique set of capabilities and tools for investigating electron transport in fabricated and self-assembled nanostructures at multiple length and time scales.

  18. Computer Center: BASIC String Models of Genetic Information Transfer.

    ERIC Educational Resources Information Center

    Spain, James D., Ed.

    1984-01-01

    Discusses some of the major genetic information processes which may be modeled by computer program string manipulation, focusing on replication and transcription. Also discusses instructional applications of using string models. (JN)
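
    The string-manipulation idea is compact enough to state directly (the article's examples were in BASIC; Python stands in here for brevity): transcription maps the DNA template strand to mRNA by complementing each base and substituting U for T.

        # Template-to-mRNA base mapping: A->U, C->G, G->C, T->A.
        DNA_TO_MRNA = str.maketrans("ACGT", "UGCA")

        def transcribe(template_strand: str) -> str:
            """Return the mRNA transcribed from a DNA template strand."""
            return template_strand.upper().translate(DNA_TO_MRNA)

        print(transcribe("TACGGATTC"))   # -> AUGCCUAAG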

  19. Ocean Modeling and Visualization on Massively Parallel Computer

    NASA Technical Reports Server (NTRS)

    Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.

    1997-01-01

    Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.

  20. Using Computational Simulations to Confront Students' Mental Models

    ERIC Educational Resources Information Center

    Rodrigues, R.; Carvalho, P. Simeão

    2014-01-01

    In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…

  1. Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling

    ERIC Educational Resources Information Center

    Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao

    2013-01-01

    Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…

  2. A Model for Guiding Undergraduates to Success in Computational Science

    ERIC Educational Resources Information Center

    Olagunju, Amos O.; Fisher, Paul; Adeyeye, John

    2007-01-01

    This paper presents a model for guiding undergraduates to success in computational science. A set of integrated, interdisciplinary training and research activities is outlined for use as a vehicle to increase and produce graduates with research experiences in computational and mathematical sciences. The model is responsive to the development of…

  3. Overview of ASC Capability Computing System Governance Model

    SciTech Connect

    Doebling, Scott W.

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  4. Generate rigorous pyrolysis models for olefins production by computer

    SciTech Connect

    Klein, M.T.; Broadbelt, L.J.; Grittman, D.H.

    1997-04-01

    With recent advances in the automation of the model-building process for large networks of kinetic equations, it may become feasible to generate computer pyrolysis models for naphthas and gas oil feedstocks. The potential benefit of a rigorous mechanistic model for these relatively complex liquid feedstocks is great, due to their diverse characterizations and yield spectra. An ethane pyrolysis example is used to illustrate the computer generation of reaction mechanism models.

  5. A computational model of the human hand 93-ERI-053

    SciTech Connect

    Hollerbach, K.; Axelrod, T.

    1996-03-01

    The objectives of the Computational Hand Modeling project were to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopaedic modeling.

  6. Ambient temperature modelling with soft computing techniques

    SciTech Connect

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
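
    A compact sketch of the hybrid scheme under invented settings: a tiny network is first improved by gradient descent (finite differences stand in for backpropagation to keep the sketch short), and the result seeds a few individuals of the GA's initial population, which then evolves flattened weight vectors by elitist selection and mutation.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, (64, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1]        # toy "temperature" target

        def forward(w, X):                         # 2-4-1 network, 17 weights
            W1, b1, W2, b2 = w[:8].reshape(2, 4), w[8:12], w[12:16], w[16]
            return np.tanh(X @ W1 + b1) @ W2 + b2

        def mse(w):
            return np.mean((forward(w, X) - y) ** 2)

        def gradient_steps(w, lr=0.05, steps=200, eps=1e-5):
            for _ in range(steps):                 # BP-style refinement
                g = np.array([(mse(w + eps * e) - mse(w - eps * e)) / (2 * eps)
                              for e in np.eye(w.size)])
                w = w - lr * g
            return w

        pop = [rng.normal(0.0, 0.5, 17) for _ in range(20)]
        pop[:3] = [gradient_steps(p) for p in pop[:3]]    # seed a few individuals
        for _ in range(30):                               # elitist GA with mutation
            pop.sort(key=mse)
            pop = pop[:10] + [p + rng.normal(0.0, 0.1, 17) for p in pop[:10]]
        print("best MSE:", round(mse(min(pop, key=mse)), 4))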

  7. Bringing computational models of bone regeneration to the clinic.

    PubMed

    Carlier, Aurélie; Geris, Liesbet; Lammens, Johan; Van Oosterwyck, Hans

    2015-01-01

    Although the field of bone regeneration has experienced great advancements in the last decades, integrating all the relevant, patient-specific information into a personalized diagnosis and optimal treatment remains a challenging task due to the large number of variables that affect bone regeneration. Computational models have the potential to cope with this complexity and to improve the fundamental understanding of the bone regeneration processes, as well as to predict and optimize patient-specific treatment strategies. However, the current use of computational models in daily orthopedic practice is very limited or nonexistent. We have identified three key hurdles that limit the translation of computational models of bone regeneration from bench to bedside. First, there exists a clear mismatch between the scope of the existing models and the clinically required models. Second, most computational models are confronted with limited quantitative information of insufficient quality, thereby hampering the determination of patient-specific parameter values. Third, current computational models are only corroborated with animal models, whereas a thorough (retrospective and prospective) assessment of the computational model will be crucial to convince health care providers of its capabilities. These challenges must be addressed so that computational models of bone regeneration can reach their true potential, resulting in the advancement of individualized care and reduction of the associated health care costs.

  8. Computer Modeling and Research in the Classroom

    ERIC Educational Resources Information Center

    Ramos, Maria Joao; Fernandes, Pedro Alexandrino

    2005-01-01

    We report on a computational chemistry course for undergraduate students that successfully incorporated a research project on the design of new contrast agents for magnetic resonance imaging and shift reagents for in vivo NMR. Course outcomes were positive: students were quite motivated during the whole year--they learned what was required of…

  9. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable the companies spreading this technology to open the door to Web 3.0. The new categories of services it introduces will gradually replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale, expensive software such as CFD (Computational Fluid Dynamics) packages, UG, and CATIA.

  10. Computer modeling of ORNL storage tank sludge mobilization and mixing

    SciTech Connect

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks.

  11. Implementing and assessing computational modeling in introductory mechanics

    NASA Astrophysics Data System (ADS)

    Caballero, Marcos D.; Kohlmyer, Matthew A.; Schatz, Michael F.

    2012-12-01

    Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term, 1357 students in this course solved a suite of 14 computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated with a proctored assignment involving a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation, and the implications for computational instruction in introductory science, technology, engineering, and mathematics (STEM) courses.
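
    The flavor of those homework problems is easy to sketch with numpy in place of VPython (an inverse-square force stands in below for the novel central force actually used on the evaluation):

        import numpy as np

        k, m, dt = 1.0, 1.0, 1e-3
        r = np.array([1.0, 0.0])       # initial position
        v = np.array([0.0, 0.9])       # initial velocity

        for _ in range(20_000):        # Euler-Cromer integration
            F = -k * r / np.linalg.norm(r) ** 3   # central force toward the origin
            v = v + (F / m) * dt       # update velocity from the net force
            r = r + v * dt             # then position from the *new* velocity
        print("final position:", r)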

  12. Limits on the Power of Some Models of Quantum Computation

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo; Somma, Rolando; Barnum, Howard; Knill, Emanuel

    2006-09-01

    We consider quantum computational models defined via a Lie-algebraic theory. In these models, specified initial states are acted on by Lie-algebraic quantum gates and the expectation values of Lie algebra elements are measured at the end. We show that these models can be efficiently simulated on a classical computer in time polynomial in the dimension of the algebra, regardless of the dimension of the Hilbert space where the algebra acts. Similar results hold for the computation of the expectation value of operators implemented by a gate-sequence. We introduce a Lie-algebraic notion of generalized mean-field Hamiltonians and show that they are efficiently (exactly) solvable by means of a Jacobi-like diagonalization method. Our results generalize earlier ones on fermionic linear optics computation and provide insight into the source of the power of the conventional model of quantum computation.
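
    A toy instance of the claimed efficiency, under assumptions chosen for brevity: for a Hamiltonian in su(2), the expectation values of the Pauli operators evolve by the adjoint action, a 3x3 rotation, so simulating them costs time polynomial in the algebra dimension (here 3) rather than in the Hilbert-space dimension. The single-spin cross-check below verifies the adjoint-picture result.

        import numpy as np
        from scipy.linalg import expm

        n = np.array([0.3, 0.5, 0.8]); n /= np.linalg.norm(n)   # H = n . sigma
        t = 1.2

        def cross_matrix(a):
            """Matrix M with M @ v = a x v (adjoint-action generator for su(2))."""
            return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])

        v0 = np.array([0.0, 0.0, 1.0])      # <sigma_x>, <sigma_y>, <sigma_z> in |0>
        v_t = expm(2.0 * t * cross_matrix(n)) @ v0   # d<sigma>/dt = 2 n x <sigma>

        # Hilbert-space cross-check for a single spin.
        Xp = np.array([[0, 1], [1, 0]], dtype=complex)
        Yp = np.array([[0, -1j], [1j, 0]])
        Zp = np.array([[1, 0], [0, -1]], dtype=complex)
        H = n[0] * Xp + n[1] * Yp + n[2] * Zp
        psi = expm(-1j * t * H) @ np.array([1.0 + 0j, 0.0])
        check = [np.real(psi.conj() @ P @ psi) for P in (Xp, Yp, Zp)]
        print(np.round(v_t, 6), np.round(check, 6))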

  14. Computer Aided Modeling and Post Processing with NASTRAN Analysis

    NASA Technical Reports Server (NTRS)

    Boroughs, R. R.

    1984-01-01

    Computer aided engineering systems are invaluable tools in performing NASTRAN finite element analysis. These techniques are implemented in both the pre-processing and post-processing phases of the NASTRAN analysis. The finite element model development, or pre-processing phase, was automated with a computer aided modeling program called Supertabl, and the review and interpretation of the results of the NASTRAN analysis, or post-processing phase, was automated with a computer aided plotting program called Output Display. An intermediate program, Nasplot, which was developed in-house, has also helped to cut down on the model checkout time and reduce errors in the model. An interface has been established between the finite element computer aided engineering system and the Learjet computer aided design system whereby data can be transferred back and forth between the two. These systems have significantly improved productivity and the ability to perform NASTRAN analysis in response to product development requests.

  15. Operation of the computer model for microenvironment atomic oxygen exposure

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.

  16. Ground Motion Models and Computer Techniques

    DTIC Science & Technology

    1972-04-01

    tectonic stress-strain distributions induced by changing the pore water pressure. A general computer subroutine (TAMEOS) is described which ... interactions, material phase changes, and dependence of strength parameters on the thermodynamic state. This report describes improved techniques ... [garbled fragments of equations (3.11)-(3.13) relating principal stretches, and of a passage on the rate of change of compression, omitted]

  17. Computer Modeling for Optical Waveguide Sensors.

    DTIC Science & Technology

    1987-12-15

    Optical waveguide sensors; computer ... reflection. The resultant probe beam transmission may be plotted as a function of changes in the refractive index of the surrounding fluid medium. BASIC ... all angles of incidence about the critical angle θcr. It should be noted that N in equation (3) is a function of θ [supporting equation garbled in extraction].

  18. A Computational Dual-Process Model of Social Interaction

    DTIC Science & Technology

    2014-01-30

    ... employed for over 20 years, OMAR was used to facilitate the building of the computational models of agents, visualized as avatars, which pursue goals that drive their behaviors in social interactions. An overview is given of the visualization of the scenarios' human performance models as avatars that portray the social interactions of the individuals involved.

  19. A stirling engine computer model for performance calculations

    NASA Technical Reports Server (NTRS)

    Tew, R.; Jefferies, K.; Miao, D.

    1978-01-01

    To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.

  20. Multiphase Turbulence Modeling for Computational Ship Hydrodynamics

    DTIC Science & Technology

    2014-05-30

    ... to the SGS model as bubbles become under-resolved, passing through the numerical Hinze scale. ... (variable density turbulence) closures for URANS models have been developed and tested a priori for turbulent mass flux and kinetic energy. The iLES ... as well as established the importance of turbulent mass flux and anisotropy in the wake, which has guided the development of URANS closure models.

  1. Student Models in Computer-Aided Instruction

    ERIC Educational Resources Information Center

    Self, J. A.

    1974-01-01

    A student model is proposed consisting of a set of programs representing the student's knowledge state. Teaching proceeds after a comparative evaluation of student and teacher programs, and learning is represented by direct modification of the student model. The advantages of an explicit procedural model are illustrated by considering a program which…

  2. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    PubMed

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-04-30

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
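
    One of the simplest model classes the field builds on can be written in a few lines: a state-space model in which performance improves with practiced movements and decays without practice. The sketch below uses invented retention and learning rates, purely to show the form such models take.

        a, b = 0.99, 0.08            # retention and learning rates (hypothetical)
        x = [0.2]                    # initial ability, fraction of normal
        for day in range(56):        # eight weeks
            u = 1.0 if day % 7 < 5 else 0.0    # therapy on weekdays only
            error = (1.0 - x[-1]) * u          # target performance = 1
            x.append(a * x[-1] + b * error)    # learn from error, forget otherwise
        print("predicted ability after 8 weeks:", round(x[-1], 3))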

  3. Computational technology of multiscale modeling the gas flows in microchannels

    NASA Astrophysics Data System (ADS)

    Podryga, V. O.

    2016-11-01

    The work is devoted to modeling gas mixture flows in engineering microchannels when the computational domain spans many scales. A computational technology is presented that uses a multiscale approach combining macro- and microscopic models. At the macrolevel, the nature of the flow and the external influences on it are considered; the system of quasigasdynamic equations is selected as the model. At the microlevel, the gasdynamic parameters are corrected and the boundary conditions determined; Newton's equations and the molecular dynamics method are selected as the numerical model. Different algorithm types used to implement the multiscale modeling are considered, and results of model problems for the separate stages are given.
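
    The microlevel ingredient named above is conventionally integrated with the velocity-Verlet scheme. A self-contained sketch (a Lennard-Jones potential and a small lattice of particles are assumptions standing in for the actual gas-gas and gas-wall interactions):

        import numpy as np

        def lj_forces(r, eps=1.0, sigma=1.0):
            """Pairwise Lennard-Jones forces for positions r of shape (N, 3)."""
            d = r[:, None, :] - r[None, :, :]
            r2 = np.einsum('ijk,ijk->ij', d, d)
            np.fill_diagonal(r2, np.inf)               # no self-interaction
            inv6 = (sigma ** 2 / r2) ** 3
            mag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2
            return np.einsum('ij,ijk->ik', mag, d)

        def velocity_verlet_step(r, v, f, dt, m=1.0):
            v_half = v + 0.5 * dt * f / m              # half-kick
            r_new = r + dt * v_half                    # drift
            f_new = lj_forces(r_new)
            return r_new, v_half + 0.5 * dt * f_new / m, f_new

        grid = np.arange(3) * 1.5
        r = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T   # 27 atoms
        v = np.random.default_rng(2).normal(0.0, 0.3, r.shape)
        f = lj_forces(r)
        for _ in range(100):
            r, v, f = velocity_verlet_step(r, v, f, dt=1e-3)
        print("kinetic energy:", 0.5 * np.sum(v ** 2))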

  4. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescope.

  5. Establishing a Cloud Computing Success Model for Hospitals in Taiwan.

    PubMed

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  6. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    NASA Astrophysics Data System (ADS)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium, and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data, and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to a local vertical datum. This research presents a package called GRAVTool, based on MATLAB, to compute local geoid models by the RCR technique, and its application to a study area. The study area comprises the Federal District of Brazil, covering ~6000 km² of undulating relief with heights from 600 m to 1340 m, located between 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical example shows the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data at 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m, minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 randomly spaced points where the geoid was determined by geometric leveling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m, minimum = -0.040 m).
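
    In schematic form, the RCR decomposition implemented by such packages is commonly written as below (generic notation, not copied from the GRAVTool documentation):

        % remove: subtract global-model and terrain contributions from observations
        \Delta g_{\mathrm{res}} = \Delta g_{\mathrm{obs}} - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{RTM}}
        % compute: Stokes integration of the residual anomalies
        N_{\mathrm{res}} = \frac{R}{4\pi\gamma} \iint_{\sigma} \Delta g_{\mathrm{res}}\, S(\psi)\, \mathrm{d}\sigma
        % restore: add back the removed long- and short-wavelength geoid parts
        N = N_{\mathrm{GGM}} + N_{\mathrm{res}} + N_{\mathrm{RTM}}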

  7. Dynamic Stall Computations Using a Zonal Navier-Stokes Model

    DTIC Science & Technology

    1988-06-01

    ... 48 computer and is used to calculate the flow field about a NACA 0012 airfoil oscillating in pitch. Surface pressure distributions and integrated lift, pitching moment, and drag coefficient versus angle of attack are compared to existing experimental data for four cases and existing computational ...

  8. Computational Electromagnetic Modeling of SansEC(Trade Mark) Sensors

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.; Dudley, Kenneth L.; Szatkowski, George N.

    2011-01-01

    This paper describes the preliminary effort to apply computational design tools to aid in the development of an electromagnetic SansEC resonant sensor composite materials damage detection system. The computational methods and models employed on this research problem will evolve in complexity over time and will lead to the development of new computational methods and experimental sensor systems that demonstrate the capability to detect, diagnose, and monitor the damage of composite materials and structures on aerospace vehicles.

  9. Multiscale Modeling in Computational Biomechanics: Determining Computational Priorities and Addressing Current Challenges

    SciTech Connect

    Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.; Erdemir, Ahmet; Guess, Trent; Reinbolt, Jeff

    2009-05-01

    In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.

  10. Modeling Trait Anxiety: From Computational Processes to Personality

    PubMed Central

    Raymond, James G.; Steele, J. Douglas; Seriès, Peggy

    2017-01-01

    Computational methods are increasingly being applied to the study of psychiatric disorders. Often, this involves fitting models to the behavior of individuals with subclinical character traits that are known vulnerability factors for the development of psychiatric conditions. Anxiety disorders can be examined with reference to the behavior of individuals high in “trait” anxiety, which is a known vulnerability factor for the development of anxiety and mood disorders. However, it is not clear how this self-report measure relates to neural and behavioral processes captured by computational models. This paper reviews emerging computational approaches to the study of trait anxiety, specifying how interacting processes susceptible to analysis using computational models could drive a tendency to experience frequent anxious states and promote vulnerability to the development of clinical disorders. Existing computational studies are described in the light of this perspective and appropriate targets for future studies are discussed. PMID:28167920

  11. Super-Micro Computer Weather Prediction Model

    DTIC Science & Technology

    1990-06-01

    [Table-of-contents fragments: model equations; grid domain and horizontal nesting; time integration and outer lateral boundary condition; coupling of the model with ...; eddy diffusion sensitivity tests; domain for prototype testing; comparison of the boundary-layer parameterizations.] A comparison of the model, including radiation calculations, with other boundary-layer work is presented in section 5, and the report concludes with section 6.

  12. Computational modeling in cognitive science: a manifesto for change.

    PubMed

    Addyman, Caspar; French, Robert M

    2012-07-01

    Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces.  For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes it almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.

  13. A model for computing at the SSC (Superconducting Super Collider)

    SciTech Connect

    Baden, D. (Dept. of Physics); Grossman, R. (Lab. for Advanced Computing)

    1990-06-01

    High energy physics experiments at the Superconducting Super Collider (SSC) will show a substantial increase in complexity and cost over existing forefront experiments, and computing needs may no longer be met via simple extrapolations from the previous experiments. We propose a model for computing at the SSC based on technologies common in private industry involving both hardware and software. 11 refs., 1 fig.

  14. Computer Mediated Social Justice: A New Model for Educators.

    ERIC Educational Resources Information Center

    Tettegah, Sharon

    2002-01-01

    Introduces a new model for analyzing teachers' conversations in computer-mediated communication (CMC) based on information from Bakhtin (1981), Freire (1993), social identity theory, psychological capital, cultural consciousness, and CMC theoretical frameworks. Considers CMC and human-computer interaction (HCI) to address cultural differences that…

  15. Computer Simulation (Microcultures): An Effective Model for Multicultural Education.

    ERIC Educational Resources Information Center

    Nelson, Jorge O.

    This paper presents a rationale for using high-fidelity computer simulation in planning for and implementing effective multicultural education strategies. Using computer simulation, educators can begin to understand and plan for the concept of cultural sensitivity in delivering instruction. The model promises to emphasize teachers' understanding…

  16. Several Computational Opportunities and Challenges Associated with Climate Change Modeling

    SciTech Connect

    Wang, Dali; Post, Wilfred M; Wilson, Bruce E

    2010-01-01

    One of the key factors in the improved understanding of climate science is the development and improvement of high fidelity climate models. These models are critical for projections of future climate scenarios, as well as for highlighting the areas where further measurement and experimentation are needed for knowledge improvement. In this paper, we focus on several computing issues associated with climate change modeling. First, we review a fully coupled global simulation and a nested regional climate model to demonstrate key design components, and then we explain the underlying restrictions associated with the temporal and spatial scales of climate change modeling. We then discuss the role of high-end computers in climate change science. Finally, we explain the importance of fostering regional, integrated climate impact analysis. Although we focus on the computational challenges associated with climate change modeling, we hope these considerations will also be beneficial to many other modeling research programs involving multiscale system dynamics.

  17. Computational Morphodynamics: A modeling framework to understand plant growth

    PubMed Central

    Chickarmane, Vijay; Roeder, Adrienne H.K.; Tarr, Paul T.; Cunha, Alexandre; Tobin, Cory; Meyerowitz, Elliot M.

    2014-01-01

    Computational morphodynamics utilizes computer modeling to understand the development of living organisms over space and time. Results from biological experiments are used to construct accurate and predictive models of growth. These models are then used to make novel predictions that provide further insight into the processes in question, which can be tested experimentally to confirm or rule out the validity of the computational models. This review highlights two fundamental issues: (1) models should span and integrate single-cell behavior with tissue development, and (2) the feedback between the mechanics of growth and chemical or molecular signaling must be understood. We review different approaches to modeling plant growth and discuss a variety of model types that can be implemented, with the aim of demonstrating how this methodology can be used to explore the morphodynamics of plant development. PMID:20192756

  18. Cancer evolution: mathematical models and computational inference.

    PubMed

    Beerenwinkel, Niko; Schwarz, Roland F; Gerstung, Moritz; Markowetz, Florian

    2015-01-01

    Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy.
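
    To make the population-dynamics strand of this review concrete, the following minimal Python sketch simulates a birth-death branching process in which divisions occasionally add a driver mutation; all rates and the fitness term are illustrative assumptions, not values from the paper.

    ```python
    import random

    # Minimal birth-death branching process with driver mutations
    # (illustrative rates, not taken from the review).
    BIRTH, DEATH, MUT_RATE = 0.30, 0.25, 0.01  # per cell per generation

    def simulate(generations=50, seed=1):
        random.seed(seed)
        # population[k] = number of cells carrying k driver mutations
        population = {0: 10}
        for _ in range(generations):
            nxt = {}
            for k, n in population.items():
                birth = BIRTH * (1 + 0.1 * k)  # drivers confer a fitness advantage
                for _ in range(n):
                    r = random.random()
                    if r < birth:                           # division
                        k_child = k + (random.random() < MUT_RATE)
                        nxt[k] = nxt.get(k, 0) + 1
                        nxt[k_child] = nxt.get(k_child, 0) + 1
                    elif r < birth + DEATH:                 # death
                        continue
                    else:                                   # survival
                        nxt[k] = nxt.get(k, 0) + 1
            population = nxt
        return population

    print(simulate())  # distribution of cells over driver-mutation counts
    ```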

  19. Computer modeling of tactical high frequency antennas

    NASA Astrophysics Data System (ADS)

    Gregory, Bobby G., Jr.

    1992-06-01

    The purpose of this thesis was to compare the performance of three tactical high frequency antennas considered as possible replacements for the Tactical Data Communications Central (TDCC) antennas. The antennas were modeled using the Numerical Electromagnetics Code, Version 3 (NEC3), and the Eyring Low Profile and Buried Antenna Modeling Program (PAT7) for several different frequencies and ground conditions. Performance was evaluated by comparing gain at the desired takeoff angles, the voltage standing wave ratio of each antenna, and its omni-directional capability. The buried antenna models, the ELPA-302 and the horizontal dipole, were most effective when employed over poor ground conditions. The best performance under all conditions tested was demonstrated by the HT-20T. Each of these antennas has tactical advantages and disadvantages and can optimize communications under certain conditions; the selection of the best antenna is situation dependent. An experimental test of these models is recommended to verify the modeling results.

  20. Cancer Evolution: Mathematical Models and Computational Inference

    PubMed Central

    Beerenwinkel, Niko; Schwarz, Roland F.; Gerstung, Moritz; Markowetz, Florian

    2015-01-01

    Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy. PMID:25293804

  1. Computational model for Halorhodopsin photocurrent kinetics

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Stefanescu, Roxana; Talathi, Sachin

    2013-03-01

    Optogenetics is a rapidly developing optical stimulation technique that employs light-activated ion channels to excite (using channelrhodopsin (ChR)) or suppress (using halorhodopsin (HR)) impulse activity in neurons with high temporal and spatial resolution. This technique holds enormous potential to externally control activity states in neuronal networks. The channel kinetics of ChR and HR are well understood and amenable to mathematical modeling. Significant progress has been made in recent years to develop models for ChR channel kinetics. To date, however, there is no model to mimic photocurrents produced by HR. Here, we report the first model developed for HR photocurrents, based on a four-state model of the HR photocurrent kinetics. The model provides an excellent fit (root-mean-square error of 3.1862x10-4) to an empirical profile of experimentally measured HR photocurrents. In combination, mathematical models for ChR and HR photocurrents can provide effective means to design and test light-based control systems for regulating neural activity, which in turn may have implications for the development of novel light-based stimulation paradigms for brain disease control. I would like to thank the University of Florida and the Physics Research Experience for Undergraduates (REU) program, funded through NSF DMR-1156737. This research was also supported through start-up funds provided to Dr. Sachin Talathi.
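
    A minimal sketch of what a four-state photocurrent model looks like in code, assuming generic light-gated transitions between two closed and two open states; every rate constant and conductance below is a placeholder, not the paper's fitted value.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Four-state sketch: C1 <-> O1 <-> O2 <-> C2, with light-driven
    # activation rates Ga1, Ga2 switched on during a light pulse.
    def rates(t, light_on=(0.0, 1.0)):
        on = light_on[0] <= t <= light_on[1]
        return (2.0, 1.0) if on else (0.0, 0.0)   # placeholder Ga1, Ga2

    def rhs(t, y):
        C1, O1, O2, C2 = y
        Ga1, Ga2 = rates(t)
        Gd1, Gd2, e12, e21, Gr = 0.5, 0.1, 0.3, 0.2, 0.05  # placeholder rates
        dC1 = -Ga1 * C1 + Gd1 * O1 + Gr * C2
        dO1 = Ga1 * C1 - (Gd1 + e12) * O1 + e21 * O2
        dO2 = e12 * O1 - (Gd2 + e21) * O2 + Ga2 * C2
        dC2 = Gd2 * O2 - (Ga2 + Gr) * C2
        return [dC1, dO1, dO2, dC2]    # populations sum to 1 at all times

    sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0, 0.0, 0.0],
                    max_step=0.05, dense_output=True)
    t = np.linspace(0.0, 3.0, 301)
    C1, O1, O2, C2 = sol.sol(t)
    g1, g2 = 1.0, 0.4                  # relative conductances of the open states
    photocurrent = g1 * O1 + g2 * O2   # up to a driving-force scale factor
    ```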

  2. Computational modeling and engineering in pediatric and congenital heart disease

    PubMed Central

    Marsden, Alison L.; Feinstein, Jeffrey A.

    2015-01-01

    Purpose of review Recent methodological advances in computational simulations are enabling increasingly realistic simulations of hemodynamics and physiology, driving increased clinical utility. We review recent developments in the use of computational simulations in pediatric and congenital heart disease, describe the clinical impact of modeling in single ventricle patients, and provide an overview of emerging areas. Recent findings Multiscale modeling combining patient-specific hemodynamics with reduced order (i.e. mathematically and computationally simplified) circulatory models has become the de facto standard for modeling local hemodynamics and “global” circulatory physiology. We review recent advances that have enabled faster solutions, discuss new methods (e.g. fluid-structure interaction and uncertainty quantification) that lend computational and clinical realism to results, highlight novel computationally derived surgical methods for single ventricle patients, and discuss areas in which modeling has begun to exert its influence, including Kawasaki disease, fetal circulation, tetralogy of Fallot (and the pulmonary tree), and circulatory support. Summary Computational modeling is emerging as a crucial tool for clinical decision-making and evaluation of novel surgical methods and interventions in pediatric cardiology and beyond. Continued development of modeling methods, with an eye towards clinical needs, will enable clinical adoption in a wide range of pediatric and congenital heart diseases. PMID:26262579

  3. COMPUTATIONAL MODELING OF TCDD DISRUPTION OF B CELL TERMINAL DIFFERENTIATION

    EPA Science Inventory

    In this study, we established a computational model describing the molecular circuit underlying B cell terminal differentiation and how TCDD may affect this process by impinging upon various molecular targets.

  4. Computational Modeling, Formal Analysis, and Tools for Systems Biology

    PubMed Central

    Bartocci, Ezio; Lió, Pietro

    2016-01-01

    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest within systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science. PMID:26795950

  5. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
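
    Of the two techniques emphasized above, the proper orthogonal decomposition is the easier to illustrate: a reduced basis is extracted from the SVD of a snapshot matrix. The sketch below uses synthetic snapshots as a stand-in for flow-solver output.

    ```python
    import numpy as np

    # Minimal POD sketch: columns of `snapshots` are full-order states
    # sampled in time; the leading left singular vectors form the basis.
    rng = np.random.default_rng(0)
    n_dof, n_snapshots, r = 500, 40, 5

    snapshots = rng.standard_normal((n_dof, n_snapshots))  # synthetic data
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

    basis = U[:, :r]                      # r dominant POD modes
    energy = s[:r] ** 2 / (s ** 2).sum()  # fraction of variance per mode

    # A full-order state is approximated by r modal coordinates:
    x = snapshots[:, [0]]
    a = basis.T @ (x - mean)              # project onto the reduced space
    x_rom = mean + basis @ a              # reconstruct from the reduced model
    ```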

  6. Computer models and output, Spartan REM: Appendix B

    NASA Technical Reports Server (NTRS)

    Marlowe, D. S.; West, E. J.

    1984-01-01

    A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.

  7. An analysis of symbolic linguistic computing models in decision making

    NASA Astrophysics Data System (ADS)

    Rodríguez, Rosa M.; Martínez, Luis

    2013-01-01

    It is common for experts involved in complex real-world decision problems to use natural language to express their knowledge in uncertain frameworks. Language is inherently vague, hence probabilistic decision models are not very suitable in such cases. Therefore, other tools such as fuzzy logic and fuzzy linguistic approaches have been successfully used to model and manage such vagueness. The use of linguistic information implies operating with it, i.e. carrying out processes of computing with words (CWW). Different schemes have been proposed to deal with those processes, and diverse symbolic linguistic computing models have been introduced to accomplish the linguistic computations. In this paper, we overview the relationship between decision making and CWW, and focus on the symbolic linguistic computing models that have been widely used in linguistic decision making, analysing whether all of them can be considered within the CWW paradigm.
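
    One widely used symbolic model the authors analyse is the 2-tuple linguistic representation (Herrera and Martinez, 2000). A minimal sketch, with an invented seven-term scale: a numeric aggregation value beta is encoded as a term plus a symbolic translation alpha, so no information is lost.

    ```python
    # 2-tuple linguistic model: beta in [0, g] maps to (s_i, alpha) with
    # i = round(beta) and alpha = beta - i in [-0.5, 0.5).  The term set
    # below is an invented example, not taken from the paper.
    TERMS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

    def to_two_tuple(beta):
        i = int(round(beta))
        return TERMS[i], beta - i          # (linguistic term, symbolic translation)

    def from_two_tuple(term, alpha):
        return TERMS.index(term) + alpha   # inverse transformation

    # Aggregate three expert opinions given as 2-tuples by arithmetic mean:
    opinions = [("high", 0.0), ("medium", 0.2), ("very_high", -0.4)]
    beta = sum(from_two_tuple(t, a) for t, a in opinions) / len(opinions)
    print(to_two_tuple(beta))  # ('high', -0.0667): "high", slightly below
    ```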

  8. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation on computers and the difficulties encountered on currently available machines is given. Through the use of an analysis and measurement methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  9. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  10. A computational model for a regenerator

    NASA Technical Reports Server (NTRS)

    Gary, J.; Daney, D. E.; Radebaugh, R.

    1985-01-01

    This paper concerns a numerical model of a regenerator running at very low temperatures. The model consists of the usual three equations for a compressible fluid with an additional equation for a matrix temperature. The main difficulty with the model is the very low Mach number (approximately 1.E-3). The divergence of the velocity is not small, the pressure divergence is small, and the pressure fluctuation in time is not small. An asymptotic expansion based on the bounded derivative method of Kreiss is used to give a reduced model which eliminates acoustic waves. The velocity is then determined by a two-point boundary value problem which does not contain a time derivative. The solution obtained from the reduced system is compared with the numerical solution of the original system.

  11. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.

  12. Enhanced absorption cycle computer model. Final report

    SciTech Connect

    Grossman, G.; Wilk, M.

    1993-09-01

    Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of the cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H{sub 2}O triple-effect cycles, LiCl-H{sub 2}O solar-powered open absorption cycles, and NH{sub 3}-H{sub 2}O single-effect and generator-absorber heat exchange cycles. An appendix contains the User's Manual.

  13. Computer modeling of high intensity solar cells

    NASA Astrophysics Data System (ADS)

    Gray, J. L.; Lundstrom, M. S.; Schwartz, R. J.

    1987-01-01

    The purpose of this program is to provide general analytic support to Sandia National Laboratories' effort to develop high efficiency, high concentration solar cells. This report covers work performed between November 5, 1984, and December 31, 1985, and includes reprints of three papers presented at the 18th IEEE Photovoltaic Specialists' Conference. In the first paper, the factors that presently prevent achieving the predicted theoretical efficiencies (in excess of 30% at concentration) are examined. It is demonstrated, by two-dimensional computer simulations, that these efficiencies might be obtained by improved light trapping techniques and by fabrication of low resistance heteroface contacts. The second paper examines the Rose-Weaver lifetime and surface recombination velocity measurement technique. It is shown that very small uncertainties in the measured quantities lead to large uncertainties in the computed lifetime and surface recombination velocity. This leads to radically different interpretations of how the recombination is distributed throughout the device, and therefore limits the usefulness of the measurement technique. Design options and constraints of GaAs concentrator cells are examined in the third paper, and the effectiveness of various design options is assessed. It is shown that although such design options are of little use in increasing the efficiency of heteroface cells, they can improve the efficiency of shallow junction cells so that it is comparable to that of heteroface cells. In addition, documentation describing the use of both the one- and two-dimensional silicon codes, SCAP1D and SCAP2D, as well as the one-dimensional AlGaAs solar cell simulation code, is included.

  14. Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.

    ERIC Educational Resources Information Center

    Baker, Richard

    A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…

  15. Supersonic jet and crossflow interaction: Computational modeling

    NASA Astrophysics Data System (ADS)

    Hassan, Ez; Boles, John; Aono, Hikaru; Davis, Douglas; Shyy, Wei

    2013-02-01

    The supersonic jet-in-crossflow problem, which involves shocks, turbulent mixing, and large-scale vortical structures, requires special treatment of turbulence to obtain accurate solutions. Different turbulence modeling techniques are reviewed and compared in terms of their performance in predicting results consistent with the experimental data. Reynolds-averaged Navier-Stokes (RANS) models are limited in predicting the fuel structure due to their inability to accurately capture unsteadiness in the flow. Large eddy simulation (LES) is not yet practical due to prohibitively large grid requirements near the wall. Hybrid RANS/LES can offer a reasonable compromise between accuracy and efficiency. The hybrid models are based on various approaches such as explicit blending of RANS and LES, detached eddy simulation (DES), and filter-based multi-scale models. In particular, they can be used to evaluate the turbulent Schmidt number modeling techniques used in jet-in-crossflow simulations. Specifically, an adaptive approach can be devised by utilizing the information obtained from the resolved field to help assign the value of the turbulent Schmidt number in the sub-filter field. The adaptive approach combined with the multi-scale model improves the results, especially when highly refined grids are needed to resolve the small structures involved in the mixing process.

  16. Actors: A Model of Concurrent Computation in Distributed Systems.

    DTIC Science & Technology

    1985-06-01

    (Abstract not preserved; the scanned record retains only cover-page text: Technical Report 844, "Actors: A Model of Concurrent Computation in Distributed Systems," Gul A. Agha, MIT Artificial Intelligence Laboratory; approved for public release, distribution unlimited.)

  17. Revised OPTSA Model. Volume 2. Computer Program Documentation

    DTIC Science & Technology

    1975-06-01

    (Abstract not preserved; the scanned report-documentation page retains only: Paper P-1111, "Revised OPTSA Model, Volume 2: Computer Program Documentation," by Lowell Bruce Anderson, Jerome Bracken, and Eleanor L. Schwartz.)

  18. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 Connection Machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 Connection Machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 Connection Machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth of order O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers only the simulation of feed-forward neural networks, although the method is extendable to recurrent networks.
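
    The communication pattern described above, a global summation across processors, can be illustrated with a log-depth pairwise reduction. The sketch below simulates the data-parallel mapping on a toy one-unit network; the slice sizes, learning rate, and toy task are invented for illustration.

    ```python
    import numpy as np

    # Each (simulated) processor computes a weight gradient on its slice
    # of the training set; the only communication is a global summation,
    # done here as a log2(P)-depth pairwise tree reduction.
    P = 8                                   # simulated processors (power of two)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((P * 16, 4))    # inputs, 16 patterns per processor
    y = (X.sum(axis=1) > 0).astype(float)   # toy targets
    w = np.zeros(4)

    def local_gradient(Xp, yp, w):
        # gradient of squared error for a one-layer sigmoid unit on one slice
        z = 1.0 / (1.0 + np.exp(-Xp @ w))
        return Xp.T @ ((z - yp) * z * (1 - z))

    def tree_sum(parts):
        # pairwise combine: O(log P) steps instead of P - 1 sequential adds
        while len(parts) > 1:
            parts = [parts[i] + parts[i + 1] for i in range(0, len(parts), 2)]
        return parts[0]

    for _ in range(200):                    # batch gradient descent
        grads = [local_gradient(X[p*16:(p+1)*16], y[p*16:(p+1)*16], w)
                 for p in range(P)]
        w -= 0.05 * tree_sum(grads)
    ```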

  19. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed in order to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  20. Computational model of miniature pulsating heat pipes

    SciTech Connect

    Martinez, Mario J.; Givler, Richard C.

    2013-01-01

    The modeling work described herein represents the Sandia National Laboratories (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP), which is a device, of planar configuration, that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g. dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  1. Practical Use of Computationally Frugal Model Analysis Methods.

    PubMed

    Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2016-03-01

    Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics makes it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring thousands, tens of thousands, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and to obtain greater scientific insight from ongoing and future modeling efforts.
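
    As an illustration of the frugal end of this spectrum, the sketch below computes local one-at-a-time scaled sensitivities at a cost of p + 1 model runs, executed in parallel; the placeholder model() and step size are assumptions, not the paper's examples.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def model(params):                       # hypothetical placeholder model
        k, s, r = params
        return k * s / (1.0 + r)

    def scaled_sensitivities(params, rel_step=0.01):
        runs = [list(params)]
        for j in range(len(params)):         # perturb one parameter at a time
            perturbed = list(params)
            perturbed[j] *= 1.0 + rel_step
            runs.append(perturbed)
        with ProcessPoolExecutor() as pool:  # the p + 1 runs are independent
            outputs = list(pool.map(model, runs))
        base = outputs[0]
        # dimensionless scaled sensitivity: (dy/dp_j) * p_j
        return [(outputs[1 + j] - base) / (params[j] * rel_step) * params[j]
                for j in range(len(params))]

    if __name__ == "__main__":
        print(scaled_sensitivities([2.0, 0.5, 0.1]))
    ```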

  2. Computational quantum chemistry and adaptive ligand modeling in mechanistic QSAR.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2010-10-01

    Drugs are adaptive molecules. They realize this peculiarity by generating different ensembles of prototropic forms and conformers that depend on the environment. Among the impressive amount of available computational drug discovery technologies, quantitative structure-activity relationship approaches that rely on computational quantum chemistry descriptors are the most appropriate to model adaptive drugs. Indeed, computational quantum chemistry descriptors are able to account for the variation of the intramolecular interactions of the training compounds, which reflect their adaptive intermolecular interaction propensities. This enables the development of causative, interpretive and reasonably predictive quantitative structure-activity relationship models, and, hence, sound chemical information finalized to drug design and discovery.

  3. Computer models to study uterine activation at labour.

    PubMed

    Sharp, G C; Saunders, P T K; Norman, J E

    2013-11-01

    Improving our understanding of the initiation of labour is a major aim of modern obstetric research, in order to better diagnose and treat pregnant women in which the process occurs abnormally. In particular, increased knowledge will help us identify the mechanisms responsible for preterm labour, the single biggest cause of neonatal morbidity and mortality. Attempts to improve our understanding of the initiation of labour have been restricted by the inaccessibility of gestational tissues to study during pregnancy and at labour, and by the lack of fully informative animal models. However, computer modelling provides an exciting new approach to overcome these restrictions and offers new insights into uterine activation during term and preterm labour. Such models could be used to test hypotheses about drugs to treat or prevent preterm labour. With further development, an effective computer model could be used by healthcare practitioners to develop personalized medicine for patients on a pregnancy-by-pregnancy basis. Very promising work is already underway to build computer models of the physiology of uterine activation and contraction. These models aim to predict changes and patterns in uterine electrical excitation during term labour. There have been far fewer attempts to build computer models of the molecular pathways driving uterine activation and there is certainly scope for further work in this area. The integration of computer models of the physiological and molecular mechanisms that initiate labour will be particularly useful.

  4. Computer modeling of electrical performance of detonators

    SciTech Connect

    Furnberg, C.M.; Peevy, G.R.; Brigham, W.P.; Lyons, G.R.

    1995-05-01

    An empirical model of detonator electrical performance, which describes the resistance of the exploding bridgewire (EBW) or exploding foil initiator (EFI or slapper) as a function of energy deposition, is described. This model features many parameters that can be adjusted to obtain a close fit to experimental data, as has been demonstrated using recent experimental data taken with the cable discharge system located at Sandia National Laboratories. This paper is a continuation of the paper entitled "Cable Discharge System for Fundamental Detonator Studies" presented at the 2nd NASA/DOD/DOE Pyrotechnic Workshop.
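
    The fitting workflow such an empirical model implies can be sketched as follows; the functional form, data, and parameter names are invented placeholders, not the model described in the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Parameterize bridgewire resistance as a function of deposited energy
    # and tune the parameters against discharge data (all values invented).
    def resistance(e, r0, a, b):
        return r0 + a * e**b              # hypothetical monotone form

    e_data = np.linspace(0.05, 1.0, 20)   # J, stand-in measurements
    r_data = (0.05 + 0.4 * e_data**1.5
              + 0.01 * np.random.default_rng(0).standard_normal(20))  # ohms

    params, _ = curve_fit(resistance, e_data, r_data, p0=[0.05, 0.3, 1.0])
    print(params)                         # fitted r0, a, b
    ```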

  5. Computer Models Simulate Fine Particle Dispersion

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  6. Computational social network modeling of terrorist recruitment.

    SciTech Connect

    Berry, Nina M.; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-10-01

    The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science contributed significantly to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.

  7. A Computer Model for Direct Carbonate Fuel Cells

    SciTech Connect

    Ding, J.; Patel, P.S.; Farooque, M.; Maru, H.C.

    1997-04-01

    A 3-D computer model, describing fluid flow, heat and mass transfer, and chemical and electrochemical reaction processes, has been developed for guiding direct carbonate fuel cell (DFC) stack design. This model is able to analyze the direct internal reforming (DIR) as well as the integrated IIR (indirect internal reforming)-DIR designs. Reasonable agreement between computed results and fuel cell test data, such as flow variations, temperature distributions, cell potentials, exhaust gas compositions, and methane conversions, was obtained. Details of the model and comparisons of the modeling results with experimental DFC stack data are presented in the paper.

  8. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  9. Computational Modeling Develops Ultra-Hard Steel

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.

  10. Computer Modeling of Ceramic Boride Composites

    DTIC Science & Technology

    2014-11-01

    (Abstract not preserved; the record excerpt retains only fragments describing geometric, energy, and electronic (quantum-mechanical) models of interfaces, mechanical properties of quasi-binary composites accounting for interaction between phases at the interface, calculation of linear thermal properties, and the open question of the temperature dependence of mechanical characteristics within quantum-mechanical calculations.)

  11. A Computational Model of Spatial Development

    NASA Astrophysics Data System (ADS)

    Hiraki, Kazuo; Sashima, Akio; Phillips, Steven

    Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands, and a forward model of motor commands to desired sensory input (goals). The robot was tested on the `three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-centered (egocentric); when given the ability of self-locomotion, the robot responds allocentrically.
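
    A minimal sketch of the forward-model component on an invented one-dimensional version of the task: learn the next egocentric object position from (position, motor command) pairs, then chain predictions while the object is hidden. The linear learner stands in for the network used in the paper.

    ```python
    import numpy as np

    # Learn to predict the next (egocentric) object position from the
    # current position and the motor command; "mental tracking" chains
    # the model's own predictions when there is no visual input.
    rng = np.random.default_rng(0)
    pos = rng.uniform(-5, 5, 1000)          # object position relative to robot
    move = rng.uniform(-1, 1, 1000)         # self-locomotion command
    nxt = pos - move                        # moving right shifts the object left

    X = np.column_stack([pos, move, np.ones_like(pos)])
    w, *_ = np.linalg.lstsq(X, nxt, rcond=None)   # fit the forward model

    def mentally_track(p0, moves):
        p = p0
        for m in moves:                     # chain predictions, no vision
            p = np.array([p, m, 1.0]) @ w
        return p

    print(mentally_track(2.0, [0.5, 0.5, -0.2]))  # ~ 2.0 - 0.8 = 1.2
    ```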

  12. Computer Model Simulates Air Pollution Over Roads

    ERIC Educational Resources Information Center

    Environmental Science and Technology, 1972

    1972-01-01

    A sophisticated modeling technique which predicts pollutant movement accurately and may aid in the design of new freeways is reported. EXPLOR (Examination of Pollution Levels of Roadways) was developed specifically to predict pollutant concentrations in a mile-wide corridor traversed by a roadway. (BL)

  13. Enabling Grid Computing resources within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  14. Computational Modeling and High Performance Computing in Advanced Materials Processing, Synthesis, and Design

    DTIC Science & Technology

    2014-12-07

    (Abstract not preserved; the record excerpt retains only table-of-contents fragments indicating that the project focused on the synergistic coupling of computational materials science and mechanics of hybrid, lightweight polymeric composite structures, including atomistic modeling of polymer nanocomposite systems.)

  15. Interactive computational models of particle dynamics using virtual reality

    SciTech Connect

    Canfield, T.; Diachin, D.; Freitag, L.; Heath, D.; Herzog, J.; Michels, W.

    1996-12-31

    An increasing number of industrial applications rely on computational models to reduce costs in product design, development, and testing cycles. Here, the authors discuss an interactive environment for the visualization, analysis, and modification of computational models used in industrial settings. In particular, they focus on interactively placing massless, massed, and evaporating particulate matter in computational fluid dynamics applications. They discuss the numerical model used to compute the particle pathlines in the fluid flow for display and analysis, and briefly describe the toolkits developed for vector and scalar field visualization, interactive particulate source placement, and a three-dimensional GUI interface. This system is currently used in two industrial applications, and the tools are presented in the context of these applications. The authors summarize the current state of the project and offer directions for future research.

  16. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix.
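
    The embarrassingly parallel pattern the tutorial teaches looks like this in Python (the article itself uses MATLAB and R); the stand-in risk model and batch sizes are invented for illustration.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import random

    # Each worker runs an independent batch of simulation replications
    # with its own seed; results are only combined at the end.
    def replicate(seed, n=100_000):
        rng = random.Random(seed)
        # stand-in risk model: fraction of trials where demand exceeds capacity
        return sum(rng.gauss(0.9, 0.2) > 1.0 for _ in range(n)) / n

    if __name__ == "__main__":
        seeds = range(8)                        # one batch per worker
        with ProcessPoolExecutor() as pool:
            estimates = list(pool.map(replicate, seeds))
        print(sum(estimates) / len(estimates))  # pooled probability estimate
    ```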

  17. Computational challenges in modeling and simulating living matter

    NASA Astrophysics Data System (ADS)

    Sena, Alexandre C.; Silva, Dilson; Marzulo, Leandro A. J.; de Castro, Maria Clicia Stelling

    2016-12-01

    Computational modeling has been successfully used to help scientists understand physical and biological phenomena. Recent technological advances allow the simulation of larger systems with greater accuracy. However, devising those systems requires new approaches and novel architectures, such as the use of parallel programming, so that applications can run in the new high-performance environments, which are often computer clusters composed of different computation devices such as traditional CPUs, GPGPUs, Xeon Phis and even FPGAs. It is expected that scientists will take advantage of the increasing computational power to model and simulate more complex structures and even merge different models into larger and more extensive ones. This paper aims at discussing the challenges of using those devices to simulate such complex systems.

  18. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model, designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
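
    The time-marching structure described above can be sketched as follows; the open-circuit-voltage curve, resistance, and inventory values are placeholders, not the thesis's property models.

    ```python
    # Single control volume marched in time, with composition updated
    # from Faraday's law each step (all property values invented).
    F = 96485.0                        # C/mol

    def ocv(dod):                      # placeholder open-circuit voltage curve
        return 2.075 if dod < 0.60 else 2.075 - 0.5 * (dod - 0.60)

    def simulate(i_app=5.0, n_na=2.0, dt=10.0, t_end=3600.0):
        """March species and voltage forward under constant current i_app [A]."""
        n0, n, t, history = n_na, n_na, 0.0, []
        while t < t_end and n > 0.0:
            n -= i_app / F * dt        # Na consumed: dn/dt = I/(zF), z = 1
            dod = 1.0 - n / n0         # depth of discharge
            v = ocv(dod) - i_app * 0.01  # cell voltage with ohmic drop (0.01 ohm)
            history.append((t, dod, v))
            t += dt
        return history

    print(simulate()[-1])              # final (time, depth of discharge, voltage)
    ```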

  19. Images as drivers of progress in cardiac computational modelling.

    PubMed

    Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A; Bishop, Martin J; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2014-08-01

    Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved.

  20. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  1. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  2. Instability phenomena in plasticity: Modelling and computation

    NASA Astrophysics Data System (ADS)

    Stein, E.; Steinmann, P.; Miehe, C.

    1995-12-01

    We presented aspects and results related to the broad field of strain localization, with special focus on large strain elastoplastic response. Therefore, we first re-examined issues related to the classification of discontinuities and the classical description of localization, with a particular emphasis on an Eulerian geometric representation. We touched on the problem of mesh objectivity and discussed results of a particular regularization method, namely the micropolar approach. Generally, regularization has to preserve ellipticity and to reflect the underlying physics. For example, ductile materials have to be modelled including viscous effects, whereas geomaterials are adequately described by the micropolar approach. Then we considered localization phenomena within solids undergoing large strain elastoplastic deformations. Here, we documented the influence of isotropic damage on the failure analysis. Next, the interesting influence of an orthotropic yield condition on the spatial orientation of localized zones was studied. Finally, we investigated the localization condition for an algorithmic model of finite strain single crystal plasticity.

  3. Computer models for amorphous silicon hydrides

    NASA Astrophysics Data System (ADS)

    Mousseau, Normand; Lewis, Laurent J.

    1990-02-01

    A procedure for generating fully coordinated model structures appropriate to hydrogenated amorphous semiconductors is described. The hydrogen is incorporated into an amorphous matrix using a bond-switching process similar to that proposed by Wooten, Winer, and Weaire, which ensures that fourfold coordination is preserved. After each inclusion of hydrogen, the structure is relaxed using a finite-temperature Monte Carlo algorithm. The method is applied to a-Si:H at various hydrogen concentrations. The resulting model structures are found to be in excellent agreement with recent neutron-scattering measurements on a sample with 12 at. % H. Our prescription, which is essentially nonlocal, allows great flexibility and can easily be extended to related systems.
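
    The acceptance step at the heart of such a bond-switching procedure is a standard Metropolis test. In the sketch below, energy() and propose_switch() stand in for the relaxation and coordination-preserving rebonding the paper describes.

    ```python
    import math
    import random

    # Metropolis acceptance for a trial bond switch: the switch preserves
    # fourfold coordination by construction, and is kept with Boltzmann
    # probability.  energy() and propose_switch() are placeholders.
    def metropolis_step(network, energy, propose_switch, kT=0.25):
        trial = propose_switch(network)      # rebond two neighboring sites
        dE = energy(trial) - energy(network)
        if dE <= 0 or random.random() < math.exp(-dE / kT):
            return trial                     # accept the switch
        return network                       # reject: keep the old topology
    ```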

  4. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Gregory Beylkin

    2012-03-23

    Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

  5. Computational modeling of nuclear thermal rockets

    NASA Technical Reports Server (NTRS)

    Peery, Steven D.

    1993-01-01

    The topics are presented in viewgraph form and include the following: rocket engine transient simulation (ROCETS) system; ROCETS performance simulations composed of integrated component models; ROCETS system architecture significant features; ROCETS engineering nuclear thermal rocket (NTR) modules; ROCETS system easily adapts Fortran engineering modules; ROCETS NTR reactor module; ROCETS NTR turbomachinery module; detailed reactor analysis; predicted reactor power profiles; turbine bypass impact on system; and ROCETS NTR engine simulation summary.

  6. Computer Modeling of Complete IC Fabrication Process.

    DTIC Science & Technology

    1984-01-01


  7. Computational Modeling of Lipid Metabolism in Yeast

    PubMed Central

    Schützhold, Vera; Hahn, Jens; Tummler, Katja; Klipp, Edda

    2016-01-01

    Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism, which is characterized by many different species with only slight differences as well as by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. While all lipid species are treated as objects in the model, they can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach makes it possible to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation and different headgroups over time, and to analyze the effect of parameter changes, potential mutations in the catalyzing enzymes or provision of different precursors. Applied to yeast metabolism during one cell cycle period, the model allowed us to analyze the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach makes it possible to treat the complexity of cellular lipid metabolism efficiently and to derive conclusions on the time- and location-dependent distributions of lipid species and their properties, such as saturation. It is widely applicable, easily extendable and will provide further insights in healthy and diseased states of cell metabolism. PMID:27730126
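
    To make the objects-plus-rules idea concrete, here is a deliberately small sketch in which each lipid is an object carrying chain length and saturation, and reaction rules pick a random eligible substrate to transform. The species, rules, and weights are invented for illustration and are not the paper's yeast model.

        import random

        class Lipid:
            """A lipid object; attributes stand in for chain length and saturation."""
            def __init__(self, chain=12, double_bonds=0):
                self.chain = chain
                self.double_bonds = double_bonds
            def __repr__(self):
                return f"C{self.chain}:{self.double_bonds}"

        def elongate(pool, rng):
            """Rule: extend a random chain shorter than 18 carbons by two."""
            cands = [l for l in pool if l.chain < 18]
            if cands:
                rng.choice(cands).chain += 2

        def desaturate(pool, rng):
            """Rule: add a double bond to a random long, mostly saturated chain."""
            cands = [l for l in pool if l.chain >= 16 and l.double_bonds < 2]
            if cands:
                rng.choice(cands).double_bonds += 1

        RULES, WEIGHTS = [elongate, desaturate], [1.0, 0.4]

        def simulate(n_lipids=20, steps=200, seed=1):
            rng = random.Random(seed)
            pool = [Lipid() for _ in range(n_lipids)]
            for _ in range(steps):
                rng.choices(RULES, weights=WEIGHTS)[0](pool, rng)  # fire one rule
            return pool

        print(simulate())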

  8. Computational Modeling of Lipid Metabolism in Yeast.

    PubMed

    Schützhold, Vera; Hahn, Jens; Tummler, Katja; Klipp, Edda

    2016-01-01

    Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism, which is characterized by many different species with only slight differences as well as by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. While all lipid species are treated as objects in the model, they can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach makes it possible to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation and different headgroups over time, and to analyze the effect of parameter changes, potential mutations in the catalyzing enzymes or provision of different precursors. Applied to yeast metabolism during one cell cycle period, the model allowed us to analyze the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach makes it possible to treat the complexity of cellular lipid metabolism efficiently and to derive conclusions on the time- and location-dependent distributions of lipid species and their properties, such as saturation. It is widely applicable, easily extendable and will provide further insights in healthy and diseased states of cell metabolism.

  9. Oxygen and seizure dynamics: II. Computational modeling

    PubMed Central

    Wei, Yina; Ullah, Ghanim; Ingram, Justin

    2014-01-01

    Electrophysiological recordings show intense neuronal firing during epileptic seizures leading to enhanced energy consumption. However, the relationship between oxygen metabolism and seizure patterns has not been well studied. Recent studies have developed fast and quantitative techniques to measure oxygen microdomain concentration during seizure events. In this article, we develop a biophysical model that accounts for these experimental observations. The model is an extension of the Hodgkin-Huxley formalism and includes the neuronal microenvironment dynamics of sodium, potassium, and oxygen concentrations. Our model accounts for metabolic energy consumption during and following seizure events. We can further account for the experimental observation that hypoxia can induce seizures, with seizures occurring only within a narrow range of tissue oxygen pressure. We also reproduce the interplay between excitatory and inhibitory neurons seen in experiments, accounting for the different oxygen levels observed during seizures in excitatory vs. inhibitory cell layers. Our findings offer a more comprehensive understanding of the complex interrelationship among seizures, ion dynamics, and energy metabolism. PMID:24671540
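
    A minimal sketch of the kind of coupling such a model requires is given below: the Na/K pump clears extracellular potassium at a rate limited by local oxygen, while oxygen diffuses in from a bath and is consumed by pump activity. The equations and constants are toy assumptions for illustration, not the authors' published formulation.

        def simulate(k_out=10.0, o2=30.0, o2_bath=32.0, dt=1e-3, t_end=200.0):
            """Forward-Euler integration of toy extracellular K+ / tissue O2 dynamics."""
            t = 0.0
            while t < t_end:
                # Pump rate saturates in both K+_o and O2 (Michaelis-Menten forms).
                pump = 1.5 * (o2 / (o2 + 10.0)) * (k_out / (k_out + 3.0))
                release = 0.8                              # K+ efflux from firing
                dk = release - pump
                do2 = 0.1 * (o2_bath - o2) - 0.5 * pump    # diffusion in, consumption out
                k_out += dt * dk
                o2 += dt * do2
                t += dt
            return k_out, o2

        k, o = simulate()
        print(f"steady extracellular K+ ~ {k:.2f} mM, tissue O2 ~ {o:.2f} mmHg")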

  10. A cognitive model for problem solving in computer science

    NASA Astrophysics Data System (ADS)

    Parham, Jennifer R.

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in solving them. Approaching assessment from this perspective would reveal potential errors leading to incorrect solutions. This dissertation proposes a model describing how people solve computational problems by storing, retrieving, and manipulating information and knowledge. It describes how metacognition interacts with schemata representing conceptual and procedural knowledge, as well as with the external sources of information that might be needed to arrive at a solution. Metacognition includes higher-order, executive processes responsible for controlling and monitoring schemata, which in turn represent the algorithmic knowledge needed for organizing and adapting concepts to a specific domain. The model illustrates how metacognitive processes interact with the knowledge represented by schemata as well as the information from external sources. This research investigates the differences in the way computer science novices use their metacognition and schemata to solve a computer programming problem. After J. Parham and L. Gugerty reached an 85% reliability for six metacognitive processes and six domain-specific schemata for writing a computer program, the resulting vocabulary provided the foundation for supporting the existence of and the interaction between metacognition, schemata, and external sources of information in computer programming. Overall, the participants in this research used their schemata 6% more than their metacognition and their metacognitive processes to control and monitor their schemata used to write a computer program. This research has potential implications in computer science education and software

  11. Simulation model of load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key subjects of research, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating the input, intermediate and output databases are no less important. The main tasks of such a balancing system are load and condition monitoring of the compute nodes and the selection of a node to which a user's request is routed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system whose infrastructure changes dynamically is an important task.
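
    As a toy version of the node-selection step described here, the sketch below routes each incoming task to the node with the smallest accumulated load and compares the resulting makespan against random assignment. The task-duration distribution and node count are illustrative assumptions.

        import random

        def assign(tasks, n_nodes, pick):
            """Assign task durations to nodes; `pick` chooses a node given current loads."""
            loads = [0.0] * n_nodes
            for t in tasks:
                loads[pick(loads)] += t
            return max(loads)  # makespan: finish time of the busiest node

        rng = random.Random(42)
        tasks = [rng.uniform(0.1, 2.0) for _ in range(500)]

        least_loaded = lambda loads: min(range(len(loads)), key=loads.__getitem__)
        random_node = lambda loads: rng.randrange(len(loads))

        print("least-loaded makespan:", round(assign(tasks, 8, least_loaded), 2))
        print("random makespan:     ", round(assign(tasks, 8, random_node), 2))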

  12. Computer generation of structural models of amorphous Si and Ge

    NASA Astrophysics Data System (ADS)

    Wooten, F.; Winer, K.; Weaire, D.

    1985-04-01

    We have developed and applied a computer algorithm that generates realistic random-network models of a-Si with periodic boundary conditions. These are the first models to have correlation functions that show no serious discrepancy with experiment. The algorithm provides a much-needed systematic approach to model construction that can be used to generate models of a large class of amorphous materials.

  13. Computer model of cardiovascular control system responses to exercise

    NASA Technical Reports Server (NTRS)

    Croston, R. C.; Rummel, J. A.; Kay, F. J.

    1973-01-01

    Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.

  14. Computational models for the nonlinear analysis of reinforced concrete plates

    NASA Technical Reports Server (NTRS)

    Hinton, E.; Rahman, H. H. A.; Huq, M. M.

    1980-01-01

    A finite element computational model for the nonlinear analysis of reinforced concrete solid, stiffened and cellular plates is briefly outlined. Typically, Mindlin elements are used to model the plates whereas eccentric Timoshenko elements are adopted to represent the beams. The layering technique, common in the analysis of reinforced concrete flexural systems, is incorporated in the model. The proposed model provides an inexpensive and reasonably accurate approach which can be extended for use with voided plates.

  15. Computational needs for modelling accelerator components

    SciTech Connect

    Hanerfeld, H.

    1985-06-01

    The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs.

  16. Paradox of integration-A computational model

    NASA Astrophysics Data System (ADS)

    Krawczyk, Małgorzata J.; Kułakowski, Krzysztof

    2017-02-01

    The paradoxical aspect of the integration of a social group was highlighted by Blau (1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire for approval, a sharp transition breaks all friendly relations. However, as described by Blau, people with high status are inclined to care more about the acceptance of others; this is achieved by praising others and revealing their own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  17. Revisions to the hydrogen gas generation computer model

    SciTech Connect

    Jerrell, J.W.

    1992-08-31

    Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by these data is used to improve the accuracy of the model's predictions. Also, several modifications have been made to the model to enlarge the scope of problems which can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.

  18. Optimal allocation of computational resources in hydrogeological models under uncertainty

    NASA Astrophysics Data System (ADS)

    Moslehi, Mahsa; Rajagopal, Ram; de Barros, Felipe P. J.

    2015-09-01

    Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical-numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the
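
    A stylized version of the trade-off analyzed in this abstract can be written down directly. Assuming, for illustration only, that discretization error scales as a*h^p, that statistical error scales as b/sqrt(N) over N Monte Carlo realizations, and that cost scales as N*h^(-d), a simple search finds the (h, N) pair minimizing total error under a budget; none of these constants come from the paper.

        import math

        def total_error(h, n, a=1.0, p=2, b=5.0):
            """Toy error model: discretization term plus Monte Carlo sampling term."""
            return a * h**p + b / math.sqrt(n)

        def optimize(budget=1e7, d=3):
            """Grid-search grid spacings h; spend the rest of the budget on realizations."""
            best = None
            for k in range(1, 60):
                h = 0.5 ** (k / 10)            # candidate grid spacings
                n = int(budget * h**d)         # largest N affordable at this h
                if n >= 2:
                    err = total_error(h, n)
                    if best is None or err < best[0]:
                        best = (err, h, n)
            return best

        err, h, n = optimize()
        print(f"optimal h ~ {h:.3f}, N ~ {n}, total error ~ {err:.4f}")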

  19. The computation of standard solar models

    NASA Technical Reports Server (NTRS)

    Ulrich, Roger K.; Cox, Arthur N.

    1991-01-01

    Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.

  20. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  1. A propagation model of computer virus with nonlinear vaccination probability

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi

    2014-01-01

    This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
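
    A minimal compartment sketch of this general type of model is shown below; the saturating vaccination function and all parameter values are invented for illustration and are not the authors' form. With these numbers the basic reproduction number beta/gamma exceeds one, yet vaccination drives the virus out.

        def simulate(beta=0.5, gamma=0.1, v0=0.2, k=5.0, dt=0.01, t_end=400.0):
            """Susceptible-Infected-Vaccinated model with nonlinear vaccination."""
            s, i, v, t = 0.99, 0.01, 0.0, 0.0
            while t < t_end:
                vacc = v0 * k * i / (1.0 + k * i)   # vaccination probability saturates in i
                ds = gamma * i - beta * s * i - vacc * s
                di = beta * s * i - gamma * i
                dv = vacc * s
                s, i, v = s + dt * ds, i + dt * di, v + dt * dv
                t += dt
            return s, i, v

        s, i, v = simulate()
        print(f"final state: S={s:.3f}, I={i:.5f}, V={v:.3f}")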

  2. Computational Neuroscience: Modeling the Systems Biology of Synaptic Plasticity

    PubMed Central

    Kotaleski, Jeanette Hellgren; Blackwell, Kim T.

    2016-01-01

    Synaptic plasticity is a mechanism proposed to underlie learning and memory. The complexity of the interactions between ion channels, enzymes, and genes involved in synaptic plasticity impedes a deep understanding of this phenomenon. Computer modeling is an approach to investigate the information processing that is performed by signaling pathways underlying synaptic plasticity. In the past few years, new software developments that blend computational neuroscience techniques with systems biology techniques have allowed large-scale, quantitative modeling of synaptic plasticity in neurons. We highlight significant advancements produced by these modeling efforts and introduce promising approaches that utilize advancements in live cell imaging. PMID:20300102

  3. Computational science: shifting the focus from tools to models

    PubMed Central

    Hinsen, Konrad

    2014-01-01

    Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728

  4. Computational modeling of brain tumors: discrete, continuum or hybrid?

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Deisboeck, Thomas S.

    In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.

  5. Computational Modeling of Laser-Cell Biochemical Interactions

    DTIC Science & Technology

    2010-12-31

    The charts (from upper left, across to lower right) present the modeled response in the RPE cell of vitamin C (Asc/AscH) and of reactive oxygen species. Cited: Husinsky, J., Seiser, B., Edthofer, F., Fekete, B., Farmer, L., and Lund, D., "Ex vivo and computer model study on retinal thermal laser-induced damage."

  6. Operation of the computer model for microenvironment solar exposure

    NASA Technical Reports Server (NTRS)

    Gillis, J. R.; Bourassa, R. J.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironmental solar exposure was developed to predict solar exposure to satellite surfaces which may shadow or reflect on one another. This document describes the technical features of the model as well as instructions for the installation and use of the program.

  7. Bootstrapping the Lexicon: A Computational Model of Infant Speech Segmentation.

    ERIC Educational Resources Information Center

    Batchelder, Eleanor Olds

    2002-01-01

    Details BootLex, a model using distributional cues to build a lexicon and achieving significant segmentation results with English, Japanese, and Spanish; child- and adult-directed speech, and written text; and variations in coding structure. Compares BootLex with three groups of computational models of the infant segmentation process. Discusses…

  8. Computer Modelling of Biological Molecules: Free Resources on the Internet.

    ERIC Educational Resources Information Center

    Millar, Neil

    1996-01-01

    Describes a three-dimensional computer modeling system for biological molecules which is suitable for sixth-form teaching. Consists of the modeling program "RasMol" together with structure files of proteins, DNA, and small biological molecules. Describes how the whole system can be downloaded from various sites on the Internet.…

  9. Modeling civil violence: An agent-based computational approach

    PubMed Central

    Epstein, Joshua M.

    2002-01-01

    This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
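
    The core activation rule of the rebellion variant can be sketched compactly: an agent turns "active" when its grievance minus its perceived risk exceeds a threshold. The rendering below is a simplified, non-spatial reading of that rule with illustrative parameters; the published model is spatial, with vision radii, jailing, and movement.

        import math, random

        def sweep(agents, n_cops, legitimacy, threshold=0.1, k=2.3):
            """One synchronous pass of the rule: active iff grievance - net risk > T."""
            n_active = sum(a["active"] for a in agents) or 1
            arrest_p = 1.0 - math.exp(-k * n_cops / n_active)  # crude global estimate
            for a in agents:
                grievance = a["hardship"] * (1.0 - legitimacy)
                net_risk = a["risk_aversion"] * arrest_p
                a["active"] = (grievance - net_risk) > threshold

        rng = random.Random(7)
        agents = [{"hardship": rng.random(), "risk_aversion": rng.random(),
                   "active": False} for _ in range(1000)]
        for legit in (0.9, 0.6, 0.3):
            for _ in range(20):
                sweep(agents, n_cops=40, legitimacy=legit)
            print(f"legitimacy {legit}: {sum(a['active'] for a in agents)} active")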

  10. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  11. Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models

    ERIC Educational Resources Information Center

    Pallant, Amy; Lee, Hee-Sun

    2015-01-01

    Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation…

  12. Industry-Wide Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir (Compiler)

    1995-01-01

    This publication contains the presentations made at the Industry-Wide Workshop on Computational Turbulence Modeling which took place on October 6-7, 1994. The purpose of the workshop was to initiate the transfer of technology developed at Lewis Research Center to industry and to discuss the current status and the future needs of turbulence models in industrial CFD.

  13. Interrogative Model of Inquiry and Computer-Supported Collaborative Learning.

    ERIC Educational Resources Information Center

    Hakkarainen, Kai; Sintonen, Matti

    2002-01-01

    Examines how the Interrogative Model of Inquiry (I-Model), developed for the purposes of epistemology and philosophy of science, could be applied to analyze elementary school students' process of inquiry in computer-supported learning. Suggests that the interrogative approach to inquiry can be productively applied for conceptualizing inquiry in…

  14. Computer Simulation of Small Group Decisions: Model Three.

    ERIC Educational Resources Information Center

    Hare, A.P.; Scheiblechner, Hartmann

    In a test of three computer models to simulate group decisions, data were used from 31 American and Austrian groups on a total of 307 trials. The task for each group was to predict a series of answers of an unknown subject on a value-orientation questionnaire, after being given a sample of his typical responses. The first model used the mean of…

  15. Computational fluid dynamics modeling for emergency preparedness & response

    SciTech Connect

    Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.

    1995-07-01

    Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly-coupled, nonlinear equations whose solution requires a significant level of computational power. Until recently, such computer power could be found only in CRAY-class supercomputers. Recent advances in computer hardware and software have enabled modestly-priced, high performance, workstations to exhibit the equivalent computation power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release and Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust and require minimal time for execution. Such models have been the cornerstones of the ARAC operational system for the past ten years. Kamada (1992) provides a review of diagnostic models and their applications to dispersion problems. However, because these models typically contain little physics beyond mass-conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, can be applied with confidence to only modestly complex flows.

  16. Revisions to the hydrogen gas generation computer model

    SciTech Connect

    Jerrell, J.W.

    1992-08-31

    Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by these data is used to improve the accuracy of the model's predictions. Also, several modifications have been made to the model to enlarge the scope of problems which can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.

  17. Analysis of computational modeling techniques for complete rotorcraft configurations

    NASA Astrophysics Data System (ADS)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady state actuator disk approximation. By transforming the unsteady rotor problem into a steady state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated, but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with efficiencies and limitations of each method.

  18. Applying Performance Models to Understand Data-Intensive Computing Efficiency

    DTIC Science & Technology

    2010-05-01

    Keywords: data-intensive computing, cloud computing, analytical modeling, Hadoop, MapReduce, performance and efficiency.

  19. Identification of Computational and Experimental Reduced-Order Models

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.

    2003-01-01

    The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
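
    The deconvolution step mentioned above — recovering an impulse response from a measured step response, then predicting other responses by convolution — can be sketched for a discrete-time, single-channel case. The first-order "measured" system below is synthetic; this is the generic identification idea, not the authors' CFL3D/OTT pipeline.

        import math

        # Synthetic "measured" step response of an unknown first-order lag.
        alpha = 0.9
        step_resp = [1.0 - alpha ** (n + 1) for n in range(50)]

        # Impulse response by first differences: h[n] = s[n] - s[n-1].
        h = [step_resp[0]] + [step_resp[n] - step_resp[n - 1] for n in range(1, 50)]

        def predict(u, h):
            """Discrete convolution y[n] = sum_k h[k] * u[n-k]."""
            return [sum(h[k] * u[n - k] for k in range(min(n + 1, len(h))))
                    for n in range(len(u))]

        # Predict the response to a sinusoidal pitch input from the identified h.
        u = [math.sin(0.2 * n) for n in range(80)]
        y = predict(u, h)
        print(f"predicted response at n=40: {y[40]:.4f}")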

  20. Computational modelling of memory retention from synapse to behaviour

    NASA Astrophysics Data System (ADS)

    van Rossum, Mark C. W.; Shippi, Maria

    2013-03-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.

  1. Special Issue: Big data and predictive computational modeling

    NASA Astrophysics Data System (ADS)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

    The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  2. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  3. Practical Use of Computationally Frugal Model Analysis Methods

    SciTech Connect

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  4. Practical Use of Computationally Frugal Model Analysis Methods

    DOE PAGES

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; ...

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.

  5. The GOURD model of human-computer interaction

    SciTech Connect

    Goldbogen, G.

    1996-12-31

    This paper presents a model, the GOURD model, that can be used to measure the goodness of "interactivity" of an interface design and qualifies how to improve the design. The GOURD model describes what happens to the computer and to the human during a human-computer interaction. Since the interaction is generally repeated, the repeated traversal of the model is similar to a loop programming structure. Because the model measures interaction over part or all of the application, it can also be used as a classifier of the part or the whole application. But primarily, the model is used as a design guide and a predictor of effectiveness.

  6. Biological networks 101: computational modeling for molecular biologists.

    PubMed

    Scholma, Jetse; Schivo, Stefano; Urquidi Camacho, Ricardo A; van de Pol, Jaco; Karperien, Marcel; Post, Janine N

    2014-01-01

    Computational modeling of biological networks permits the comprehensive analysis of cells and tissues to define molecular phenotypes and novel hypotheses. Although a large number of software tools have been developed, the versatility of these tools is limited by mathematical complexities that prevent their broad adoption and effective use by molecular biologists. This study clarifies the basic aspects of molecular modeling, how to convert data into useful input, as well as the number of time points and molecular parameters that should be considered for molecular regulatory models with both explanatory and predictive potential. We illustrate the necessary experimental preconditions for converting data into a computational model of network dynamics. This model requires neither a thorough background in mathematics nor precise data on intracellular concentrations, binding affinities or reaction kinetics. Finally, we show how an interactive model of crosstalk between signal transduction pathways in primary human articular chondrocytes allows insight into processes that regulate gene expression.

  7. Using GPUs to Meet Next Generation Weather Model Computational Requirements

    NASA Astrophysics Data System (ADS)

    Govett, M.; Hart, L.; Henderson, T.; Middlecoff, J.; Tierney, C.

    2008-12-01

    Weather prediction goals within the Earth System Research Laboratory at NOAA require significant increases in model resolution (~1 km) and forecast durations (~60 days) to support expected requirements in 5 years or less. However, meeting these goals will likely require at least 100k dedicated cores. Few systems will exist that could even run such a large problem, much less house a facility that could provide the necessary power and cooling requirements. To meet our goals we are exploring alternative technologies, including Graphics Processing Units (GPUs), that could provide significantly more computational performance and reduced power and cooling requirements, at a lower cost than traditional high-performance computing solutions. Our current global numerical weather prediction model, the Flow-following finite-volume Icosahedral Model (FIM, http://fim.noaa.gov), is still early in its development but is already demonstrating good fidelity and excellent scalability to 1000s of cores. The icosahedral grid has several complexities not present in more traditional Cartesian grids, including polygons with different numbers of sides (five and six) and non-trivial computation of the locations of neighboring grid cells. FIM uses an indirect addressing scheme that yields very compact code despite these complexities. We have extracted computational kernels that encompass functions likely to take the most time at higher resolutions, including all that have horizontal dependencies. Kernels implement equations for computing anti-diffusive flux-corrected transport across cell edges, calculating forcing terms and time-step differencing, and re-computing time-dependent vertical coordinates. We are extending these kernels to explore performance of GPU-specific optimizations. We will present initial performance results from the computational kernels of the FIM model, as well as the challenges related to porting code with indirect memory references to the NVIDIA GPUs. Results of this

  8. Evaluation of a Computational Model of Situational Awareness

    NASA Technical Reports Server (NTRS)

    Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)

    2000-01-01

    Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.

  9. Learning from humans: computational modeling of face recognition.

    PubMed

    Wallraven, Christian; Schwaninger, Adrian; Bülthoff, Heinrich H

    2005-12-01

    In this paper, we propose a computational architecture of face recognition based on evidence from cognitive research. Several recent psychophysical experiments have shown that humans process faces by a combination of configural and component information. Using an appearance-based implementation of this architecture based on low-level features and their spatial relations, we were able to model aspects of human performance found in psychophysical studies. Furthermore, results from additional computational recognition experiments show that our framework is able to achieve excellent recognition performance even under large view rotations. Our interdisciplinary study is an example of how results from cognitive research can be used to construct recognition systems with increased performance. Finally, our modeling results also make new experimental predictions that will be tested in further psychophysical studies, thus effectively closing the loop between psychophysical experimentation and computational modeling.

  10. Computational ocean acoustics: Advances in 3D ocean acoustic modeling

    NASA Astrophysics Data System (ADS)

    Schmidt, Henrik; Jensen, Finn B.

    2012-11-01

    The numerical models of ocean acoustic propagation developed in the 1980s are still in widespread use today, and computational ocean acoustics is often considered a mature field. However, the explosive increase in computational power available to the community has created opportunities for modeling phenomena that were earlier beyond reach. Most notably, three-dimensional propagation and scattering problems have been computationally prohibitive, but are now addressed routinely using brute-force numerical approaches such as the Finite Element Method, in particular for target scattering problems, where they are being combined with the traditional wave theory propagation models in hybrid modeling frameworks. Also, recent years have seen the development of hybrid approaches coupling oceanographic circulation models with acoustic propagation models, enabling the forecasting of sonar performance uncertainty in dynamic ocean environments. These and other advances made over the last couple of decades support the notion that the field of computational ocean acoustics is far from mature. [Work supported by the Office of Naval Research, Code 321OA].

  11. DNA computation model to solve 0-1 programming problem.

    PubMed

    Zhang, Fengyue; Yin, Zhixiang; Liu, Bo; Xu, Jin

    2004-01-01

    The 0-1 programming problem is an important problem in operations research with widespread applications. In this paper, a new DNA computation model utilizing solution-based and surface-based methods is presented to solve the 0-1 programming problem. This model combines the major benefits of both solution-based and surface-based methods, including vast parallelism, extraordinary information density and ease of operation. The result, verified by biological experimentation, revealed the potential of DNA computation in solving complex programming problems.
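
    For contrast with the molecular approach, the sketch below solves a tiny 0-1 programming instance by the exhaustive enumeration that DNA parallelism carries out chemically; the objective, constraints, and sizes are invented for illustration.

        from itertools import product

        # Maximize c.x subject to A x <= b, with x in {0,1}^n (toy instance).
        c = [4, 2, 7, 1]
        A = [[3, 1, 4, 2],
             [1, 2, 1, 1]]
        b = [7, 4]

        best_val, best_x = None, None
        for x in product((0, 1), repeat=len(c)):   # all 2^n candidate solutions
            feasible = all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
                           for row, bi in zip(A, b))
            if feasible:
                val = sum(cj * xj for cj, xj in zip(c, x))
                if best_val is None or val > best_val:
                    best_val, best_x = val, x
        print(f"optimum x = {best_x}, objective = {best_val}")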

  12. Computational nanomedicine: modeling of nanoparticle-mediated hyperthermal cancer therapy

    PubMed Central

    Kaddi, Chanchala D; Phan, John H; Wang, May D

    2016-01-01

    Nanoparticle-mediated hyperthermia for cancer therapy is a growing area of cancer nanomedicine because of the potential for localized and targeted destruction of cancer cells. Localized hyperthermal effects are dependent on many factors, including nanoparticle size and shape, excitation wavelength and power, and tissue properties. Computational modeling is an important tool for investigating and optimizing these parameters. In this review, we focus on computational modeling of magnetic and gold nanoparticle-mediated hyperthermia, followed by a discussion of new opportunities and challenges. PMID:23914967

  13. Mathematical modelling in the computer-aided process planning

    NASA Astrophysics Data System (ADS)

    Mitin, S.; Bochkarev, P.

    2016-04-01

    This paper presents new approaches to the organization of manufacturing preparation and the mathematical models related to the development of a computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some peculiarities compared to existing computer-aided process planning (CAPP) systems: fully formalized development of the machining operations; the capacity to create and formalize the interrelationships among design, process planning and process implementation; and procedures for taking real manufacturing conditions into consideration. The paper describes the structure of the CAMPP system and presents the mathematical models and methods used to formalize the design procedures.

  14. Computation of eigenfrequencies for equilibrium models including turbulent pressure

    NASA Astrophysics Data System (ADS)

    Sonoi, T.; Belkacem, K.; Dupret, M.-A.; Samadi, R.; Ludwig, H.-G.; Caffau, E.; Mosser, B.

    2017-03-01

    Context. The space-borne missions CoRoT and Kepler have provided a wealth of highly accurate data. However, our inability to properly model the upper-most region of solar-like stars prevents us from making the best of these observations. This problem is called "surface effect" and a key ingredient to solve it is turbulent pressure for the computation of both the equilibrium models and the oscillations. While 3D hydrodynamic simulations help to include properly the turbulent pressure in the equilibrium models, the way this surface effect is included in the computation of stellar oscillations is still subject to uncertainties. Aims: We aim at determining how to properly include the effect of turbulent pressure and its Lagrangian perturbation in the adiabatic computation of the oscillations. We also discuss the validity of the gas-gamma model and reduced gamma model approximations, which have been used to compute adiabatic oscillations of equilibrium models including turbulent pressure. Methods: We use a patched model of the Sun with an inner part constructed by a 1D stellar evolution code (CESTAM) and an outer part by the 3D hydrodynamical code (CO5BOLD). Then, the adiabatic oscillations are computed using the ADIPLS code for the gas-gamma and reduced gamma model approximations and with the MAD code imposing the adiabatic condition on an existing time-dependent convection formalism. Finally, all those results are compared to the observed solar frequencies. Results: We show that the computation of the oscillations using the time-dependent convection formalism in the adiabatic limit improves significantly the agreement with the observed frequencies compared to the gas-gamma and reduced gamma model approximations. Of the components of the perturbation of the turbulent pressure, the perturbation of the density and advection term is found to contribute most to the frequency shift. Conclusions: The turbulent pressure is certainly the dominant factor responsible for the

  15. Computational modeling of drug response with applications to neuroscience.

    PubMed

    Herwig, Ralf

    2014-12-01

    The development of novel high-throughput technologies has opened up the opportunity to deeply characterize patient tissues at various molecular levels and has given rise to a paradigm shift in medicine towards personalized therapies. Computational analysis plays a pivotal role in integrating the various genome data and understanding the cellular response to a drug. Based on that data, molecular models can be constructed that incorporate the known downstream effects of drug-targeted receptor molecules and that predict optimal therapy decisions. In this article, we describe the different steps in the conceptual framework of computational modeling. We review resources that hold information on molecular pathways that build the basis for constructing the model interaction maps, highlight network analysis concepts that have been helpful in identifying predictive disease patterns, and introduce the basic concepts of kinetic modeling. Finally, we illustrate this framework with selected studies related to the modeling of important target pathways affected by drugs.

  16. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
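
    The Akaike ranking mentioned above is easy to sketch independently of the immunological models: given each fitted model's maximized log-likelihood and parameter count, compute AIC = 2k - 2 ln L and normalize the relative likelihoods into weights. The model names and numbers below are invented placeholders.

        import math

        # (name, number of parameters k, maximized log-likelihood ln L) -- invented.
        fits = [("ODE model",         4, -112.3),
                ("DDE model",         5, -108.9),
                ("DDE + extra delay", 6, -108.5)]

        aic = {name: 2 * k - 2 * ll for name, k, ll in fits}
        best = min(aic.values())
        # Akaike weights: relative support for each model given the data.
        rel = {name: math.exp(-(a - best) / 2.0) for name, a in aic.items()}
        z = sum(rel.values())
        for name in sorted(aic, key=aic.get):
            print(f"{name}: AIC = {aic[name]:.1f}, weight = {rel[name] / z:.3f}")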

  17. Simulation models for computational plasma physics: Concluding report

    SciTech Connect

    Hewett, D.W.

    1994-03-05

    In this project, the authors enhanced their ability to numerically simulate bounded plasmas that are dominated by low-frequency electric and magnetic fields. They moved towards this goal in several ways; they are now in a position to play significant roles in the modeling of low-frequency electromagnetic plasmas in several new industrial applications. They have significantly increased their facility with the computational methods invented to solve the low-frequency limit of Maxwell's equations (DiPeso, Hewett, accepted, J. Comp. Phys., 1993). This low-frequency model, called the Streamlined Darwin Field (SDF) model (Hewett, Larson, and Doss, J. Comp. Phys., 1992), has now been implemented in a fully non-neutral SDF code, BEAGLE (Larson, Ph.D. dissertation, 1993), and has been further extended to the quasi-neutral limit (DiPeso, Hewett, Comp. Phys. Comm., 1993). In addition, they have resurrected the quasi-neutral, zero-electron-inertia model (ZMR) and begun the task of incorporating into this model internal boundary conditions with the flexibility of those in GYMNOS, a magnetostatic code now used in ion source work (Hewett, Chen, ICF Quarterly Report, July--September, 1993). Finally, near the end of this project, they invented a new type of banded matrix solver that can be implemented on a massively parallel computer -- thus opening the door for the use of all their ADI schemes on these new computer architectures (Mattor, Williams, Hewett, submitted to Parallel Computing, 1993).

  18. A cognitive computational model inspired by the immune system response.

    PubMed

    Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim

    2014-01-01

    The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that oscillates between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have always stood out as the sought-after model we attempted to imitate in building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases, setting out a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output) and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. The architecture also focuses on the interoperability of the main theoretical immunological perspectives (classic, cognitive, and danger theory) as related to computer science terminology. The paper presents a descriptive model of the immune system, to figure out the nature of the response, deemed intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by natural biology. To that end, the paper highlights the ISR phases as applied to a case study on the hepatitis C virus, while illustrating our proposed architecture perspective.

  19. Mathematical and Computational Modeling of Polymer Exchange Membrane Fuel Cells

    NASA Astrophysics Data System (ADS)

    Ulusoy, Sehribani

    In this thesis, a comprehensive review of fuel cell modeling is given and, based on that review, a general mathematical fuel cell model is developed in order to understand the physical phenomena governing fuel cell behavior and to contribute to efforts investigating optimum performance at different operating conditions as well as with different physical parameters. The steady-state, isothermal model presented here accounts for the combined effects of mass and species transfer, momentum conservation, electrical current distribution through the gas channels, the electrodes and the membrane, and the electrochemical kinetics of the reactions in the anode and cathode catalyst layers. One of the important features of the model is that it proposes a simpler modified pseudo-homogeneous/agglomerate catalyst layer model which takes advantage of the simplicity of pseudo-homogeneous modeling while accounting for the effects of the agglomerates in the catalyst layer by using published experimental geometric parameters. The computation of the general mathematical model can be accomplished in 3D, 2D and 1D with the proper assumptions. Two computational domains are considered in this thesis. The first is a 2D Membrane Electrode Assembly (MEA) model including the modified agglomerate/pseudo-homogeneous catalyst layer modeling with consistent treatment of water transport in the MEA, while the second is a 3D model with different flow field designs: straight, stepped and tapered. COMSOL Multiphysics along with the Batteries and Fuel Cells Module has been used for the 2D and 3D model computations, while the ANSYS FLUENT PEMFC Module has been used only for the 3D two-phase computation. Both models have been validated with experimental data. With the 2D MEA model, the effects of temperature and water content of the membrane as well as the equivalent weight of the membrane on performance have been addressed. 3D COMSOL simulation

  20. Increasing Computational Efficiency of Cochlear Models Using Boundary Layers

    PubMed Central

    Alkhairy, Samiya A.; Shera, Christopher A.

    2016-01-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution to be accurate only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of a foundational nature, and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; and characterization of the results of applying each transformation to a specific physical model, discretization and solution scheme. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case-study physical model that is a linear, passive, transmission-line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than in other models. This is conducted in the frequency domain for multiple frequencies, using a second-order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase the efficiency of a computational traveling-wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
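    The solution scheme named in this record (a second-order finite difference discretization solved by direct elimination in the frequency domain) can be sketched compactly. The Python fragment below solves a hypothetical 1D transmission-line pressure equation; the wavenumber profile, boundary conditions, and all parameter values are illustrative assumptions, not the authors' model.

    import numpy as np

    def solve_pressure(omega, N=400, L=0.035):
        # Direct (non-iterative) solve of P'' + k(x)^2 P = 0 on [0, L],
        # with drive P(0) = 1 and a crude apical termination P(L) = 0,
        # discretized with second-order central differences.
        x = np.linspace(0.0, L, N)
        h = x[1] - x[0]
        # hypothetical complex propagation speed: slows toward the apex,
        # with a small imaginary part standing in for damping
        c = 340.0 * np.exp(-x / 0.01) * (1.0 - 0.05j)
        k2 = (omega / c) ** 2

        A = np.zeros((N, N), dtype=complex)
        b = np.zeros(N, dtype=complex)
        A[0, 0] = 1.0; b[0] = 1.0
        A[-1, -1] = 1.0
        for i in range(1, N - 1):
            A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
            A[i, i] = -2.0 / h**2 + k2[i]
        return x, np.linalg.solve(A, b)      # direct elimination

    x, P = solve_pressure(omega=2.0 * np.pi * 2000.0)
    print(f"peak |P| = {np.abs(P).max():.3f}")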

  1. Increasing computational efficiency of cochlear models using boundary layers

    NASA Astrophysics Data System (ADS)

    Alkhairy, Samiya A.; Shera, Christopher A.

    2015-12-01

    Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution to be accurate only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of a foundational nature, and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; and characterization of the results of applying each transformation to a specific physical model, discretization and solution scheme. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case-study physical model that is a linear, passive, transmission-line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than in other models. This is conducted in the frequency domain for multiple frequencies, using a second-order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase the efficiency of a computational traveling-wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution

  2. Mathematical and Computational Modeling in Complex Biological Systems

    PubMed Central

    Li, Wenyang; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies push systems biology toward more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first survey several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology. PMID:28386558

  3. Category-theoretic models of algebraic computer systems

    NASA Astrophysics Data System (ADS)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  4. Developing a computational model of human hand kinetics using AVS

    SciTech Connect

    Abramowitz, Mark S.

    1996-05-01

    As part of an ongoing effort to develop a finite element model of the human hand at the Institute for Scientific Computing Research (ISCR), this project extended existing computational tools for analyzing and visualizing hand kinetics. These tools employ a commercial, scientific visualization package called AVS. FORTRAN and C code, originally written by David Giurintano of the Gillis W. Long Hansen's Disease Center, was ported to a different computing platform, debugged, and documented. Usability features were added and the code was made more modular and readable. When the code is used to visualize bone movement and tendon paths for the thumb, graphical output is consistent with expected results. However, numerical values for forces and moments at the thumb joints do not yet appear to be accurate enough to be included in ISCR's finite element model. Future work includes debugging the parts of the code that calculate forces and moments and verifying the correctness of these values.

  5. Computational Flow Modeling of Human Upper Airway Breathing

    NASA Astrophysics Data System (ADS)

    Mylavarapu, Goutham

    Computational modeling of biological systems has gained considerable interest in biomedical research in the recent past. This thesis focuses on the application of computational simulations to study airflow dynamics in the human upper respiratory tract. With advancements in medical imaging, patient-specific geometries of anatomically accurate respiratory tracts can now be reconstructed from Magnetic Resonance Images (MRI) or Computed Tomography (CT) scans, with better and more accurate detail than traditional cadaver cast models. Computational studies using these individualized geometrical models have the advantages of non-invasiveness, ease, minimal patient interaction, and improved accuracy over experimental and clinical studies. Numerical simulations can provide detailed flow fields including velocities, flow rates, airway wall pressure, shear stresses, and turbulence in an airway. Interpretation of these physical quantities will enable the development of efficient treatment procedures, medical devices, targeted drug delivery, etc. The hypothesis for this research is that computational modeling can predict the outcomes of a surgical intervention or a treatment plan prior to its application and will guide the physician in providing better treatment to patients. In the current work, three different computational approaches, Computational Fluid Dynamics (CFD), Flow-Structure Interaction (FSI) and Particle Flow simulations, were used to investigate flow in airway geometries. The CFD approach assumes the airway wall is rigid and is relatively easy to simulate, compared to the more challenging FSI approach, where interactions of airway wall deformations with the flow are also accounted for. The CFD methodology using different turbulence models is validated against experimental measurements in an airway phantom. Two CFD case studies are demonstrated: one quantifying a pre- and post-operative airway, and another performing virtual surgery to determine the best possible surgery for a constricted airway. The unsteady

  6. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws as applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the latter is considered as a single hydraulic system. On the basis of multivariant calculations, the reasons for existing problems can be identified, the least costly methods of eliminating them can be proposed, and recommendations can be made for planning the modernization of pipeline systems and the construction of new sections. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on the example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
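    The graph-plus-Kirchhoff formulation in this record has a compact linear-algebra core in the linearized case: flow conservation at each node (Kirchhoff's first law), with flow proportional to pressure drop along each edge, yields a nodal system, and loop pressure balance (the second law) is satisfied automatically by the potential formulation. A minimal Python sketch with a hypothetical four-node network:

    import numpy as np

    # Hypothetical topology and conductances: nodes 0..3, node 0 is the
    # supply at fixed pressure. Each edge carries a hydraulic conductance g
    # so that flow = g * (pressure drop), the direct analogue of Ohm's law.
    edges = [(0, 1, 2.0), (1, 2, 1.5), (1, 3, 1.0), (2, 3, 2.5)]
    n = 4
    G = np.zeros((n, n))                       # nodal conductance matrix
    for i, j, g in edges:
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g

    demand = np.array([0.0, 0.0, -0.3, -0.2])  # withdrawals at nodes 2, 3
    p_supply = 5.0

    # Eliminate the fixed-pressure node and solve for the free nodes.
    free = [1, 2, 3]
    b = demand[free] - G[np.ix_(free, [0])].ravel() * p_supply
    p = np.zeros(n); p[0] = p_supply
    p[free] = np.linalg.solve(G[np.ix_(free, free)], b)

    for i, j, g in edges:
        print(f"flow {i}->{j}: {g * (p[i] - p[j]):+.3f}")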

  7. A Computer Model for Analyzing Volatile Removal Assembly

    NASA Technical Reports Server (NTRS)

    Guo, Boyun

    2010-01-01

    A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which dissolves in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on the pressure and temperature in the reactor and the composition and flow rate of the influent.

  8. Systems Biology in Immunology – A Computational Modeling Perspective

    PubMed Central

    Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.

    2011-01-01

    Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data-gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators and why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182

  9. Cartographic Modeling: Computer-assisted Analysis of Spatially Defined Neighborhoods

    NASA Technical Reports Server (NTRS)

    Berry, J. K.; Tomlin, C. D.

    1982-01-01

    Cartographic models addressing a wide variety of applications are composed of fundamental map processing operations. These primitive operations are neither database- nor application-specific. By organizing the set of operations into a mathematical-like structure, the basis for a generalized cartographic modeling framework can be developed. Among the major classes of primitive operations are those associated with reclassifying map categories, overlaying maps, determining distance and connectivity, and characterizing cartographic neighborhoods. The conceptual framework of cartographic modeling is established, and techniques for characterizing neighborhoods are used as a means of demonstrating some of the more sophisticated procedures of computer-assisted map analysis. A cartographic model for assessing effective roundwood supply is briefly described as an example of a computer analysis. Most of the techniques described have been implemented as part of the map analysis package developed at the Yale School of Forestry and Environmental Studies.

  10. Computational modeling of the human atrial anatomy and electrophysiology.

    PubMed

    Dössel, Olaf; Krueger, Martin W; Weber, Frank M; Wilhelms, Mathias; Seemann, Gunnar

    2012-08-01

    This review article gives a comprehensive survey of the progress made in computational modeling of the human atria during the last 10 years. Anatomical modeling has evolved from simple "peanut"-like structures to very detailed models including atrial wall and fiber direction. Electrophysiological modeling started with just two cellular models in 1998. Today, five models exist, considering, e.g., details of intracellular compartments and atrial heterogeneity. On the pathological side, modeling atrial remodeling and fibrotic tissue are the other important aspects. The bridge to data measured in the catheter laboratory and on the body surface (ECG) is under construction. Every measurement can be used either for model personalization or for validation. Potential clinical applications are briefly outlined and future research perspectives are suggested.

  11. Computer modeling of lung cancer diagnosis-to-treatment process.

    PubMed

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U; Yu, Xinhua; Faris, Nick; Li, Jingshan

    2015-08-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the data and procedures necessary to develop a DES model of the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their application in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed.
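    As a hedged illustration of the Markov chain piece of this record: for an absorbing chain, the expected time from each transient state to treatment follows from the fundamental matrix N = (I - Q)^-1. The states, weekly transition probabilities, and interpretation below are hypothetical, not taken from the paper.

    import numpy as np

    # Hypothetical weekly transition matrix over care-process states
    # 0: referral, 1: diagnostic workup, 2: staging, 3: treatment (absorbing);
    # diagonal entries model delays at each stage.
    P = np.array([[0.4, 0.6, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.1, 0.3, 0.6],
                  [0.0, 0.0, 0.0, 1.0]])

    Q = P[:3, :3]                        # transient-to-transient block
    N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix
    t = N @ np.ones(3)                   # expected weeks until treatment
    print(dict(zip(["referral", "workup", "staging"], t.round(2))))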

  12. Computer simulations for internal dosimetry using voxel models.

    PubMed

    Kinase, Sakae; Mohammadi, Akram; Takahashi, Masa; Saito, Kimiaki; Zankl, Maria; Kramer, Richard

    2011-07-01

    In the Japan Atomic Energy Agency, several studies have been conducted on the use of voxel models for internal dosimetry. Absorbed fractions (AFs) and S values have been evaluated for preclinical assessments of radiopharmaceuticals using human voxel models and a mouse voxel model. Computational calibration of an in vivo measurement system has also been performed using Japanese and Caucasian voxel models. In addition, for radiation protection of the environment, AFs have been evaluated using a frog voxel model. Each study was performed using Monte Carlo simulations. It was concluded that Monte Carlo simulations with these voxel models could adequately reproduce measurement results. Voxel models were found to be a significant tool for internal dosimetry since the models are anatomically realistic. This indicates that several studies on correcting in vivo measurement efficiency for the variability of human subjects and on interspecies scaling of organ doses will succeed.

  13. Computer modeling of lung cancer diagnosis-to-treatment process

    PubMed Central

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick

    2015-01-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the data and procedures necessary to develop a DES model of the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their application in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181

  14. Theoretical Investigation of Optical Computing Based on Neural Network Models.

    DTIC Science & Technology

    1987-09-29

    "Cognitive and Psychological Computation with Neural Models," IEEE Trans. Sys., Man, and Cyber., SMC-13, p. 799, 1983. K. Nakano, "Association-A..." (7), 482 (1986). F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, Washington (1961).

  15. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    SciTech Connect

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-07-01

    A method of accounting for fluid-to-fluid shear between calculational cells over the wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area of the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile, which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal-hydraulic systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with this flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
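    A loose illustration of the central idea, under stated assumptions rather than the authors' COBRA-TF closure: given an input laminar velocity profile, one can back out a per-cell equivalent hydraulic diameter that makes a standard laminar wall-shear relation reproduce the profile's wall shear, then feed it to a friction-factor drag law.

    import numpy as np

    rho, mu = 1000.0, 1.0e-3            # water-like fluid (assumed)

    def equivalent_hydraulic_diameter(y, u):
        # Choose D_h so fully developed laminar pipe flow with the
        # profile's mean velocity matches the profile's wall shear.
        u_mean = u.mean()                               # uniform grid assumed
        tau_wall = mu * (u[1] - u[0]) / (y[1] - y[0])   # mu * du/dy at the wall
        # laminar pipe relation: tau_wall = 8 * mu * u_mean / D_h
        return 8.0 * mu * u_mean / tau_wall

    y = np.linspace(0.0, 0.01, 201)                # wall-normal coordinate, m
    u = 0.2 * (1.0 - ((y - 0.005) / 0.005) ** 2)   # parabolic channel profile, m/s
    D_h = equivalent_hydraulic_diameter(y, u)
    Re = rho * u.mean() * D_h / mu
    print(f"D_h = {1e3 * D_h:.2f} mm, f = {64.0 / Re:.4f}")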

  16. Implementing and Assessing Computational Modeling in Introductory Mechanics

    ERIC Educational Resources Information Center

    Caballero, Marcos D.; Kohlmyer, Matthew A.; Schatz, Michael F.

    2012-01-01

    Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term, 1357 students in this course solved a suite of 14 computational…

  17. Affective Responses and Cognitive Models of the Computing Environment.

    ERIC Educational Resources Information Center

    Wallace, Andrew R.; Sinclair, Kenneth E.

    New electronic technologies provide powerful tools for managing and processing the rapidly increasing amounts of information available for learning; teachers, however, have often been slow in integrating computers into the curriculum. This study addresses the question of how prospective teachers construct affective and cognitive models about…

  18. Molecular Modeling and Computational Chemistry at Humboldt State University.

    ERIC Educational Resources Information Center

    Paselk, Richard A.; Zoellner, Robert W.

    2002-01-01

    Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…

  19. Mathematical and computer modeling of component surface shaping

    NASA Astrophysics Data System (ADS)

    Lyashkov, A.

    2016-04-01

    The process of shaping technical surfaces is an interaction of a tool (a shape element) and a component (a formable element or workpiece) in their relative movements. It was established that the main objects of formation are: 1) a discriminant of a family of surfaces, formed by the movement of the shape element relative to the workpiece; 2) an enveloping model of the real component surface obtained after machining, including transition curves and undercut lines; and 3) a model of the cut-off layers obtained in the process of shaping. When modeling shaping objects, there are many insufficiently solved or unsolved issues that together make up a single scientific problem: the qualitative shaping of the tool surface and, in turn, of the component surface produced by that tool. The improvement of known metal-cutting tools and the intensive development of systems for their computer-aided design require further improvement of the methods of shaping the mating surfaces. In this regard, an important role is played by the study of the processes of shaping technical surfaces using the positive aspects of analytical and numerical mathematical methods and techniques associated with mathematical and computer modeling. The author of the paper has posed and solved the problem of developing the mathematical, geometric and algorithmic support for computer-aided design of cutting tools based on computer simulation of the surface shaping process.

  20. A Computational Model of Linguistic Humor in Puns

    ERIC Educational Resources Information Center

    Kao, Justine T.; Levy, Roger; Goodman, Noah D.

    2016-01-01

    Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we…

  1. Scratch as a Computational Modelling Tool for Teaching Physics

    ERIC Educational Resources Information Center

    Lopez, Victor; Hernandez, Maria Isabel

    2015-01-01

    The Scratch online authoring tool, which features a simple programming language that has been adapted to primary and secondary students, is being used more and more in schools as it offers students and teachers the opportunity to use a tool to build scientific models and evaluate their behaviour, just as can be done with computational modelling…

  2. Computer Modeling of Carbon Metabolism Enables Biofuel Engineering (Fact Sheet)

    SciTech Connect

    Not Available

    2011-09-01

    In an effort to reduce the cost of biofuels, the National Renewable Energy Laboratory (NREL) has merged biochemistry with modern computing and mathematics. The result is a model of carbon metabolism that will help researchers understand and engineer the process of photosynthesis for optimal biofuel production.

  3. A Computational Model of Early Argument Structure Acquisition

    ERIC Educational Resources Information Center

    Alishahi, Afra; Stevenson, Suzanne

    2008-01-01

    How children go about learning the general regularities that govern language, as well as keeping track of the exceptions to them, remains one of the challenging open questions in the cognitive science of language. Computational modeling is an important methodology in research aimed at addressing this issue. We must determine appropriate learning…

  4. A Computer Model of the Cardiovascular System for Effective Learning.

    ERIC Educational Resources Information Center

    Rothe, Carl F.

    1979-01-01

    Described is a physiological model which solves a set of interacting, possibly nonlinear, differential equations through numerical integration on a digital computer. Sample printouts are supplied and explained for effects on the components of a cardiovascular system when exercise, hemorrhage, and cardiac failure occur. (CS)

  5. An Empirical Generative Framework for Computational Modeling of Language Acquisition

    ERIC Educational Resources Information Center

    Waterfall, Heidi R.; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-01-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of…

  6. Design Model for Learner-Centered, Computer-Based Simulations.

    ERIC Educational Resources Information Center

    Hawley, Chandra L.; Duffy, Thomas M.

    This paper presents a model for designing computer-based simulation environments within a constructivist framework for the K-12 school setting. The following primary criteria for the development of simulations are proposed: (1) the problem needs to be authentic; (2) the cognitive demand in learning should be authentic; (3) scaffolding supports a…

  7. Computational Modelling and Simulation Fostering New Approaches in Learning Probability

    ERIC Educational Resources Information Center

    Kuhn, Markus; Hoppe, Ulrich; Lingnau, Andreas; Wichmann, Astrid

    2006-01-01

    Discovery learning in mathematics in the domain of probability based on hands-on experiments is normally limited because of the difficulty in providing sufficient materials and data volume in terms of repetitions of the experiments. Our cooperative, computational modelling and simulation environment engages students and teachers in composing and…

  8. Computational Models of Relational Processes in Cognitive Development

    ERIC Educational Resources Information Center

    Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven

    2012-01-01

    Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…

  9. Computational analysis of semi-span model test techniques

    NASA Technical Reports Server (NTRS)

    Milholen, William E., II; Chokani, Ndaona

    1996-01-01

    A computational investigation was conducted to support the development of a semi-span model test capability in the NASA LaRC's National Transonic Facility. This capability is required for the testing of high-lift systems at flight Reynolds numbers. A three-dimensional Navier-Stokes solver was used to compute the low-speed flow over both a full-span configuration and a semi-span configuration. The computational results were found to be in good agreement with the experimental data. The computational results indicate that the stand-off height has a strong influence on the flow over a semi-span model. The semi-span model adequately replicates the aerodynamic characteristics of the full-span configuration when a small stand-off height, approximately twice the tunnel empty sidewall boundary layer displacement thickness, is used. Several active sidewall boundary layer control techniques were examined including: upstream blowing, local jet blowing, and sidewall suction. Both upstream tangential blowing, and sidewall suction were found to minimize the separation of the sidewall boundary layer ahead of the semi-span model. The required mass flow rates are found to be practicable for testing in the NTF. For the configuration examined, the active sidewall boundary layer control techniques were found to be necessary only near the maximum lift conditions.

  10. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
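    The batch-processing pattern described in this record, many independent stochastic realizations distributed over workers, can be sketched with Python's standard library in place of the Java Parallel Processing Framework; the per-realization "model" below is a placeholder, not a MODFLOW run.

    import numpy as np
    from multiprocessing import Pool

    def run_realization(seed):
        # Placeholder for one stochastic model run; in the paper each
        # realization is a full MODFLOW forward model on a random field.
        rng = np.random.default_rng(seed)
        logk = rng.normal(-4.0, 0.5, size=1000)     # random log-conductivity
        return float(np.exp(logk).mean())           # e.g., an upscaled statistic

    if __name__ == "__main__":
        with Pool(processes=10) as pool:            # 10 workers, cf. the cluster
            results = pool.map(run_realization, range(500))   # 500 realizations
        print(f"ensemble mean = {np.mean(results):.3e}")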

  11. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-238U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  12. Computational modeling of multispectral remote sensing systems: Background investigations

    NASA Technical Reports Server (NTRS)

    Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic process of remote sensing has been developed based upon the results of the investigations presented. The model is used in studying concepts for improving worldwide environment and resource monitoring. A review of various atmospheric radiative transfer models is presented as well as details of the selected model. Functional forms for spectral diffuse reflectance with variability introduced are also presented. A cloud detection algorithm and the stochastic nature of remote sensing data with its implications are considered.

  13. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1983-06-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  14. Cognitive control in majority search: a computational modeling approach.

    PubMed

    Wang, Hongbin; Liu, Xun; Fan, Jin

    2011-01-01

    Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain for computing the majority function.
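    The two search algorithms contrasted in this record can be written down directly; the sketch below is a minimal Python rendering of the verbal descriptions above (sampling until a congruent subgroup versus scanning and counting until the majority is certain), not the authors' neural network implementation.

    import random

    def grouping_search(items, k=3, rng=random):
        # Sample a subgroup of k items, re-sampling until all agree.
        while True:
            sample = rng.sample(items, k)
            if len(set(sample)) == 1:
                return sample[0]

    def self_terminating_search(items):
        # Scan and count one-by-one; stop once a majority is certain.
        counts, n = {}, len(items)
        for item in items:
            counts[item] = counts.get(item, 0) + 1
            if counts[item] > n // 2:       # majority already decided
                return item
        return max(counts, key=counts.get)

    items = ["L"] * 5 + ["R"] * 3           # a 5-vs-3 majority trial
    random.shuffle(items)
    print(grouping_search(items), self_terminating_search(items))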

  15. Subject-Specific Computational Modeling of Evoked Rabbit Phonation

    PubMed Central

    Chang, Siyuan; Novaleski, Carolyn K.; Kojima, Tsuyoshi; Mizuta, Masanobu; Luo, Haoxiang; Rousseau, Bernard

    2016-01-01

    When developing a high-fidelity computational model of vocal fold vibration for voice production in individuals, one runs into the typical issues of unknown model parameters and of validating the model against individual-specific characteristics of phonation. In the current study, evoked rabbit phonation is adopted to explore some of these issues. In particular, the mechanical properties of the rabbit's vocal fold tissue are unknown for individual subjects. In the model, we couple a 3D vocal fold model that is based on the magnetic resonance (MR) scan of the rabbit larynx with a simple one-dimensional (1D) model of the glottal airflow to perform fast simulations of the vocal fold dynamics. This hybrid three-dimensional (3D)/1D model is then used along with the experimental measurements of each individual subject to determine the vocal fold properties. The vibration frequency and deformation amplitude from the final model match reasonably well for individual subjects. The modeling and validation approaches adopted here could be useful for future development of subject-specific computational models of vocal fold vibration. PMID:26592748

  16. Cognitive Control in Majority Search: A Computational Modeling Approach

    PubMed Central

    Wang, Hongbin; Liu, Xun; Fan, Jin

    2011-01-01

    Despite the importance of cognitive control in many cognitive tasks involving uncertainty, the computational mechanisms of cognitive control in response to uncertainty remain unclear. In this study, we develop biologically realistic neural network models to investigate the instantiation of cognitive control in a majority function task, where one determines the category to which the majority of items in a group belong. Two models are constructed, both of which include the same set of modules representing task-relevant brain functions and share the same model structure. However, with a critical change of a model parameter setting, the two models implement two different underlying algorithms: one for grouping search (where a subgroup of items are sampled and re-sampled until a congruent sample is found) and the other for self-terminating search (where the items are scanned and counted one-by-one until the majority is decided). The two algorithms hold distinct implications for the involvement of cognitive control. The modeling results show that while both models are able to perform the task, the grouping search model fit the human data better than the self-terminating search model. An examination of the dynamics underlying model performance reveals how cognitive control might be instantiated in the brain for computing the majority function. PMID:21369357

  17. Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model

    DTIC Science & Technology

    2006-08-01

    This isolates the skateboard as the one that doesn't belong. Certain automatic, attention-shifting mechanisms will be required in our model. Robert G. Eggleston, report AFRL-HE-WP-TR-2006-0160.

  18. Mathematical model partitioning and packing for parallel computer calculation

    NASA Technical Reports Server (NTRS)

    Arpasi, Dale J.; Milner, Edward J.

    1986-01-01

    This paper deals with the development of multiprocessor simulations from a serial set of ordinary differential equations describing a physical system. The identification of computational parallelism within the model equations is discussed. A technique is presented for identifying this parallelism and for partitioning the equations for parallel solution on a multiprocessor. Next, an algorithm which packs the equations into a minimum number of processors is described. The results of applying the packing algorithm to a turboshaft engine model are presented.
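    The packing step described in this record is, at its core, a bin-packing problem: per-equation compute costs must fit within each processor's update-frame budget using as few processors as possible. A first-fit-decreasing sketch in Python, with hypothetical task costs (the paper's exact algorithm may differ):

    def pack(tasks, capacity):
        # First-fit-decreasing: place each cost (largest first) into the
        # first processor with room; open a new processor when none fits.
        processors = []                     # [remaining budget, [task names]]
        for name, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
            for proc in processors:
                if cost <= proc[0]:
                    proc[0] -= cost
                    proc[1].append(name)
                    break
            else:
                processors.append([capacity - cost, [name]])
        return [names for _, names in processors]

    # Hypothetical per-block derivative-evaluation costs (microseconds)
    # against a 100-microsecond update-frame budget.
    tasks = {"spool": 60, "combustor": 40, "duct": 35, "load": 30, "sensors": 20}
    for i, names in enumerate(pack(tasks, capacity=100)):
        print(f"processor {i}: {names}")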

  19. Comparison of Numerical Models for Computing Underwater Light Fields

    DTIC Science & Technology

    1994-04-01

    Depth integration of the Riccati equations is an analytic process, and thus there are no Monte Carlo fluctuations in the computed radiances. Model MC1 [Monte Carlo (Gordon)]: this model simulates radiative transfer in both the ocean and the atmosphere, as coupled... References include Kattawar and Adams, "Radiative transfer in spherical shell atmospheres. II. Asymmetric phase functions," Icarus; R. Stavn and A...

  20. A computer model of global thermospheric winds and temperatures

    NASA Technical Reports Server (NTRS)

    Killeen, T. L.; Roble, R. G.; Spencer, N. W.

    1987-01-01

    Output data from the NCAR Thermospheric GCM and a vector-spherical-harmonic (VSH) representation of the wind field are used in constructing a computer model of time-dependent global horizontal vector neutral wind and temperature fields at altitude 130-300 km. The formulation of the VSH model is explained in detail, and some typical results obtained with a preliminary version (applicable to December solstice at solar maximum) are presented graphically. Good agreement with DE-2 satellite measurements is demonstrated.

  1. A User Modelling Approach for Computer-Based Critiquing

    DTIC Science & Technology

    1990-01-01

    9.1.2 Explicit Acquisition Methods; 9.1.3 Tutoring-based Methods; 9.1.4 Statistical Analysis of User's... ...accomplish the second process are the subject of this research. Cooperative problem solving systems assume that the third process is inherent in the process of human-computer interaction. The second class of models above are psychological models developed by and for the analysis of human behavior

  2. Computer modelling of the Hamamatsu R11410-20 PMT

    NASA Astrophysics Data System (ADS)

    Akimov, D. Yu; Kozlova, E. S.; Melikyan, Y. A.

    2017-01-01

    A computer model of the operation of the Hamamatsu R11410-20 photomultiplier, based on the SPICE software package, has been developed. The PMT amplification process is simulated with voltage- and current-controlled current sources. Boundaries of the linear zone were obtained for the high-anode-current (relative to the base current) operating regime. The results of the simulation are in reasonable agreement with the experimentally measured PMT characteristics. The model can be used for simulation of any type of PMT.
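    A back-of-the-envelope Python analogue of what such a model captures (illustrative assumptions only, not the SPICE netlist): per-stage secondary-emission gain rising as a power law of the inter-dynode voltage, and a crude linearity flag comparing anode current to the divider current.

    import numpy as np

    # Assumed gain law: secondary emission per stage delta_i = a * V_i**alpha,
    # so the total gain is the product over the dynode stages.
    a, alpha = 0.2, 0.7
    ratios = np.array([4.0, 1.5] + [1.0] * 9 + [2.0])   # assumed divider ratios
    V_total = 1500.0
    V = V_total * ratios / ratios.sum()                 # inter-stage voltages
    gain = np.prod(a * V**alpha)
    print(f"total gain ~ {gain:.2e}")

    # Rule-of-thumb linearity flag: the anode output stays linear only while
    # the mean anode current is small next to the divider (base) current.
    R_chain, I_anode = 13e6, 20e-6                      # assumed values
    I_divider = V_total / R_chain
    print("linear" if I_anode < 0.1 * I_divider else "approaching saturation")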

  3. Enabling computer models of the heart for high-performance computers and the grid.

    PubMed

    Pitt-Francis, Joe; Garny, Alan; Gavaghan, David

    2006-06-15

    Although it is now feasible to compute multi-cellular models of the heart on a personal desktop or laptop computer, it is not feasible to undertake the detailed sweeps of high-dimensional parameter spaces required if we are to perform in silico experimentation on the complex processes that constitute heart disease. For this research, modelling requirements move rapidly beyond the limits of commodity computers' resources, both in terms of memory footprint and speed of calculation, so that multi-processor architectures must be considered. In addition, as such models have become more mature and have been validated against experimental data, there is increasing pressure for experimentalists to be able to make use of these models themselves as a key tool for hypothesis formulation and in planning future experimental studies to test those hypotheses. This paper discusses our initial experiences in a large-scale project (the Integrative Biology (IB) e-Science project) aimed at meeting these dual aims. We begin by putting the research in context by describing in outline the overall aims of the IB project, in particular focusing on the challenge of enabling novice users to make full use of high-performance resources without the need to gain detailed technical expertise in computing. We then discuss our experience of adapting one particular heart modelling package, Cellular Open Resource, and show how the solving engine of this code was dissected from the rest of the package, ported to C++ and parallelized using the Message-Passing Interface. We show that good parallel efficiency and realistic memory reduction can be achieved on simple geometries. We conclude by discussing lessons learnt in this process.
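    The parallelization pattern described here, a solving engine split across MPI ranks, can be miniaturized as follows. This toy mpi4py sketch advances a 1D diffusion slab per rank with ghost-cell exchange; it is an analogue of the approach, not the Cellular Open Resource code, and all numbers are arbitrary.

    # Run with e.g.: mpiexec -n 4 python cable_demo.py   (requires mpi4py)
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local, D, dt, dx = 100, 1.0, 1e-3, 0.1   # slab size and scheme constants
    v = np.zeros(n_local + 2)                  # interior cells + 2 ghost cells
    if rank == 0:
        v[1:6] = 1.0                           # initial stimulus at the left end

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for _ in range(500):
        lv = comm.sendrecv(v[1], dest=left, source=left)    # halo exchange
        rv = comm.sendrecv(v[-2], dest=right, source=right)
        v[0] = v[1] if lv is None else lv      # no-flux at true domain ends
        v[-1] = v[-2] if rv is None else rv
        # explicit diffusion update on interior points
        v[1:-1] += D * dt / dx**2 * (v[2:] - 2.0 * v[1:-1] + v[:-2])

    total = comm.reduce(v[1:-1].sum(), op=MPI.SUM, root=0)
    if rank == 0:
        print(f"total activity (conserved): {total:.3f}")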

  4. A System Computational Model of Implicit Emotional Learning

    PubMed Central

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available in the literature, a computational model of implicit emotional learning based both on prediction-error computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of traumatic emotional responses, and (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotion modulation. PMID:27378898
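    For orientation, the classic delta-rule (Rescorla-Wagner) update is the baseline prediction-error computation that such models generalize; on its own it extinguishes readily, which is exactly the behavior the record says needs augmenting with statistical inference. A minimal sketch:

    # Classic delta-rule baseline (not the paper's full model): V tracks the
    # expected outcome via prediction errors and, unlike the paper's model,
    # extinguishes completely once the unconditioned stimulus is withheld.
    alpha = 0.15                                # learning rate
    trials = ["CS+US"] * 30 + ["CS"] * 30       # acquisition, then extinction
    V, history = 0.0, []
    for trial in trials:
        us = 1.0 if "US" in trial else 0.0      # unconditioned stimulus present?
        V += alpha * (us - V)                   # (us - V) is the prediction error
        history.append(V)
    print(f"after acquisition: {history[29]:.2f}; after extinction: {history[-1]:.2f}")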

  5. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    SciTech Connect

    Wang, P; Song, Y T; Chao, Y; Zhang, H

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.

  6. Frequency response modeling and control of flexible structures: Computational methods

    NASA Technical Reports Server (NTRS)

    Bennett, William H.

    1989-01-01

    The dynamics of vibrations in flexible structures can be conveniently modeled in terms of frequency response models. For structural control, such models capture the distributed parameter dynamics of the elastic structural response as an irrational transfer function. For most flexible structures arising in aerospace applications, the irrational transfer functions which arise are of a special class of pseudo-meromorphic functions which have only a finite number of right half-plane poles. Computational algorithms are demonstrated for the design of multiloop control laws for such models based on optimal Wiener-Hopf control of the frequency responses. The algorithms employ a sampled-data representation of irrational transfer functions which is particularly attractive for numerical computation. One key algorithm for the solution of the optimal control problem is the spectral factorization of an irrational transfer function. The basis for the spectral factorization algorithm is highlighted, together with associated computational issues arising in optimal regulator design. Options for the implementation of wide-band vibration control for flexible structures based on the sampled-data frequency response models are also highlighted. A simple flexible structure control example is considered to demonstrate the combined frequency response modeling and control algorithms.

  7. Computer model to simulate testing at the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Owens, Lewis R., Jr.; Wahls, Richard A.; Hannon, Judith A.

    1995-01-01

    A computer model has been developed to simulate the processes involved in the operation of the National Transonic Facility (NTF), a large cryogenic wind tunnel at the Langley Research Center. The simulation was verified by comparing the simulated results with previously acquired data from three experimental wind tunnel test programs in the NTF. The comparisons suggest that the computer model simulates reasonably well the processes that determine the liquid nitrogen (LN2) consumption, electrical consumption, fan-on time, and the test time required to complete a test plan at the NTF. From these limited comparisons, it appears that the results from the simulation model are generally within about 10 percent of the actual NTF test results. The use of actual data acquisition times in the simulation produced better estimates of the LN2 usage, as expected. Additional comparisons are needed to refine the model constants. The model will typically produce optimistic results since the times and rates included in the model are typically the optimum values. Any deviation from the optimum values will lead to longer times or increased LN2 and electrical consumption for the proposed test plan. Computer code operating instructions and listings of sample input and output files have been included.

  8. Computational aeroelastic modelling of airframes and turbomachinery: progress and challenges.

    PubMed

    Bartels, R E; Sayma, A I

    2007-10-15

    Computational analyses such as computational fluid dynamics and computational structural dynamics have made major advances towards maturity as engineering tools. Computational aeroelasticity (CAE) is the integration of these disciplines. As CAE matures, it also finds an increasing role in the design and analysis of aerospace vehicles. This paper presents a survey of the current state of CAE with a discussion of recent research, successes, and continuing challenges in its progressive integration into multidisciplinary aerospace design. It approaches CAE from the perspective of the two main areas of application: airframe and turbomachinery design. An overview is presented of the different prediction methods used in each field of application. Differing levels of nonlinear modelling are discussed, with insight into accuracy versus complexity and computational requirements. Subjects include current advanced methods (linear and nonlinear), nonlinear flow models, the use of order-reduction techniques, and future trends in incorporating structural nonlinearity. Examples in which CAE is currently being integrated into the design of airframes and turbomachinery are presented.

  9. Challenges for the CMS computing model in the first year

    SciTech Connect

    Fisk, I.; /Fermilab

    2009-05-01

    CMS is in the process of commissioning a complex detector and a globally distributed computing infrastructure simultaneously. This represents a unique challenge. Even at the beginning, there are not sufficient analysis or organized processing resources at CERN alone. In this presentation we discuss the unique computing challenges CMS expects to face during the first year of running and how they influence the baseline computing model decisions. During the early accelerator commissioning periods, CMS will attempt to collect as many events as possible when the beam is on in order to provide adequate early commissioning data. Some of these plans involve overdriving the Tier-0 infrastructure during data collection with recovery when the beam is off. In addition to the larger number of triggered events, there will be pressure in the first year to collect and analyze more complete data formats as the summarized formats mature. The large event formats impact the required storage, bandwidth, and processing capacity across all the computing centers. While the understanding of the detector and the event selections is being improved, there will likely be a larger number of reconstruction passes and skims performed by both central operations and individual users. We discuss how these additional stresses impact the allocation of resources and the changes from the baseline computing model.

  10. A Series of Molecular Dynamics and Homology Modeling Computer Labs for an Undergraduate Molecular Modeling Course

    ERIC Educational Resources Information Center

    Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.

    2010-01-01

    As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…

  11. Computational Modeling of Semiconductor Dynamics at Femtosecond Time Scales

    NASA Technical Reports Server (NTRS)

    Agrawal, Govind P.; Goorjian, Peter M.

    1998-01-01

    The Interchange No. NCC2-5149 deals with the emerging technology of photonic (or optoelectronic) integrated circuits (PICs or OEICs). In PICs, optical and electronic components are grown together on the same chip. To build such devices and subsystems, one needs to model the entire chip. PICs are useful for building components for integrated optical transmitters, integrated optical receivers, optical data storage systems, optical interconnects, and optical computers. For example, the current commercial rate for optical data transmission is 2.5 gigabits per second, whereas the use of shorter pulses to improve optical transmission rates would yield an increase of 400 to 1000 times. The improved optical data transmitters would be used in telecommunications networks and computer local-area networks. These components can also be applied to activities in space, such as satellite-to-satellite communications, when the data transmissions are made at optical frequencies. The research project consisted of developing accurate computer modeling of electromagnetic wave propagation in semiconductors. Such modeling is necessary for the successful development of PICs. More specifically, these computer codes would enable the modeling of such devices, including their subsystems, such as semiconductor lasers and semiconductor amplifiers in which there is femtosecond pulse propagation. Presently, there are no computer codes that provide this modeling. Current codes solve neither the full vector, nonlinear Maxwell's equations, which are required for these short pulses, nor the semiconductor Bloch equations, which are required to accurately describe the material's interaction with femtosecond pulses. The research performed under NCC2-5149 solves the combined Maxwell and Bloch equations.
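
    For a flavor of the field-propagation half of such a code, here is a deliberately minimal 1-D linear FDTD (Yee) update in normalized units. The actual problem described above requires the full vector, nonlinear Maxwell equations coupled to the semiconductor Bloch equations, which is far beyond this sketch; everything here is a generic illustration.

```python
import numpy as np

def fdtd_1d(steps=1000, n=400, courant=0.5):
    """Normalized 1-D FDTD: leapfrog updates of E and H on a staggered
    grid, with a soft Gaussian source injected at one point."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += courant * (ez[1:] - ez[:-1])   # H update from curl E
        ez[1:] += courant * (hy[1:] - hy[:-1])    # E update from curl H
        ez[n // 4] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft source
    return ez
```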

  12. How computational models can help unlock biological systems.

    PubMed

    Brodland, G Wayne

    2015-12-01

    With computational models playing an ever-increasing role in the advancement of science, it is important that researchers understand what it means to model something; recognize the implications of the conceptual, mathematical and algorithmic steps of model construction; and comprehend what models can and cannot do. Here, we use examples to show that models can serve a wide variety of roles, including hypothesis testing, generating new insights, deepening understanding, suggesting and interpreting experiments, tracing chains of causation, doing sensitivity analyses, integrating knowledge, and inspiring new approaches. We show that models can bring together information of different kinds and do so across a range of length scales, as they do in multi-scale, multi-faceted embryogenesis models, some of which connect gene expression, the cytoskeleton, cell properties, tissue mechanics, morphogenetic movements and phenotypes. Models cannot replace experiments nor can they prove that particular mechanisms are at work in a given situation. But they can demonstrate whether or not a proposed mechanism is sufficient to produce an observed phenomenon. Although the examples in this article are taken primarily from the field of embryo mechanics, most of the arguments and discussion are applicable to any form of computational modelling.

  13. Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.

    2010-12-01

    Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg, 2009; Morgan 2002, 2003, 2005; Guala 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science, as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments; which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between

  14. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for

  15. Team Modelling: Review of Experimental Scenarios and Computational Models

    DTIC Science & Technology

    2006-09-01

    [Record text garbled in extraction: only fragments of a comparison table of experimental scenarios and computational team models survive, referencing C3TRACE (Command, Control, and Communication ...), EADSIM (which models radar sensors, satellites, C2 structures, jammers, communications networks and devices, and fire support), and Soar, noted as under development for over 20 years and used in major military ...]

  16. A tool for modeling concurrent real-time computation

    NASA Technical Reports Server (NTRS)

    Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.

    1990-01-01

    Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands the use of knowledge-based problem-solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment), powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.

  17. Human performance models for computer-aided engineering

    NASA Technical Reports Server (NTRS)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  18. Computational Science Research in Support of Petascale Electromagnetic Modeling

    SciTech Connect

    Lee, L.-Q.; Akcelik, V; Ge, L; Chen, S; Schussman, G; Candel, A; Li, Z; Xiao, L; Kabel, A; Uplenchwar, R; Ng, C; Ko, K; /SLAC

    2008-06-20

    Computational science research components were vital parts of the SciDAC-1 accelerator project and are continuing to play a critical role in the newly-funded SciDAC-2 accelerator project, the Community Petascale Project for Accelerator Science and Simulation (ComPASS). Recent advances and achievements in the area of computational science research in support of petascale electromagnetic modeling for accelerator design analysis are presented. These include shape determination of superconducting RF cavities, a mesh-based multilevel preconditioner for solving highly indefinite linear systems, a moving window using h- or p-refinement for time-domain short-range wakefield calculations, and improved scalable application I/O.

  19. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer-code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  20. Applying computer simulation models as learning tools in fishery management

    USGS Publications Warehouse

    Johnson, B.L.

    1995-01-01

    Computer models can be powerful tools for addressing many problems in fishery management, but uncertainty about how to apply models and how they should perform can lead to a cautious approach to modeling. Within this approach, we expect models to make quantitative predictions but only after all model inputs have been estimated from empirical data and after the model has been tested for agreement with an independent data set. I review the limitations to this approach and show how models can be more useful as tools for organizing data and concepts, learning about the system to be managed, and exploring management options. Fishery management requires deciding what actions to pursue to meet management objectives. Models do not make decisions for us but can provide valuable input to the decision-making process. When empirical data are lacking, preliminary modeling with parameters derived from other sources can help determine priorities for data collection. When evaluating models for management applications, we should attempt to define the conditions under which the model is a useful, analytical tool (its domain of applicability) and should focus on the decisions made using modeling results, rather than on quantitative model predictions. I describe an example of modeling used as a learning tool for the yellow perch Perca flavescens fishery in Green Bay, Lake Michigan.

  1. Data visualization optimization via computational modeling of perception.

    PubMed

    Pineo, Daniel; Ware, Colin

    2012-02-01

    We present a method for automatically evaluating and optimizing visualizations using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and primary visual cortex. The neural activity resulting from viewing flow visualizations is simulated and evaluated to produce a metric of visualization effectiveness. Visualization optimization is achieved by applying this effectiveness metric as the utility function in a hill-climbing algorithm. We apply this method to the evaluation and optimization of 2D flow visualizations, using two visualization parameterizations: streaklet-based and pixel-based. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment. It had been previously hypothesized that the effectiveness of head-to-tail alignment results from the perceptual processing of the visual system, but this theory had not been computationally modeled. A second optimization using a pixel-based parameterization resulted in a LIC-like result. The implications in terms of the selection of primitives are discussed. We argue that computational models can be used for optimizing complex visualizations. In addition, we argue that they can provide a means of computationally evaluating perceptual theories of visualization, and a method for quality control of display methods.
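
    The optimization loop itself is plain hill climbing with the perceptual model supplying the utility function. The sketch below shows that loop; `effectiveness` stands in for the neural-network metric, which is not reproduced here.

```python
import numpy as np

def hill_climb(params, effectiveness, steps=1000, scale=0.05, seed=0):
    """Perturb visualization parameters and keep only improving moves,
    scored by `effectiveness` (a stand-in for the perceptual metric)."""
    rng = np.random.default_rng(seed)
    best = np.asarray(params, dtype=float)
    best_score = effectiveness(best)
    for _ in range(steps):
        candidate = best + rng.normal(0.0, scale, size=best.shape)
        score = effectiveness(candidate)
        if score > best_score:          # greedy ascent on the metric
            best, best_score = candidate, score
    return best, best_score
```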

  2. Computational modeling of cardiac hemodynamics: Current status and future outlook

    NASA Astrophysics Data System (ADS)

    Mittal, Rajat; Seo, Jung Hee; Vedula, Vijay; Choi, Young J.; Liu, Hang; Huang, H. Howie; Jain, Saurabh; Younes, Laurent; Abraham, Theodore; George, Richard T.

    2016-01-01

    The proliferation of four-dimensional imaging technologies, increasing computational speeds, improved simulation algorithms, and the widespread availability of powerful computing platforms are enabling simulations of cardiac hemodynamics with unprecedented speed and fidelity. Since cardiovascular disease is intimately linked to cardiovascular hemodynamics, accurate assessment of the patient's hemodynamic state is critical for the diagnosis and treatment of heart disease. Unfortunately, while a variety of invasive and non-invasive approaches for measuring cardiac hemodynamics are in widespread use, they still only provide an incomplete picture of the hemodynamic state of a patient. In this context, computational modeling of cardiac hemodynamics presents as a powerful non-invasive modality that can fill this information gap, and significantly impact the diagnosis as well as the treatment of cardiac disease. This article reviews the current status of this field as well as the emerging trends and challenges in cardiovascular health, computing, modeling and simulation that are expected to play a key role in its future development. Some recent advances in modeling and simulations of cardiac flow are described by using examples from our own work as well as the research of other groups.

  3. Financial planning and computer modeling in dental practice.

    PubMed

    Feldman, C A

    1986-10-01

    The financial plan describes the practice's financial strategy, projects the strategy's future effect on the practice, and establishes goals by which the practice's manager can measure subsequent performance. The act of putting together a financial plan is called the financial planning process. It is a process that consists of analyzing the practice; projecting future outcomes of decisions that have to be made regarding finances, investments, and day to day operations; deciding which alternatives to undertake; and measuring performance against goals that are established in the financial plan. Computer financial planning models can aid the practice manager in projecting future outcomes of various financial, investment, and operational decisions. These models can be created inexpensively by noncomputer programmers with the aid of computer software on the market today. The financial planning process for a hypothetical practice was summarized, and the financial model used to test out various alternatives available to the practice was described.

  4. Multilevel model reduction for uncertainty quantification in computational structural dynamics

    NASA Astrophysics Data System (ADS)

    Ezvan, O.; Batou, A.; Soize, C.; Gagliardini, L.

    2017-02-01

    This work deals with an extension of the reduced-order models (ROMs) that are classically constructed by modal analysis in linear structural dynamics for which the computational models are assumed to be uncertain. It is based on a multilevel projection strategy consisting of introducing three reduced-order bases that are obtained by using a spatial filtering methodology of local displacements. This filtering involves global shape functions for the kinetic energy. The proposed multilevel stochastic ROM is constructed by using the nonparametric probabilistic approach of uncertainties. It allows a specific level of uncertainty to be assigned to each type of displacement associated with the corresponding vibration regime. The proposed methodology is applied to the computational model of an automobile structure, for which the multilevel stochastic ROM is identified with respect to experimental measurements. This identification is performed by solving a statistical inverse problem.
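
    The classical single-level baseline that this work extends is modal reduction: project the mass and stiffness matrices onto a truncated eigenbasis. A minimal scipy sketch of that baseline follows; the paper's multilevel filtering and nonparametric stochastic modeling are not shown.

```python
import numpy as np
from scipy.linalg import eigh

def modal_rom(M, K, n_modes):
    """Classical modal ROM: keep the n_modes lowest-frequency
    eigenvectors of K v = w^2 M v (mass-normalized by eigh)."""
    w2, phi = eigh(K, M)             # generalized symmetric eigenproblem
    basis = phi[:, :n_modes]
    M_r = basis.T @ M @ basis        # ~ identity (mass-normalized modes)
    K_r = basis.T @ K @ basis        # ~ diag of the kept w^2 values
    return M_r, K_r, basis
```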

  5. An empirical generative framework for computational modeling of language acquisition.

    PubMed

    Waterfall, Heidi R; Sandbank, Ben; Onnis, Luca; Edelman, Shimon

    2010-06-01

    This paper reports progress in developing a computer model of language acquisition in the form of (1) a generative grammar that is (2) algorithmically learnable from realistic corpus data, (3) viable in its large-scale quantitative performance and (4) psychologically real. First, we describe new algorithmic methods for unsupervised learning of generative grammars from raw CHILDES data and give an account of the generative performance of the acquired grammars. Next, we summarize findings from recent longitudinal and experimental work that suggests how certain statistically prominent structural properties of child-directed speech may facilitate language acquisition. We then present a series of new analyses of CHILDES data indicating that the desired properties are indeed present in realistic child-directed speech corpora. Finally, we suggest how our computational results, behavioral findings, and corpus-based insights can be integrated into a next-generation model aimed at meeting the four requirements of our modeling framework.

  6. Computational Modeling of Liquid and Gaseous Control Valves

    NASA Technical Reports Server (NTRS)

    Daines, Russell; Ahuja, Vineet; Hosangadi, Ashvin; Shipman, Jeremy; Moore, Arden; Sulyma, Peter

    2005-01-01

    In this paper, computational modeling efforts undertaken at NASA Stennis Space Center in support of rocket engine component testing are discussed. Such analyses include structurally complex cryogenic liquid valves and gas valves operating at high pressures and flow rates. Basic modeling and initial successes are documented, and other issues that make valve modeling at SSC somewhat unique are also addressed. These include transient behavior, valve stall, and the determination of flow patterns in LOX valves. Hexahedral structured grids are used for valves that can be simplified through the use of an axisymmetric approximation. A hybrid unstructured methodology is used for structurally complex valves that have disparate length scales and complex flow paths that include strong swirl and local recirculation zones/secondary-flow effects. Hexahedral (structured), unstructured, and hybrid meshes are compared for accuracy and computational efficiency. Accuracy is determined using verification and validation techniques.

  7. A Model-Based Case for Redundant Computation

    SciTech Connect

    Stearley, Jon R.; Robinson, David Gerald; Ferreira, Kurt Brian; Riesen, Rolf

    2011-08-01

    Despite its seemingly nonsensical cost, we show through modeling and simulation that redundant computation merits full consideration as a resilience strategy for next-generation systems. Without revolutionary breakthroughs in failure rates, part counts, or stable-storage bandwidths, it has been shown that the utility of Exascale systems will be crushed by the overheads of traditional checkpoint/restart mechanisms. Alternate resilience strategies must be considered, and redundancy is a proven unrivaled approach in many domains. We develop a distribution-independent model for job interrupts on systems of arbitrary redundancy, adapt Daly’s model for total application runtime, and find that his estimate for optimal checkpoint interval remains valid for redundant systems. We then identify conditions where redundancy is more cost effective than non-redundancy. These are done in the context of the number one supercomputers of the last decade, showing that thorough consideration of redundant computation is timely - if not overdue.
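
    Daly's estimate, which the paper finds remains valid for redundant systems, gives the optimal compute time between checkpoints from the checkpoint cost and the mean time between failures. A one-line sketch of the familiar first-order form (our reading; the paper works with the full model):

```python
import math

def daly_optimal_interval(delta, mtbf):
    """First-order Daly estimate of the optimal compute interval between
    checkpoints: sqrt(2*delta*M) - delta, valid for delta < M/2."""
    return math.sqrt(2.0 * delta * mtbf) - delta

# Example: 5-minute checkpoints, 1-day MTBF -> checkpoint every ~1.9 hours.
print(daly_optimal_interval(delta=300.0, mtbf=86400.0))  # ~6900 s
```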

  8. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
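
    For intuition about what homogenization buys, the classical 1-D result is that the effective coarse-scale diffusion coefficient is the harmonic mean of the fine-scale values, so a cheap coarse solver can stand in for an expensive fine-grid integration. A two-line sketch of that textbook result (the paper's statistical ecological-diffusion machinery is much richer):

```python
import numpy as np

def effective_diffusivity(d_fine):
    """Harmonic mean of fine-scale diffusivities: the homogenized
    coefficient for 1-D diffusion through layered heterogeneity."""
    d = np.asarray(d_fine, dtype=float)
    return d.size / np.sum(1.0 / d)
```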

  9. NASA Trapezoidal Wing Computations Including Transition and Advanced Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Lee-Rausch, E. M.

    2012-01-01

    Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

  10. Images as drivers of progress in cardiac computational modelling

    PubMed Central

    Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A.; Bishop, Martin J.; Schneider, Jürgen E.; Kohl, Peter; Grau, Vicente

    2014-01-01

    Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved. PMID:25117497

  11. Simulation models for computational plasma physics: Concluding report

    NASA Astrophysics Data System (ADS)

    Hewett, D. W.

    1994-03-01

    In this project, the authors enhanced their ability to numerically simulate bounded plasmas that are dominated by low-frequency electric and magnetic fields. They moved towards this goal in several ways; they are now in a position to play significant roles in the modeling of low-frequency electromagnetic plasmas in several new industrial applications. They have significantly increased their facility with the computational methods invented to solve the low frequency limit of Maxwell's equations. This low frequency model, called the streamlined Darwin field model, has now been implemented in a fully non-neutral SDF code BEAGLE and has been further extended to the quasi-neutral limit. In addition, they have resurrected the quasi-neutral, zero electron inertia model (ZMR) and began the task of incorporating internal boundary conditions into this model that have the flexibility of those in GYMNOS, a magnetostatic code now used in ion source work. Finally, near the end of this project, they invented a new type of banded matrix solver that can be implemented on a massively parallel computer, thus opening the door for the use of all their ADI schemes on these new computer architectures.

  12. Reproducibility and Comparability of Computational Models for Astrocyte Calcium Excitability

    PubMed Central

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2017-01-01

    The scientific community across all disciplines faces the same challenges of ensuring accessibility, reproducibility, and efficient comparability of scientific results. Computational neuroscience is a rapidly developing field, where reproducibility and comparability of research results have gained increasing interest over the past years. As the number of computational models of brain functions is increasing, we chose to address reproducibility using four previously published computational models of astrocyte excitability as an example. Although not conventionally taken into account when modeling neuronal systems, astrocytes have been shown to take part in a variety of in vitro and in vivo phenomena including synaptic transmission. Two of the selected astrocyte models describe spontaneous calcium excitability, and the other two neurotransmitter-evoked calcium excitability. We specifically addressed how well the original simulation results can be reproduced with a reimplementation of the models. Additionally, we studied how well the selected models can be reused and whether they are comparable in other stimulation conditions and research settings. Unexpectedly, we found out that three of the model publications did not give all the necessary information required to reimplement the models. In addition, we were able to reproduce the original results of only one of the models completely based on the information given in the original publications and in the errata. We actually found errors in the equations provided by two of the model publications; after modifying the equations accordingly, the original results were reproduced more accurately. Even though the selected models were developed to describe the same biological event, namely astrocyte calcium excitability, the models behaved quite differently compared to one another. Our findings on a specific set of published astrocyte models stress the importance of proper validation of the models against experimental wet

  13. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore, model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
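
    Once a Markov model like those ARM formulates is in hand, evaluation reduces to propagating the state probabilities through the generator matrix. A small sketch with a hypothetical three-state system (duplex, simplex after one failure, failed); all rates are illustrative.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-4, 1e-2   # hypothetical failure and repair rates (per hour)
# Generator matrix: rows sum to zero; state 2 (failed) is absorbing.
Q = np.array([[-2 * lam,      2 * lam,  0.0],
              [      mu, -(mu + lam),   lam],
              [     0.0,          0.0,  0.0]])

p0 = np.array([1.0, 0.0, 0.0])        # start in the duplex state
t = 1000.0                            # mission time (hours)
p_t = p0 @ expm(Q * t)                # transient state probabilities
reliability = 1.0 - p_t[2]            # probability of no system failure
```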

  14. Benchmarking computational fluid dynamics models for lava flow simulation

    NASA Astrophysics Data System (ADS)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  15. The rise of machine consciousness: studying consciousness with computational models.

    PubMed

    Reggia, James A

    2013-08-01

    Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises

  16. A New Perspective for the Calibration of Computational Predictor Models.

    SciTech Connect

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
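
    For a linear-in-parameters model, the IPM computation reduces to a linear program: find lower and upper parameter vectors whose interval contains every observation while the total spread over the data is minimized. A minimal sketch of that special case (the paper's formulations are more general):

```python
import numpy as np
from scipy.optimize import linprog

def interval_predictor(X, y):
    """Linear IPM: y_i in [theta_l . x_i, theta_u . x_i] for all i,
    minimizing the total interval spread over the data.
    X is (n, p); include a column of ones for an intercept."""
    n, p = X.shape
    s = X.sum(axis=0)
    c = np.concatenate([-s, s])                 # total spread objective
    A_ub = np.block([[X, np.zeros((n, p))],     # theta_l . x_i <= y_i
                     [np.zeros((n, p)), -X]])   # theta_u . x_i >= y_i
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * p))
    return res.x[:p], res.x[p:]                 # theta_l, theta_u
```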

  17. Group theory and biomolecular conformation: I. Mathematical and computational models

    PubMed Central

    Chirikjian, Gregory S

    2010-01-01

    Biological macromolecules, and the complexes that they form, can be described in a variety of ways ranging from quantum mechanical and atomic chemical models, to coarser grained models of secondary structure and domains, to continuum models. At each of these levels, group theory can be used to describe both geometric symmetries and conformational motion. In this survey, a detailed account is provided of how group theory has been applied across computational structural biology to analyze the conformational shape and motion of macromolecules and complexes. PMID:20827378

  18. Modeling the complete Otto cycle: Preliminary version. [computer programming]

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.; Mcbride, B. J.

    1977-01-01

    A description is given of the equations and the computer program being developed to model the complete Otto cycle. The program incorporates such important features as: (1) heat transfer, (2) finite combustion rates, (3) complete chemical kinetics in the burned gas, (4) exhaust gas recirculation, and (5) manifold vacuum or supercharging. Changes in thermodynamic, kinetic and transport data as well as model parameters can be made without reprogramming. Preliminary calculations indicate that: (1) chemistry and heat transfer significantly affect composition and performance, (2) there seems to be a strong interaction among model parameters, and (3) a number of cycles must be calculated in order to obtain steady-state conditions.
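
    As a point of reference for what the detailed model refines, the air-standard (ideal) Otto-cycle efficiency depends only on the compression ratio r and the specific-heat ratio gamma; heat transfer, finite combustion rates, and kinetics all pull real cycles below this limit. A worked one-liner:

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal air-standard Otto-cycle efficiency: 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

print(otto_efficiency(9.0))   # ~0.585 for a compression ratio of 9
```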

  19. Computational Modeling of Carbon Nanostructures for Energy Storage Applications

    SciTech Connect

    Feng, Guang; Huang, Jingsong; Qiao, Rui; Sumpter, Bobby G; Meunier, Vincent

    2010-01-01

    We present a theoretical model for electrical double layers formed by ion adsorption in nanoscale carbon pores. In this work a combination of computational methods, including first-principles and classical modeling, is used to explain the onset of an anomalous increase in capacitance for small pores. The study highlights the key role played by pore curvature and nanoconfinement on the capacitance performance. We emphasize the role of modeling in providing a precise understanding of the processes responsible for capacitive energy storage, and how simulations can be used to enhance desired properties and suppress unwanted ones.

  20. Applications of computer modeling at Wrightson, Johnson, Haddon & Williams, Inc

    NASA Astrophysics Data System (ADS)

    Johnson, James A.; Seep, Benjamin C.

    2002-05-01

    Computer modeling has become useful as an investigative tool and as a client communication and explanation tool in the field of acoustical consulting. A variety of in-house developed and commercially available applications is in constant use at the firm of Wrightson, Johnson, Haddon & Williams. Examples likely to be demonstrated (depending on time) include use of digital filtering for building exterior noise reduction comparisons, a shell isolation rating (SIR) model, simple sound barrier programs, an HVAC spreadsheet, a visual sightline modeling tool, specular sound reflections in a semicircular arc, and some uses of CATT-acoustic auralizations.

  1. Uses of Computer Simulation Models in Ag-Research and Everyday Life

    Technology Transfer Automated Retrieval System (TEKTRAN)

    When the news media talks about models, they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...

  2. Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models

    SciTech Connect

    Williams, Brian J.; Marcy, Peter W.

    2012-06-07

    We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
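
    The Bohman correlation is compactly supported, so covariance entries for points farther apart than the range are exactly zero and the resulting matrices are sparse. A sketch of the univariate Bohman form (whether this matches the paper's exact parameterization is an assumption on our part):

```python
import numpy as np

def bohman(d, theta):
    """Bohman correlation: (1 - x) cos(pi x) + sin(pi x) / pi for
    x = d / theta < 1, and exactly zero beyond the range theta."""
    x = np.minimum(np.abs(d) / theta, 1.0)
    return np.where(x < 1.0,
                    (1.0 - x) * np.cos(np.pi * x) + np.sin(np.pi * x) / np.pi,
                    0.0)
```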

  3. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code reusability difficult, and increases software complexity.

  4. Computational Tools To Model Halogen Bonds in Medicinal Chemistry.

    PubMed

    Ford, Melissa Coates; Ho, P Shing

    2016-03-10

    The use of halogens in therapeutics dates back to the earliest days of medicine when seaweed was used as a source of iodine to treat goiters. The incorporation of halogens to improve the potency of drugs is now fairly standard in medicinal chemistry. In the past decade, halogens have been recognized as direct participants in defining the affinity of inhibitors through a noncovalent interaction called the halogen bond or X-bond. Incorporating X-bonding into structure-based drug design requires computational models for the anisotropic distribution of charge and the nonspherical shape of halogens, which lead to their highly directional geometries and stabilizing energies. We review here current successes and challenges in developing computational methods to introduce X-bonding into lead compound discovery and optimization during drug development. This fast-growing field will push further development of more accurate and efficient computational tools to accelerate the exploitation of halogens in medicinal chemistry.

  5. Human operator identification model and related computer programs

    NASA Technical Reports Server (NTRS)

    Kessler, K. M.; Mohr, J. N.

    1978-01-01

    Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to the frequency-domain transfer-function system representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open-loop identification code which operates on the simulated data from TVOPT (or TVSR) or real operator data from motion simulators.

  6. A Framework for Understanding Physics Students' Computational Modeling Practices

    NASA Astrophysics Data System (ADS)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by

  7. KU-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Griffin, J. W.

    1980-01-01

    The preparation of a real-time computer simulation model of the KU-band rendezvous radar to be integrated into the shuttle mission simulator (SMS), the shuttle engineering simulator (SES), and the shuttle avionics integration laboratory (SAIL) simulator is described. To meet crew training requirements, a radar tracking performance model and a target modeling method were developed. The parent simulation/radar simulation interface requirements and the method selected to model target scattering properties, including an application of this method to the SPAS spacecraft, are described. The radar search and acquisition mode performance model and the radar track mode signal processor model are examined and analyzed. The angle, angle rate, range, and range rate tracking loops are also discussed.
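
    As a generic illustration of the kind of tracking loop listed at the end of the abstract, here is a textbook alpha-beta tracker for a single channel (range, say). It is a stand-in only; the shuttle KU-band signal-processor model is far more detailed, and the gains here are illustrative.

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    """Textbook alpha-beta tracking loop: predict, form the residual,
    and correct the position/rate estimates with fixed gains."""
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements:
        x_pred = x + v * dt             # predict ahead one step
        resid = z - x_pred              # innovation
        x = x_pred + alpha * resid      # position correction
        v = v + beta * resid / dt       # rate correction
        estimates.append((x, v))
    return estimates
```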

  8. A Computational and Mathematical Model for Device Induced Thrombosis

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Tao; Aubry, Nadine; Massoudi, Mehrdad; Antaki, James

    2015-11-01

    Based on Sorenson's model of thrombus formation, a new mathematical model describing the process of thrombus growth is developed. In this model the blood is treated as a Newtonian fluid, and the transport and reactions of the chemical and biological species are modeled using CRD (convection-reaction-diffusion) equations. A computational fluid dynamic (CFD) solver for the mathematical model is developed using the libraries of OpenFOAM. Applying the CFD solver, several representative benchmark problems are studied: rapid thrombus growth in vivo by injecting adenosine diphosphate (ADP) using an iontophoretic method, and thrombus growth in a rectangular microchannel with a crevice, which usually appears as a joint between components of devices and often becomes a nidus of thrombosis. Very good agreement between the numerical and the experimental results validates the model and indicates its potential to study a host of complex and practical problems in the future, such as thrombosis in blood pumps and artificial lungs.
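
    The CRD equations at the heart of the model have a simple explicit discretization for a single species; the sketch below shows one upwind/central step in 1-D with periodic boundaries. It is an illustration of the equation class only, not the coupled multi-species OpenFOAM solver.

```python
import numpy as np

def crd_step(c, u, D, r, dx, dt):
    """One explicit step of dc/dt + u dc/dx = D d2c/dx2 + r(c)
    (u > 0 assumed; periodic boundaries via np.roll)."""
    adv = -u * (c - np.roll(c, 1)) / dx                       # upwind
    diff = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + diff + r(c))
```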

  9. COMPUTER MODEL AND SIMULATION OF A GLOVE BOX PROCESS

    SciTech Connect

    C. FOSTER; ET AL

    2001-01-01

    The development of facilities to deal with the disposition of nuclear materials at an acceptable level of Occupational Radiation Exposure (ORE) is a significant issue facing the nuclear community. One solution is to minimize the worker's exposure through the use of automated systems. However, the adoption of automated systems for these tasks is hampered by the challenging requirements that these systems must meet in order to be cost-effective solutions in the hazardous nuclear materials processing environment. Retrofitting current glove box technologies with automation systems represents potential near-term technology that can be applied to reduce worker ORE associated with work in nuclear materials processing facilities. Successful deployment of automation systems for these applications requires the development of testing and deployment strategies to ensure the highest level of safety and effectiveness. Historically, safety tests are conducted with glove box mock-ups around the finished design. This late detection of problems leads to expensive redesigns and costly deployment delays. With the widespread availability of computers and cost-effective simulation software, it is possible to discover and fix problems early in the design stages. Computer simulators can easily create a complete model of the system, allowing a safe medium for testing potential failures and design shortcomings. The majority of design specification is now done on computer, and moving that information to a model is relatively straightforward. With a complete model and results from a Failure Mode Effect Analysis (FMEA), redesigns can be worked early. Additional issues such as user accessibility, component replacement, and alignment problems can be tackled early in the virtual environment provided by computer simulation. In this case, a commercial simulation package is used to simulate a lathe process operation at the Los Alamos National Laboratory (LANL). The lathe process operation is indicative of

  10. Using Computational and Mechanical Models to Study Animal Locomotion

    PubMed Central

    Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas

    2012-01-01

    Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026

  11. Modeling fluid dynamics on type II quantum computers

    NASA Astrophysics Data System (ADS)

    Scoville, James; Weeks, David; Yepez, Jeffrey

    2006-03-01

    A quantum algorithm is presented for modeling the time evolution of density and flow fields governed by classical equations, such as the diffusion equation, the nonlinear Burgers equation, and the damped wave equation. The algorithm is intended to run on a type-II quantum computer, a parallel quantum computer consisting of a lattice of small type-I quantum computers undergoing unitary evolution and interacting via information interchanges represented by orthogonal matrices. Information is effectively transferred between adjacent quantum computers over classical communications channels because of controlled state demolition following local quantum mechanical qubit-qubit interactions within each quantum computer. The type-II quantum algorithm presented in this paper describes a methodology for generating quantum logic operations as a generalization of classical operations associated with finite-point group symmetries. The quantum mechanical evolution of multiple qubits within each node is described. Presented is a proof that the parallel quantum system obeys a finite-difference quantum Boltzmann equation at the mesoscopic scale, leading in turn to various classical linear and nonlinear effective field theories at the macroscopic scale depending on the details of the local qubit-qubit interactions.
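
    The collide-and-stream structure that the quantum algorithm generalizes can be seen in its classical analogue, a two-population lattice-Boltzmann scheme for diffusion. The sketch below is that classical analogue only; in the type-II setting, the collision step becomes a local unitary qubit-qubit interaction followed by measurement.

```python
import numpy as np

def d1q2_diffusion(rho0, steps, omega=1.0):
    """Classical D1Q2 lattice-Boltzmann diffusion: relax two
    counter-propagating populations toward equilibrium, then stream."""
    f_r = 0.5 * np.asarray(rho0, dtype=float)   # right-movers
    f_l = f_r.copy()                            # left-movers
    for _ in range(steps):
        rho = f_r + f_l
        f_r += omega * (0.5 * rho - f_r)        # collide
        f_l += omega * (0.5 * rho - f_l)
        f_r = np.roll(f_r, 1)                   # stream right
        f_l = np.roll(f_l, -1)                  # stream left
    return f_r + f_l
```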

  12. Toward diagnostic model calibration and evaluation: Approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Sadegh, Mojtaba

    2013-07-01

    The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature-based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
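
    A minimal rejection-sampling sketch of the ABC idea, with a toy one-parameter model and two signature statistics standing in for hydrologically rooted ones; the model, prior, and tolerance below are all invented:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy one-parameter "watershed" model: a linear store with residence
        # time k, observed through two signature statistics (mean and lag-1
        # autocorrelation) instead of a full likelihood.
        def simulate(k, n=200):
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = (1.0 - 1.0 / k) * x[t - 1] + rng.normal(0.0, 0.1)
            return x

        def summaries(x):
            return np.array([x.mean(), np.corrcoef(x[:-1], x[1:])[0, 1]])

        s_obs = summaries(simulate(5.0))       # stand-in for field observations

        accepted = []
        for _ in range(5000):                  # ABC rejection sampling
            k = rng.uniform(1.5, 20.0)         # draw a candidate from the prior
            if np.linalg.norm(summaries(simulate(k)) - s_obs) < 0.1:
                accepted.append(k)             # keep draws whose signatures match

        print(f"accepted {len(accepted)} draws; posterior mean k = {np.mean(accepted):.2f}")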

  13. A framework to establish credibility of computational models in biology.

    PubMed

    Patterson, Eann A; Whelan, Maurice P

    2016-10-01

    Computational models in biology and biomedical science are often constructed to aid people's understanding of phenomena or to inform decisions with socioeconomic consequences. Model credibility is the willingness of people to trust a model's predictions and is often difficult to establish for computational biology models. A 3 × 3 matrix has been proposed to allow such models to be categorised with respect to their testability and epistemic foundation in order to guide the selection of an appropriate process of validation to supply evidence to establish credibility. Three approaches to validation are identified that can be deployed depending on whether a model is deemed untestable, testable or lies somewhere in between. In the latter two cases, the validation process involves the quantification of uncertainty which is a key output. The issues arising due to the complexity and inherent variability of biological systems are discussed and the creation of 'digital twins' proposed as a means to alleviate the issues and provide a more robust, transparent and traceable route to model credibility and acceptance.

  14. Computational model of collagen turnover in carotid arteries during hypertension.

    PubMed

    Sáez, P; Peña, E; Tarbell, J M; Martínez, M A

    2015-02-01

    It is well known that biological tissues adapt their properties in response to different mechanical and chemical stimuli. The goal of this work is to study the collagen turnover in the arterial tissue of hypertensive patients through a coupled computational mechano-chemical model. Although collagen turnover has been widely studied experimentally, computational models dealing with the mechano-chemical approach are scarce. The present approach can be extended easily to study other aspects of bone remodeling or collagen degradation in heart diseases. The model can be divided into three different stages. First, we study the smooth muscle cell synthesis of different biological substances due to over-stretching during hypertension. Next, we study the mass transport of these substances along the arterial wall. The last step is to compute the turnover of collagen based on the amount of these substances in the arterial wall, which interact with each other to modify the turnover rate of collagen. We simulate this process in a finite element model of a real human carotid artery. The final results show the well-known stiffening of the arterial wall due to the increase in the collagen content.
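
    The three stages can be caricatured in one spatial dimension across the wall thickness: a stretch-driven source of a signaling substance, its diffusive transport, and a collagen density whose turnover is modulated by the local substance level. Every rate constant below is an invented placeholder, not a value from the finite element model:

        import numpy as np

        # 1D caricature across the wall thickness: stretch-driven source of a
        # substance S (stage 1), diffusion of S (stage 2), and collagen density
        # C whose turnover is modulated by S (stage 3). All constants invented.
        nx = 50
        dx, dt = 1.0 / nx, 0.002
        S = np.zeros(nx)                  # substance concentration
        C = np.ones(nx)                   # collagen density (baseline 1)
        stretch = 1.2                     # assumed over-stretch during hypertension
        k_src, D, k_syn, k_deg = 0.5 * (stretch - 1.0), 0.05, 0.8, 0.4

        for _ in range(5000):
            lap = np.zeros(nx)
            lap[1:-1] = (S[2:] - 2.0 * S[1:-1] + S[:-2]) / dx**2
            S += dt * (D * lap + k_src - 0.2 * S)      # source, transport, decay
            S[0] = S[-1] = 0.0                         # washed out at both surfaces
            C += dt * (k_syn * S - k_deg * (C - 1.0))  # turnover toward a new state

        print(f"midwall collagen increase: {C[nx // 2] - 1.0:.1%}")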

  15. Computational Models of Auditory Scene Analysis: A Review.

    PubMed

    Szabó, Beáta T; Denham, Susan L; Winkler, István

    2016-01-01

    Auditory scene analysis (ASA) refers to the process(es) of parsing the complex acoustic input into auditory perceptual objects representing either physical sources or temporal sound patterns, such as melodies, which contributed to the sound waves reaching the ears. A number of new computational models accounting for some of the perceptual phenomena of ASA have been published recently. Here we provide a theoretically motivated review of these computational models, aiming to relate their guiding principles to the central issues of the theoretical framework of ASA. Specifically, we ask how they achieve the grouping and separation of sound elements and whether they implement some form of competition between alternative interpretations of the sound input. We consider the extent to which they include predictive processes, as important current theories suggest that perception is inherently predictive, and also how they have been evaluated. We conclude that current computational models of ASA are fragmentary in the sense that rather than providing general competing interpretations of ASA, they focus on assessing the utility of specific processes (or algorithms) for finding the causes of the complex acoustic signal. This leaves open the possibility for integrating complementary aspects of the models into a more comprehensive theory of ASA.

  16. Computational Models of Auditory Scene Analysis: A Review

    PubMed Central

    Szabó, Beáta T.; Denham, Susan L.; Winkler, István

    2016-01-01

    Auditory scene analysis (ASA) refers to the process(es) of parsing the complex acoustic input into auditory perceptual objects representing either physical sources or temporal sound patterns, such as melodies, which contributed to the sound waves reaching the ears. A number of new computational models accounting for some of the perceptual phenomena of ASA have been published recently. Here we provide a theoretically motivated review of these computational models, aiming to relate their guiding principles to the central issues of the theoretical framework of ASA. Specifically, we ask how they achieve the grouping and separation of sound elements and whether they implement some form of competition between alternative interpretations of the sound input. We consider the extent to which they include predictive processes, as important current theories suggest that perception is inherently predictive, and also how they have been evaluated. We conclude that current computational models of ASA are fragmentary in the sense that rather than providing general competing interpretations of ASA, they focus on assessing the utility of specific processes (or algorithms) for finding the causes of the complex acoustic signal. This leaves open the possibility for integrating complementary aspects of the models into a more comprehensive theory of ASA. PMID:27895552

  17. Computational Systems Biology in Cancer: Modeling Methods and Applications

    PubMed Central

    Materi, Wayne; Wishart, David S.

    2007-01-01

    In recent years it has become clear that carcinogenesis is a complex process, both at the molecular and cellular levels. Understanding the origins, growth and spread of cancer therefore requires an integrated or system-wide approach. Computational systems biology is an emerging sub-discipline in systems biology that utilizes the wealth of data from genomic, proteomic and metabolomic studies to build computer simulations of intra- and intercellular processes. Several useful descriptive and predictive models of the origin, growth and spread of cancers have been developed in an effort to better understand the disease and potential therapeutic approaches. In this review we describe and assess the practical and theoretical underpinnings of commonly-used modeling approaches, including ordinary and partial differential equations, Petri nets, cellular automata, agent-based models and hybrid systems. A number of computer-based formalisms have been implemented to improve the accessibility of the various approaches to researchers whose primary interest lies outside of model development. We discuss several of these and describe how they have led to novel insights into tumor genesis, growth, apoptosis, vascularization and therapy. PMID:19936081

  18. The Importance of Formalizing Computational Models of Face Adaptation Aftereffects

    PubMed Central

    Ross, David A.; Palmeri, Thomas J.

    2016-01-01

    Face adaptation is widely used as a means to probe the neural representations that support face recognition. While the theories that relate face adaptation to behavioral aftereffects may seem conceptually simple, our work has shown that testing computational instantiations of these theories can lead to unexpected results. Instantiating a model of face adaptation not only requires specifying how faces are represented and how adaptation shapes those representations but also specifying how decisions are made, translating hidden representational states into observed responses. Considering the high-dimensionality of face representations, the parallel activation of multiple representations, and the non-linearity of activation functions and decision mechanisms, intuitions alone are unlikely to succeed. If the goal is to understand mechanism, not simply to examine the boundaries of a behavioral phenomenon or correlate behavior with brain activity, then formal computational modeling must be a component of theory testing. To illustrate, we highlight our recent computational modeling of face adaptation aftereffects and discuss how models can be used to understand the mechanisms by which faces are recognized. PMID:27378960

  19. Analytical modeling of printed metasurface cavities for computational imaging

    NASA Astrophysics Data System (ADS)

    Imani, Mohammadreza F.; Sleasman, Timothy; Gollub, Jonah N.; Smith, David R.

    2016-10-01

    We derive simple analytical expressions to model the electromagnetic response of an electrically large printed cavity. The analytical model is then used to develop printed cavities for microwave imaging purposes. The proposed cavity is excited by a cylindrical source and has boundaries formed by subwavelength metallic cylinders (vias) placed at subwavelength distances apart. Given their small size, the electric currents induced on the vias are assumed to have no angular dependence. Applying this approximation simplifies the electromagnetic problem to a matrix equation which can be solved to directly compute the electric current induced on each via. Once the induced currents are known, the electromagnetic field inside the cavity can be computed for every location. We verify the analytical model by comparing its prediction to full-wave simulations. To utilize this cavity in imaging settings, we perforate one side of the printed cavity with radiative slots such that they act as the physical layer of a computational imaging system. An analytical approximation for the slots is also developed, enabling us to obtain estimates of the cavity performance in imaging scenarios. This ability allows us to make informed decisions on the design of the printed metasurface cavity. The utility of the proposed model is further highlighted by demonstrating high-quality experimental imaging; performance metrics, which are consistent between theory and experiment, are also estimated.
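
    The reduction to a matrix equation can be sketched generically. The following is an illustrative method-of-moments-style solve, not the authors' formulation: angularly independent line currents on the vias couple through the 2D free-space Green's function, and a single linear solve yields the induced current on each via. The frequency, via layout, via radius, and source position are all invented.

        import numpy as np
        from scipy.special import hankel2

        k = 2 * np.pi * 10e9 / 3e8             # free-space wavenumber at 10 GHz (assumed)
        ring = np.linspace(0, 2 * np.pi, 60, endpoint=False)
        vias = 0.05 * np.column_stack([np.cos(ring), np.sin(ring)])  # 5 cm ring of vias
        a = 1e-4                               # via radius for the self term (assumed)

        # Pairwise coupling through the 2D Green's function H0^(2)(k r);
        # the diagonal (self) term is regularized with the via radius.
        r = np.linalg.norm(vias[:, None, :] - vias[None, :, :], axis=-1)
        Z = hankel2(0, k * np.where(r == 0, a, r))

        # Incident field of a cylindrical source, sampled at each via
        src = np.array([0.01, 0.0])            # feed placed slightly off-center
        V = -hankel2(0, k * np.linalg.norm(vias - src, axis=1))

        I = np.linalg.solve(Z, V)              # induced current on each via
        print("peak |I| =", np.abs(I).max())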

  20. Automatic computational models of acoustical category features: Talking versus singing

    NASA Astrophysics Data System (ADS)

    Gerhard, David

    2003-10-01

    The automatic discrimination between acoustical categories has been an increasingly interesting problem in the fields of computer listening, multimedia databases, and music information retrieval. A system is presented which automatically generates classification models, given a set of destination classes and a set of a priori labeled acoustic events. Computational models are created using comparative probability density estimations. For the specific example presented, the destination classes are talking and singing. Individual feature models are evaluated using two measures: the Kolmogorov-Smirnov distance measures feature separation, and accuracy is measured using absolute and relative metrics. The system automatically segments the event set into a user-defined number (n) of development subsets, and runs a development cycle for each set, generating n separate systems, each of which is evaluated using the above metrics to improve overall system accuracy and to reduce inherent data skew from any one development subset. Multiple features for the same acoustical categories are then compared for underlying feature overlap using cross-correlation. Advantages of automated computational models include improved system development and testing, shortened development cycle, and automation of common system evaluation tasks. Numerical results are presented relating to the talking/singing classification problem.
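
    Two of the ingredients named above, the Kolmogorov-Smirnov distance and comparative probability density estimation, are easy to sketch on synthetic data. The feature values below are invented stand-ins for a real talking/singing feature:

        import numpy as np
        from scipy.stats import gaussian_kde, ks_2samp

        rng = np.random.default_rng(1)

        # Hypothetical feature values for the two destination classes
        talk = rng.normal(0.3, 0.15, 500)
        sing = rng.normal(0.7, 0.15, 500)

        # 1) Feature separation measured by the Kolmogorov-Smirnov distance
        print(f"KS distance = {ks_2samp(talk, sing).statistic:.3f}")

        # 2) Comparative probability density estimation: classify a test value
        #    by whichever class-conditional kernel density is higher there
        p_talk, p_sing = gaussian_kde(talk), gaussian_kde(sing)

        def classify(x):
            return "talking" if p_talk(x)[0] > p_sing(x)[0] else "singing"

        print(classify(0.25), classify(0.80))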

  1. Mathematical modeling and computation of the optical response from nanostructures

    NASA Astrophysics Data System (ADS)

    Sun, Yuanchang

    This dissertation studies the computational modeling of nanostructures in response to external electromagnetic fields. Light-matter interactions on the nanoscale are at the heart of nano-optics. To fully characterize the optical interactions with nanostructures, quantum electrodynamics (QED) must be invoked; however, the extremely intense computation and analysis required prohibit QED from applications in nano-optics. To avoid these expensive computations while still capturing the essential quantum effects, a semiclassical model is developed. The well-posedness of the model partial differential equations is established. Emphasis is placed on the optical interactions with an individual nanostructure; exciton and biexciton effects and finite-size effects are investigated. The crucial step of our model is to couple the electromagnetic fields with the motion of the excited particles to yield a new dielectric constant which contains the quantum effects of interest. A novel feature of the dielectric constant is its wavevector dependence, which leads to multi-wave propagation inside the medium. Additional boundary conditions are proposed to deal with this situation. We proceed by incorporating this dielectric constant into Maxwell's equations, and by solving a scattering problem the quantum effects can be captured in the scattered spectra.

  2. Computational modeling of flow-altering surgeries in basilar aneurysms.

    PubMed

    Rayz, V L; Abla, A; Boussel, L; Leach, J R; Acevedo-Bolton, G; Saloner, D; Lawton, M T

    2015-05-01

    In cases where surgeons consider different interventional options for flow alterations in the setting of pathological basilar artery hemodynamics, a virtual model demonstrating the flow fields resulting from each of these options can assist in making clinical decisions. In this study, image-based computational fluid dynamics (CFD) models were used to simulate the flow in four basilar artery aneurysms in order to evaluate postoperative hemodynamics that would result from flow-altering interventions. Patient-specific geometries were constructed using MR angiography and velocimetry data. CFD simulations carried out for the preoperative flow conditions were compared to in vivo phase-contrast MRI measurements (4D Flow MRI) acquired prior to the interventions. The models were then modified according to the procedures considered for each patient. Numerical simulations of the flow and virtual contrast transport were carried out in each case in order to assess postoperative flow fields and estimate the likelihood of intra-aneurysmal thrombus deposition following the procedures. Postoperative imaging data, when available, were used to validate computational predictions. In two cases, where the aneurysms involved vital pontine perforator arteries branching from the basilar artery, idealized geometries of these vessels were incorporated into the CFD models. The effect of interventions on the flow through the perforators was evaluated by simulating the transport of contrast in these vessels. The computational results were in close agreement with the MR imaging data. In some cases, CFD simulations could help determine which of the surgical options was likely to reduce the flow into the aneurysm while preserving the flow through the basilar trunk. The study demonstrated that image-based computational modeling can provide guidance to clinicians by indicating possible outcome complications and indicating expected success potential for ameliorating pathological aneurysmal flow

  3. A Sensorimotor Model for Computing Intended Reach Trajectories

    PubMed Central

    Üstün, Cevat

    2016-01-01

    The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates for producing motor output. However, the interpretation of neurophysiological data within this framework is ambiguous, and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories, in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with those observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, and not desired movement endpoints. PMID:26985662

  4. MaRIE theory, modeling and computation roadmap executive summary

    SciTech Connect

    Lookman, Turab

    2010-01-01

    The confluence of MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained optimization of hardware and software, and experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail for each of these areas the current state of the art, the gaps that exist and the roadmap to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities, which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar science areas and highlights the key theoretical and experimental challenges. We also present a theory, modeling and computation roadmap of the path to and beyond MaRIE in each of the science areas.

  5. The Computer-Aided Analytic Process Model. Operations Handbook for the Analytic Process Model Demonstration Package

    DTIC Science & Technology

    1986-01-01

    Research Note 86-06, "The Computer-Aided Analytic Process Model: Operations Handbook for the Analytic Process Model Demonstration Package", Ronald G... Keywords: Analytic Process Model; Operations Handbook; Tutorial; Apple; Systems Taxonomy Model; Training System; Bradley Infantry Fighting Vehicle (BIFV). Abstract continued in the companion volume "The Analytic Process Model for..."

  6. Model Selection in Historical Research Using Approximate Bayesian Computation

    PubMed Central

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History: Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study: This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester’s laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact: Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
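
    For readers unfamiliar with the candidate models, the three common Lanchester formulations compared above can be written as simple coupled ODEs. The sketch below integrates them with invented effectiveness coefficients; it illustrates the model forms only, not the paper's fatigue variant or its fitted parameters:

        import numpy as np

        # Three common Lanchester formulations as coupled ODEs, integrated
        # with forward Euler; a and b are invented effectiveness coefficients.
        def lanchester(x0, y0, a, b, law="square", dt=0.01, steps=20000):
            x, y = float(x0), float(y0)
            for _ in range(steps):
                if law == "square":       # aimed fire: dx/dt = -b y, dy/dt = -a x
                    dx, dy = -b * y, -a * x
                elif law == "linear":     # unaimed fire: losses scale with both sides
                    dx, dy = -b * x * y, -a * x * y
                else:                     # logarithmic: losses scale with own size
                    dx, dy = -b * x, -a * y
                x, y = max(x + dx * dt, 0.0), max(y + dy * dt, 0.0)
                if x == 0.0 or y == 0.0:
                    break
            return x, y

        # Under the square law, numerical superiority counts quadratically:
        print(lanchester(1000, 800, a=0.01, b=0.01, law="square"))  # x survives with ~600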

  7. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than the other methods do, which makes sense in many situations.
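
    The mechanics of turning a discrimination criterion into posterior model probabilities and a model-averaged prediction are compact enough to sketch. The fit statistics below are invented; MMA computes the criteria from actual regression output:

        import numpy as np

        # Hypothetical calibration results for three alternative models:
        # weighted sum of squared residuals (sswr), number of parameters (k),
        # and each model's prediction of one quantity of interest.
        models = {
            "A": dict(sswr=42.0, k=3, pred=10.2),
            "B": dict(sswr=38.5, k=5, pred=11.0),
            "C": dict(sswr=40.1, k=4, pred=10.5),
        }
        n = 50                                  # number of observations

        def aicc(sswr, k, n):                   # Gaussian-likelihood AICc
            aic = n * np.log(sswr / n) + 2 * k
            return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

        crit = {m: aicc(v["sswr"], v["k"], n) for m, v in models.items()}
        best = min(crit.values())
        raw = {m: np.exp(-0.5 * (c - best)) for m, c in crit.items()}
        weights = {m: r / sum(raw.values()) for m, r in raw.items()}  # model probabilities

        averaged = sum(weights[m] * models[m]["pred"] for m in models)
        print(weights, f"model-averaged prediction = {averaged:.2f}")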

  8. Computational Grounded Cognition: a new alliance between grounded cognition and computational modeling

    PubMed Central

    Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.

    2013-01-01

    Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numerical, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition. PMID:23346065

  9. Assessing executive function using a computer game: computational modeling of cognitive processes.

    PubMed

    Hagler, Stuart; Jimison, Holly Brugge; Pavel, Misha

    2014-07-01

    Early and reliable detection of cognitive decline is one of the most important challenges of current healthcare. In this project, we developed an approach whereby a frequently played computer game can be used to assess a variety of cognitive processes and estimate the results of the pen-and-paper trail making test (TMT), known to measure executive function, as well as visual pattern recognition, speed of processing, working memory, and set-switching ability. We developed a computational model of the TMT based on a decomposition of the test into several independent processes, each characterized by a set of parameters that can be estimated from play of a computer game designed to resemble the TMT. An empirical evaluation of the model suggests that it is possible to use the game data to estimate the parameters of the underlying cognitive processes and to use those parameter values to estimate TMT performance. Cognitive measures and trends in these measures can be used to identify individuals for further assessment, to provide a mechanism for improving the early detection of neurological problems, and to provide feedback and monitoring for cognitive interventions in the home.

  10. Computational Modeling as a Design Tool in Microelectronics Manufacturing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Plans to introduce pilot lines or fabs for 300 mm processing are in progress. IC technology is simultaneously moving towards 0.25/0.18 micron. The convergence of these two trends places unprecedented, stringent demands on processes and equipment. More than ever, computational modeling is called upon to play a complementary role in equipment and process design. The pace in hardware/process development needs a matching pace in software development: an aggressive move towards developing "virtual reactors" is desirable and essential to reduce design cycle and costs. This goal has three elements: a reactor-scale model, a feature-level model, and a database of physical/chemical properties. With these elements coupled, the complete model should function as a design aid in a CAD environment. This talk aims to describe these elements. At the reactor level, continuum, DSMC (or particle) and hybrid models will be discussed and compared using examples of plasma and thermal process simulations. In microtopography evolution, approaches such as level set methods compete with conventional geometric models. Regardless of the approach, the reliance on empiricism is to be eliminated through coupling to the reactor model and computational surface science. This coupling poses challenging issues of orders-of-magnitude variation in length and time scales. Finally, database development has fallen behind; the current situation is rapidly aggravated by the ever newer chemistries emerging to meet process metrics. The virtual reactor would be a useless concept without an accompanying reliable database that consists of: thermal reaction pathways and rate constants, electron-molecule cross sections, thermochemical properties, transport properties, and finally, surface data on the interaction of radicals, atoms and ions with various surfaces. Large-scale computational chemistry efforts are critical as experiments alone cannot meet database needs due to the difficulties associated with such

  11. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  12. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  13. Photopolarimetry of scattering surfaces and their interpretation by computer model

    NASA Technical Reports Server (NTRS)

    Wolff, M.

    1979-01-01

    Wolff's computer model of a rough planetary surface was simplified and revised. Close adherence to the actual geometry of a pitted surface and the inclusion of a function for diffuse light resulted in a quantitative model comparable to observations of planetary satellites and asteroids. A function is also derived to describe diffuse light emitted from a particulate surface. The function is in terms of the indices of refraction of the surface material, particle size, and viewing angles. Computer-generated plots describe the observable and theoretical light components for the Moon, Mercury, Mars and a spectrum of asteroids. Other plots describe the effects of changing surface material properties. Mathematical results are generated to relate the parameters of the negative polarization branch to the properties of surface pitting. An explanation is offered for the polarization of the rings of Saturn, and the average diameter of ring objects is found to be 30 to 40 centimeters.

  14. Computational models of performance monitoring and cognitive control

    PubMed Central

    Alexander, William H.; Brown, Joshua W.

    2011-01-01

    The medial prefrontal cortex (mPFC) has been the subject of intense interest as a locus of cognitive control. Several computational models have been proposed to account for a range of effects including error detection, conflict monitoring, error likelihood prediction, and numerous other effects observed with single-unit neurophysiology, fMRI, and lesion studies. Here we review the state of computational models of cognitive control and offer a new theoretical synthesis of the mPFC as signaling response-outcome predictions. This new synthesis has two interacting components. The first component learns to predict the various possible outcomes of a planned action, and the second component detects discrepancies between the actual and intended responses; the detected discrepancies in turn update the outcome predictions. This single construct is consistent with a wide array of performance monitoring effects in mPFC and suggests a unifying account of the cognitive role of medial PFC in performance monitoring. PMID:21359126

  15. Computational modelling of cohesive cracks in material structures

    NASA Astrophysics Data System (ADS)

    Vala, J.; Jarošová, P.

    2016-06-01

    Analysis of crack formation, considered as the creation of new surfaces in a material sample due to its microstructure, leads to nontrivial physical, mathematical and computational difficulties even in the rather simple case of quasistatic cohesive zone modelling inside the linear elastic theory. However, quantitative results from such evaluations are required in practice for the development and design of advanced materials, structures and technologies. Although most available software tools apply ad hoc computational predictions, this paper presents the proper formulation of such a model problem, including its verification, and sketches the multi-scale construction of finite-dimensional approximations of solutions, utilizing the finite element or similar techniques, together with references to original simulation results from engineering practice.

  16. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  17. Advanced computer modeling techniques expand belt conveyor technology

    SciTech Connect

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  18. A computational model for doctoring fluid films in gravure printing

    NASA Astrophysics Data System (ADS)

    Hariprasad, Daniel S.; Grau, Gerd; Schunk, P. Randall; Tjiptowidjojo, Kristianto

    2016-04-01

    The wiping, or doctoring, process in gravure printing presents a fundamental barrier to resolving the micron-sized features desired in printed electronics applications. This barrier starts with the residual fluid film left behind after wiping, and its importance grows as feature sizes are reduced, especially as the feature size approaches the thickness of the residual fluid film. In this work, various mechanical complexities are considered in a computational model developed to predict the residual fluid film thickness. Lubrication models alone are inadequate, and deformation of the doctor blade body together with elastohydrodynamic lubrication must be considered to make the model predictive of experimental trends. Moreover, model results demonstrate that the particular form of the wetted region of the blade has a significant impact on the model's ability to reproduce experimental measurements.

  19. Visual computing model for immune system and medical system.

    PubMed

    Gong, Tao; Cao, Xinxue; Xiong, Qin

    2015-01-01

    The natural immune system is an intelligent, self-organizing, adaptive system comprising a variety of immune cells with different types of immune mechanisms. The mutual cooperation between the immune cells shows the intelligence of this immune system, and modeling this immune system has an important significance in medical science and engineering. In order to build a model of this immune system that is easier to understand through visualization than a traditional mathematical model, a visual computing model of the immune system is proposed in this paper and used to design a medical system based on it. Some visual simulations of the immune system were made to test the visual effect. The experimental results of the simulations show that the visual modeling approach provides a more effective way of analyzing this immune system than traditional mathematical equations alone.

  20. Computational Modeling of Auxin: A Foundation for Plant Engineering.

    PubMed

    Morales-Tapia, Alejandro; Cruz-Ramírez, Alfredo

    2016-01-01

    Since the development of agriculture, humans have relied on the cultivation of plants to satisfy our increasing demand for food, natural products, and other raw materials. As we understand more about plant development, we can better manipulate plants to fulfill our particular needs. Auxins are a class of simple metabolites that coordinate many developmental activities like growth and the appearance of functional structures in plants. Computational modeling of auxin has proven to be an excellent tool in elucidating many mechanisms that underlie these developmental events. Due to the complexity of these mechanisms, current modeling efforts are concerned only with single phenomena focused on narrow spatial and developmental contexts; but a general model of plant development could be assembled by integrating the insights from all of them. In this perspective, we summarize the current collection of auxin-driven computational models, focusing on how they could come together into a single model for plant development. A model of this nature would allow researchers to test hypotheses in silico and yield accurate predictions about the behavior of a plant under a given set of physical and biochemical constraints. It would also provide a solid foundation toward the establishment of plant engineering, a proposed discipline intended to enable the design and production of plants that exhibit an arbitrarily defined set of features.

  1. A computational language approach to modeling prose recall in schizophrenia.

    PubMed

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall have the potential to provide means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves accuracy of group membership prediction slightly above the semantic feature alone as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
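
    The semantic-scoring idea can be sketched with standard tools: project the source passage and each recall into a reduced latent semantic space and score by cosine similarity. The passage and recalls below are invented, and a real system would build the space from a large training corpus rather than from three sentences:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        # Invented source passage and recalls of varying quality
        source = "the old farmer locked the barn before the storm reached the valley"
        recalls = [
            "a farmer shut his barn as a storm came into the valley",   # good recall
            "someone mentioned weather and a building somewhere",       # degraded recall
        ]

        # TF-IDF followed by truncated SVD gives an LSA-style reduced space
        tfidf = TfidfVectorizer().fit_transform([source] + recalls)
        lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

        # Score each recall against the source by cosine similarity
        for text, vec in zip(recalls, lsa[1:]):
            print(f"{cosine_similarity([lsa[0]], [vec])[0, 0]:.2f}  {text}")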

  2. Onshore wind farm modelling using high performance computing

    NASA Astrophysics Data System (ADS)

    Folch, A.; Avila, M.; Houzeaux, G.; Egukitza, B.; Prieto, L.

    2013-12-01

    Numerical modelling has become a basic tool for the design and exploitation of wind energy resources. The Barcelona Supercomputing Center, the national supercomputing facility in Spain, and Iberdrola Renovables S.A., an industry world leader in the sector, are working in close cooperation on the development of a Computational Fluid Dynamics (CFD) wind farm modelling strategy in the framework of high-performance computing. The first objective is to optimize the design of wind farms using a k-ε turbulence model specially designed for atmospheric flow in complex terrain. An actuator disk model is used to account for the wake effects of wind turbines that, for optimization of farm design, are placed using a chimera method. The model has been implemented in Alya, a multi-physics parallel code specially designed for massively parallel execution. The second objective, still under development, is the short-term forecast of wind farm production via a nest-down from mesoscale numerical weather prediction models to CFD in order to obtain high-resolution hourly wind fields.
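
    The actuator disk treatment mentioned above rests on one-dimensional momentum theory, which is compact enough to sketch. The turbine numbers below are invented, not taken from the farms modelled here:

        import numpy as np

        # One-dimensional momentum theory for an actuator disk (invented numbers)
        rho, D, Ct, u_inf = 1.225, 100.0, 0.8, 9.0   # air density [kg/m3], rotor
                                                     # diameter [m], thrust coeff.,
                                                     # free-stream wind speed [m/s]
        A = np.pi * (D / 2.0) ** 2                   # rotor disk area [m2]

        T = 0.5 * rho * A * Ct * u_inf ** 2          # thrust on the flow [N]
        a = 0.5 * (1.0 - np.sqrt(1.0 - Ct))          # axial induction from Ct = 4a(1-a)
        u_wake = u_inf * (1.0 - 2.0 * a)             # far-wake velocity, no mixing

        print(f"T = {T / 1e3:.0f} kN, a = {a:.3f}, far-wake speed = {u_wake:.2f} m/s")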

  3. Computational Modeling of Auxin: A Foundation for Plant Engineering

    PubMed Central

    Morales-Tapia, Alejandro; Cruz-Ramírez, Alfredo

    2016-01-01

    Since the development of agriculture, humans have relied on the cultivation of plants to satisfy our increasing demand for food, natural products, and other raw materials. As we understand more about plant development, we can better manipulate plants to fulfill our particular needs. Auxins are a class of simple metabolites that coordinate many developmental activities like growth and the appearance of functional structures in plants. Computational modeling of auxin has proven to be an excellent tool in elucidating many mechanisms that underlie these developmental events. Due to the complexity of these mechanisms, current modeling efforts are concerned only with single phenomena focused on narrow spatial and developmental contexts; but a general model of plant development could be assembled by integrating the insights from all of them. In this perspective, we summarize the current collection of auxin-driven computational models, focusing on how they could come together into a single model for plant development. A model of this nature would allow researchers to test hypotheses in silico and yield accurate predictions about the behavior of a plant under a given set of physical and biochemical constraints. It would also provide a solid foundation toward the establishment of plant engineering, a proposed discipline intended to enable the design and production of plants that exhibit an arbitrarily defined set of features. PMID:28066453

  4. Using computational modeling to predict arrhythmogenesis and antiarrhythmic therapy

    PubMed Central

    Moreno, Jonathan D.; Clancy, Colleen E.

    2010-01-01

    The use of computational modeling to predict arrhythmia and arrhythmogenesis is a relatively new field, but it has nonetheless dramatically enhanced our understanding of the physiological and pathophysiological mechanisms that lead to arrhythmia. This review summarizes recent advances in computational modeling approaches, with a brief review of the evolution of cellular action potential models and the incorporation of genetic mutations to understand fundamental arrhythmia mechanisms, including how simulations have revealed situation-specific mechanisms leading to multiple phenotypes for the same genotype. The review then focuses on modeling drug blockade to understand the less-than-intuitive effects some drugs have, either ameliorating or paradoxically exacerbating arrhythmia. Quantification of specific arrhythmia indices is discussed at each spatial scale, from channel to tissue. The utility of hERG modeling to assess altered repolarization in response to drug blockade is also briefly discussed. Finally, insights gained from Ca2+ dynamical modeling and EC coupling, neurohumoral regulation of cardiac dynamics, and cell signaling pathways are also reviewed. PMID:20652086
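
    The simplest form of the drug-blockade modeling discussed here scales a channel conductance by the unblocked fraction from a Hill equation. A minimal sketch, with illustrative IC50 and conductance values rather than ones from the review:

        import numpy as np

        # Pore-block model: the fraction of unblocked channels follows a Hill
        # equation in the drug concentration; the channel conductance is scaled
        # accordingly. IC50, Hill coefficient, and g_Kr are invented.
        def unblocked_fraction(drug_conc, ic50, hill=1.0):
            return 1.0 / (1.0 + (drug_conc / ic50) ** hill)

        g_Kr = 0.024                        # nominal IKr (hERG) conductance, assumed
        for conc in [0.0, 0.1, 1.0, 10.0]:  # drug concentration [uM]
            g = g_Kr * unblocked_fraction(conc, ic50=1.0)
            print(f"[D] = {conc:5.1f} uM -> g_Kr = {g:.4f}")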

  5. An overview of recent applications of computational modelling in neonatology

    PubMed Central

    Wrobel, Luiz C.; Ginalski, Maciej K.; Nowak, Andrzej J.; Ingham, Derek B.; Fic, Anna M.

    2010-01-01

    This paper reviews some of our recent applications of computational fluid dynamics (CFD) to model heat and mass transfer problems in neonatology and investigates the major heat and mass-transfer mechanisms taking place in medical devices, such as incubators, radiant warmers and oxygen hoods. It is shown that CFD simulations are very flexible tools that can take into account all modes of heat transfer in assisting neonatal care and improving the design of medical devices. PMID:20439275

  6. Modeling of supersonic combustor flows using parallel computing

    NASA Technical Reports Server (NTRS)

    Riggins, D.; Underwood, M.; Mcmillin, B.; Reeves, L.; Lu, E. J.-L.

    1992-01-01

    While current 3D CFD codes and modeling techniques have been shown capable of furnishing engineering data for complex scramjet flowfields, the usefulness of such efforts is primarily limited by solutions' CPU time requirements, and secondarily by memory requirements. Attention is presently given to the use of parallel computing capabilities for engineering CFD tools for the analysis of supersonic reacting flows, and to an illustrative incompressible CFD problem using up to 16 iPSC/2 processors with single-domain decomposition.

  7. Probing Cosmic Infrared Sources: A Computer Modeling Approach

    DTIC Science & Technology

    1992-06-01

    Bits and Bytes.", NASA Goddard Space Right Center, Green Belt , Maryland (May 1990). 11. "Probing Infrared Sources by Computer Modeling.", review talk...is this expected from elementary theory, but many observations can be well described with this assumption ( Kuiper 1967; Kwok 1980; Sopka et al. 1985...Neugebauer, H. J. Habing, P. E. Clegg, & T. J. Chester (Washington: GPO) Kuiper , T. B. H. et al. 1976, Api, 204, 408 Kwan, J., Scoville, N. 1976, Api, 209, 102

  8. Modeling vision: computational science for understanding human visual perception.

    PubMed

    Mrowka, Ralf; Freytag, Alexander; Reuter, Stefanie

    2017-03-25

    The human visual perception system is complex and involves a considerable portion of the brain's cortex. Hence, the wish to understand complex neuronal function is obvious, and the idea of modeling it by means of artificial neuronal networks may have been born at the time when the first computational machines were constructed (Alan Turing, Intelligent machinery, 1948, http://www.npl.co.uk/about/history/notable-individuals/turing/intelligent-machinery).

  9. Computer-Aided Process Model For Carbon/Phenolic Materials

    NASA Technical Reports Server (NTRS)

    Letson, Mischell A.; Bunker, Robert C.

    1996-01-01

    This computer program implements a thermochemical model of the processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving the fabrication of rocket-engine-nozzle parts, it is also used to optimize the fabrication of other structural components, and material-property parameters can be changed to apply it to other materials. It reduces costs by reducing the amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.

  10. 98. View of IBM digital computer model 7090 magnet core ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    98. View of IBM digital computer model 7090 magnetic core installation. ITT Arctic Services, Inc., Official photograph BMEWS Site II, Clear, AK, by unknown photographer, 17 September 1965. BMEWS, clear as negative no. A-6606. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  11. 97. View of International Business Machine (IBM) digital computer model ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    97. View of International Business Machine (IBM) digital computer model 7090 magnetic core installation, International Telephone and Telegraph (ITT) Arctic Services Inc., Official photograph BMEWS site II, Clear, AK, by unknown photographer, 17 September 1965, BMEWS, clear as negative no. A-6604. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  12. Computer tomography of flows external to test models

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and the analysis of real experimental data in the form of aerodynamic interferograms are discussed.

  13. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
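
    The core loop UCODE automates, a weighted least-squares objective minimized by a modified Gauss-Newton method with finite-difference sensitivities, can be sketched compactly. The "application model" below is a stand-in exponential decay, not a real preprocessor/postprocessor chain:

        import numpy as np

        # Stand-in application model; any black-box simulation could sit here
        def model(p, t):
            return p[0] * np.exp(-p[1] * t)

        t = np.linspace(0.0, 5.0, 20)
        obs = model([2.0, 0.7], t) + np.random.default_rng(2).normal(0, 0.02, t.size)
        w = np.full(t.size, 1.0 / 0.02**2)     # weights = 1 / observation variance

        p = np.array([1.0, 1.0])               # starting parameter values
        for _ in range(10):                    # Gauss-Newton iterations
            r = obs - model(p, t)
            J = np.empty((t.size, p.size))
            for j in range(p.size):            # forward-difference sensitivities
                dp = np.zeros_like(p)
                dp[j] = 1e-6 * max(abs(p[j]), 1.0)
                J[:, j] = (model(p + dp, t) - model(p, t)) / dp[j]
            W = np.diag(w)
            p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)

        print("estimated parameters:", p)      # should approach [2.0, 0.7]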

  14. A computational model for cancer growth by using complex networks

    NASA Astrophysics Data System (ADS)

    Galvão, Viviane; Miranda, José G. V.

    2008-09-01

    In this work we propose a computational model to investigate the proliferation of cancerous cells by using complex networks. In our model the network represents the structure of available space in the cancer propagation. The computational scheme considers a cancerous cell randomly included in the complex network. As the system evolves, the cells can assume three states: proliferative, non-proliferative, and necrotic. Our results were compared with experimental data obtained from three human lung carcinoma cell lines. The computational simulations show that the cancerous cells have a Gompertzian growth. Also, our model simulates the formation of necrosis, the increase of density, and resource diffusion to regions of lower nutrient concentration. We find that the cancer growth is very similar in random and small-world networks. On the other hand, the topological structure of the small-world network is more affected. The scale-free network has the largest rates of cancer growth due to hub formation. Finally, our results indicate that for different average degrees the rate of cancer growth is related to the available space in the network.
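
    The Gompertzian growth the simulations reproduce has a closed form, sketched below with invented parameters rather than values fitted to the carcinoma lines:

        import numpy as np

        # Gompertz growth: N(t) = K * exp(ln(N0 / K) * exp(-r t)),
        # with initial population N0, carrying capacity K, and rate r.
        def gompertz(t, n0=1e3, K=1e9, r=0.05):
            return K * np.exp(np.log(n0 / K) * np.exp(-r * t))

        for t in [0, 30, 90, 180, 365]:        # time in days (illustrative)
            print(f"day {t:3d}: {gompertz(t):.3e} cells")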

  15. Computer modeling of lime-soda softening of cooling waters

    SciTech Connect

    Chen, J.C.Y.

    1986-01-01

    A computer model is developed to fully describe the lime-soda ash softening process. This process has a long history of being used to remove calcium and magnesium hardness from cooling waters in order to prevent scaling on heat exchangers. Softening of makeup water and/or a sidestream from the recirculating water will allow a reduction in blowdown. In the extreme case, zero blowdown may be achieved to conserve cooling water and to save the costs of disposing of blowdown. Cooling waters differ from most natural waters in having a higher temperature and a higher concentration of dissolved solids, and, therefore, a higher ionic strength. These factors, plus the effects of complex formation, are taken into consideration in the development of the computer model. To determine the composition of a softened water, the model assumes that an equilibrium state is reached in a reactor and employs the equations of mass action and mass balance. The resulting nonlinear simultaneous equations are then linearized by Taylor series expansion and solved by the multidimensional Newton-Raphson method. The computer predictions are compared to the results of laboratory studies using synthetic waters.
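    The solution strategy named here (linearize the mass-action/mass-balance system, then iterate with multidimensional Newton-Raphson) looks roughly like the sketch below, shown on a two-species toy equilibrium; the constants and species are invented for illustration and are far simpler than the softening chemistry in the model:

    import numpy as np

    def newton(F, J, x, n_iter=100, tol=1e-12):
        # Multidimensional Newton-Raphson: solve J(x) dx = -F(x), update, repeat.
        for _ in range(n_iter):
            dx = np.linalg.solve(J(x), -F(x))
            x = x + dx
            if np.linalg.norm(dx) <= tol * (1.0 + np.linalg.norm(x)):
                break
        return x

    Ksp, d = 1e-8, 1e-4                           # toy equilibrium constants
    F = lambda x: np.array([x[0] * x[1] - Ksp,    # mass action
                            x[0] - x[1] - d])     # mass balance
    J = lambda x: np.array([[x[1], x[0]],
                            [1.0, -1.0]])
    ca, co3 = newton(F, J, np.array([1e-3, 1e-3]))

    In the full model the residual vector F and Jacobian J would also carry ionic-strength activity corrections and complex-formation terms.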

  16. Verification of computational models of cardiac electro-physiology.

    PubMed

    Pathmanathan, Pras; Gray, Richard A

    2014-05-01

    For computational models of cardiac activity to be used in safety-critical clinical decision-making, thorough and rigorous testing of the accuracy of predictions is required. The field of 'verification, validation and uncertainty quantification' has been developed to evaluate the credibility of computational predictions. The first stage, verification, is the evaluation of how well computational software correctly solves the underlying mathematical equations. The aim of this paper is to introduce novel methods for verifying multi-cellular electro-physiological solvers, a crucial first stage for solvers to be used with confidence in clinical applications. We define 1D-3D model problems with exact solutions for each of the monodomain, bidomain, and bidomain-with-perfusing-bath formulations of cardiac electro-physiology, which allow for the first time the testing of cardiac solvers against exact errors on fully coupled problems in all dimensions. These problems are carefully constructed so that they can be easily run using a general solver and can be used to greatly increase confidence that an implementation is correct, which we illustrate by testing one major solver, 'Chaste', on the problems. We then perform case studies on calculation verification (also known as solution verification) for two specific applications. We conclude by making several recommendations regarding verification in cardiac modelling.
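    The basic pattern of code verification described here, solving a problem with a known exact solution and confirming that the numerical error decays at the expected rate under grid refinement, can be illustrated on a 1D diffusion stand-in. This is not one of the paper's coupled monodomain/bidomain problems; the equation, scheme, and numbers are illustrative only:

    import numpy as np

    def max_error(nx, T=0.1, D=1.0):
        # Explicit finite differences for u_t = D u_xx on [0, 1], u(0)=u(1)=0.
        x = np.linspace(0.0, 1.0, nx)
        dx = x[1] - x[0]
        steps = int(np.ceil(T / (0.25 * dx**2 / D)))  # stable step count
        dt = T / steps
        u = np.sin(np.pi * x)                         # initial condition
        for _ in range(steps):
            u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        exact = np.exp(-D * np.pi**2 * T) * np.sin(np.pi * x)
        return np.abs(u - exact).max()                # max-norm error

    for nx in (17, 33, 65):
        print(nx, max_error(nx))  # error should fall roughly 4x per refinement

    Observing the expected second-order convergence is the evidence that the solver correctly solves its equations, which is exactly the kind of check the paper's model problems enable for full cardiac solvers.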

  17. A review of literature and computer models on exposure assessment.

    PubMed

    Butt, T. E.; Clark, M.; Coulon, F.; Oduyemi, K. O. K.

    2009-12-14

    At the present time, risk analysis is an effective management tool used by environmental managers to protect the environment from inevitable anthropogenic activities. There are generic elements in environmental risk assessments which are independent of the subject to which risk analysis is applied. Examples of these elements are: baseline study, hazard identification, hazards' concentration assessment and risk quantification. Another important example of such generic elements is exposure assessment, which is required in a risk analysis process for landfill leachate as it would be in any other environmental risk issue. Furthermore, computer models are also being developed to assist risk analysis in different fields. However, in the review of current computer models and literature, particularly regarding landfills, the authors have found no evidence for the existence of a holistic exposure assessment procedure underpinned with a computational method for landfill leachate. This paper, with reference to the relevant literature and models reviewed, discusses the extent to which exposure assessment is absent in landfill risk assessment approaches. The study also indicates a number of factors and features that should be added to the exposure assessment system in order to render it more strategic, thereby enhancing the quantitative risk analysis.

  18. Parallel computer processing and modeling: applications for the ICU

    NASA Astrophysics Data System (ADS)

    Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.

    2003-07-01

    Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high-order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software have been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near- and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.
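    The coordinator-plus-workers layout described above can be caricatured with Python's multiprocessing module; the queues stand in for the cluster's message passing, and the one-line "risk score" is a made-up placeholder, not the authors' sepsis model:

    import multiprocessing as mp

    def worker(in_q, out_q):
        # Each worker node scores incoming vitals; None is the shutdown signal.
        for patient_id, vitals in iter(in_q.get, None):
            score = 0.6 * vitals["hr"] / 100.0 + 0.4 * vitals["temp"] / 38.0
            out_q.put((patient_id, round(score, 2)))

    if __name__ == "__main__":
        in_q, out_q = mp.Queue(), mp.Queue()
        nodes = [mp.Process(target=worker, args=(in_q, out_q)) for _ in range(2)]
        for n in nodes:
            n.start()
        readings = [(1, {"hr": 95, "temp": 38.5}), (2, {"hr": 70, "temp": 36.8})]
        for item in readings:        # the coordinating node distributes data
            in_q.put(item)
        for _ in readings:
            print(out_q.get())       # ...and gathers the projected scores
        for _ in nodes:
            in_q.put(None)           # shut the workers down
        for n in nodes:
            n.join()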

  19. Surface Modeling, Solid Modeling and Finite Element Modeling. Analysis Capabilities of Computer-Assisted Design and Manufacturing Systems.

    ERIC Educational Resources Information Center

    Nee, John G.; Kare, Audhut P.

    1987-01-01

    Explores several concepts in computer-assisted design/computer-assisted manufacturing (CAD/CAM). Defines, evaluates, reviews, and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)

  20. Towards a computational model of actor-based language comprehension.

    PubMed

    Alday, Phillip M; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina

    2014-01-01

    Neurophysiological data from a range of typologically diverse languages provide evidence for a cross-linguistically valid, actor-based strategy of understanding sentence-level meaning. This strategy seeks to identify the participant primarily responsible for the state of affairs (the actor) as quickly and unambiguously as possible, thus resulting in competition for the actor role when there are multiple candidates. Due to its applicability across languages with vastly different characteristics, we have proposed that the actor strategy may derive from more basic cognitive or neurobiological organizational principles, though it is also shaped by distributional properties of the linguistic input (e.g. the morphosyntactic coding strategies for actors in a given language). Here, we describe an initial computational model of the actor strategy and how it interacts with language-specific properties. Specifically, we contrast two distance metrics derived from the output of the computational model (one weighted and one unweighted) as potential measures of the degree of competition for actorhood by testing how well they predict modulations of electrophysiological activity engendered by language processing. To this end, we present an EEG study on word order processing in German and use linear mixed-effects models to assess the effect of the various distance metrics. Our results show that a weighted metric, which takes into account the weighting of an actor-identifying feature in the language under consideration, outperforms an unweighted distance measure. We conclude that actor competition effects cannot be reduced to feature overlap between multiple sentence participants and thereby to the notion of similarity-based interference, which is prominent in current memory-based models of language processing. Finally, we argue that, in addition to illuminating the underlying neurocognitive mechanisms of actor competition, the present model can form the basis for a more comprehensive
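    To make the contrast between the two metrics concrete, a hypothetical sketch might compare two candidate arguments on a handful of actor-identifying features; the features, values, and weights below are invented for illustration and do not come from the authors' model:

    import numpy as np

    features = ["animacy", "case", "agreement"]   # actor-identifying cues
    cand1 = np.array([1.0, 1.0, 0.0])             # e.g. animate, case-marked
    cand2 = np.array([1.0, 0.0, 0.0])

    unweighted = np.abs(cand1 - cand2).sum()      # plain feature difference
    w = np.array([0.2, 0.6, 0.2])                 # language-specific weights
    weighted = np.abs(w * (cand1 - cand2)).sum()  # small distance = strong competition

    In a language where case dominates actor identification, the weighted metric keeps the candidates well separated even when most features overlap, which is the behavior the EEG results favor.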