A computational model of selection by consequences.
McDowell, J J
2004-05-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior.
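The hyperbolic relation referred to above is conventionally written as Herrnstein's equation, B = kR/(R + R_e), where B is response rate, R is reinforcement rate, k is the asymptotic response rate, and R_e governs how quickly that asymptote is approached. A minimal sketch of fitting such a hyperbola to rate data follows; the rates below are invented for illustration and are not taken from the model.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(R, k, Re):
    """Herrnstein's hyperbola: predicted response rate for reinforcement rate R."""
    return k * R / (R + Re)

# Illustrative data: reinforcement rates and observed response rates (per hour).
R = np.array([8.0, 20.0, 60.0, 120.0, 300.0])
B = np.array([30.0, 55.0, 85.0, 95.0, 105.0])

(k_hat, Re_hat), _ = curve_fit(hyperbola, R, B, p0=(100.0, 50.0))
print(f"k = {k_hat:.1f} responses/hr, Re = {Re_hat:.1f} reinforcers/hr")
```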
ERIC Educational Resources Information Center
Nosik, Melissa R.; Williams, W. Larry; Garrido, Natalia; Lee, Sarah
2013-01-01
In the current study, behavior skills training (BST) is compared to a computer based training package for teaching discrete trial instruction to staff, teaching an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following…
Consistent improvements in processor speed and computer access have substantially increased the use of computer modeling by experts and non-experts alike. Several new computer modeling packages operating under graphical operating systems (i.e. Microsoft Windows or Macintosh) m...
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
Hybrid Computation at Louisiana State University.
ERIC Educational Resources Information Center
Corripio, Armando B.
Hybrid computation facilities have been in operation at Louisiana State University since the spring of 1969. In part, they consist of an Electronics Associates, Inc. (EAI) Model 680 analog computer, an EAI Model 693 interface, and a Xerox Data Systems (XDS) Sigma 5 digital computer. The hybrid laboratory is used in a course on hybrid computation…
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.
1998-01-01
The use of response surface models and kriging models is compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.
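As a toy illustration of the two surrogate types being compared, the sketch below fits a second-order polynomial response surface and a kriging model (constant global model with a Gaussian correlation function, here scikit-learn's ConstantKernel × RBF) to samples of a cheap stand-in function; the function, sample sites, and prediction point are invented and unrelated to the aerospike nozzle analyses.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_analysis(x):
    """Cheap stand-in for a deterministic computer analysis (illustrative only)."""
    return np.sin(3.0 * x) + 0.5 * x

x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_analysis(x_train)

# Second-order polynomial response surface, fitted by least squares.
coeffs = P.polyfit(x_train, y_train, deg=2)
rs_pred = P.polyval(0.7, coeffs)

# Kriging surrogate: constant underlying global model + Gaussian (RBF) correlation.
gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(length_scale=0.5))
gp.fit(x_train.reshape(-1, 1), y_train)
krig_pred = gp.predict(np.array([[0.7]]))[0]

print(f"truth={expensive_analysis(0.7):.3f}  response surface={rs_pred:.3f}  kriging={krig_pred:.3f}")
```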
A computational model of selection by consequences: log survivor plots.
Kulubekova, Saule; McDowell, J J
2008-06-01
[McDowell, J.J, 2004. A computational model of selection by consequences. J. Exp. Anal. Behav. 81, 297-317] instantiated the principle of selection by consequences in a virtual organism with an evolving repertoire of possible behaviors undergoing selection, reproduction, and mutation over many generations. The process is based on the computational approach, which is non-deterministic and rules-based. The model proposes a causal account for operant behavior. McDowell found that the virtual organism consistently showed a hyperbolic relationship between response and reinforcement rates according to the quantitative law of effect. To continue validation of the computational model, the present study examined its behavior on the molecular level by comparing the virtual organism's IRT distributions in the form of log survivor plots to findings from live organisms. Log survivor plots did not show the "broken-stick" feature indicative of distinct bouts and pauses in responding, although the bend in slope of the plots became more defined at low reinforcement rates. The shape of the virtual organism's log survivor plots was more consistent with the data on reinforced responding in pigeons. These results suggest that log survivor plot patterns of the virtual organism were generally consistent with the findings from live organisms providing further support for the computational model of selection by consequences as a viable account of operant behavior.
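For readers unfamiliar with the molecular-level measure used here, a log survivor plot shows, on a logarithmic axis, the proportion of inter-response times (IRTs) longer than each time t; a "broken-stick" shape (two distinct slopes) is taken as evidence of responding organized into bouts separated by pauses. A minimal sketch of constructing such a plot from a list of IRTs follows; the synthetic, exponentially distributed IRTs are placeholder data only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder inter-response times (seconds); real analyses use session data.
irts = np.random.exponential(scale=2.0, size=500)

t = np.sort(irts)
# Survivor function: proportion of IRTs longer than each t.
survivor = 1.0 - np.arange(1, len(t) + 1) / len(t)

plt.semilogy(t[:-1], survivor[:-1])   # drop the last point, where the survivor function is 0
plt.xlabel("IRT (s)")
plt.ylabel("Proportion of IRTs > t (log scale)")
plt.title("Log survivor plot")
plt.show()
```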
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.
1993-01-01
The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show significant computational advantage over those obtained by DD for some cases.
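As a minimal illustration of the distinction being drawn, the sketch below differentiates a toy scalar function both by forward-mode automatic differentiation (implemented with dual numbers, one simple way AD works internally) and by divided differences; the function is invented and has nothing to do with the flow solver or turbulence-model parameters discussed above.

```python
class Dual:
    """Minimal forward-mode AD: carries a value and its derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # toy stand-in for a solver output

x0 = 1.5
exact = f(Dual(x0, 1.0)).dot                   # automatic differentiation
h = 1e-6
divided = (f(x0 + h) - f(x0 - h)) / (2 * h)    # central divided difference
print(exact, divided)                          # 11.0 vs approximately 11.0
```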
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z.; Bessa, M. A.; Liu, W.K.
A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters. The interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speed-up estimated to be about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids. This shows that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
Williams, Kent E; Voigt, Jeffrey R
2004-01-01
The research reported herein presents the results of an empirical evaluation that focused on the accuracy and reliability of cognitive models created using a computerized tool: the cognitive analysis tool for human-computer interaction (CAT-HCI). A sample of participants, expert in interacting with a newly developed tactical display for the U.S. Army's Bradley Fighting Vehicle, individually modeled their knowledge of 4 specific tasks employing the CAT-HCI tool. Measures of the accuracy and consistency of task models created by these task domain experts using the tool were compared with task models created by a double expert. The findings indicated a high degree of consistency and accuracy between the different "single experts" in the task domain in terms of the resultant models generated using the tool. Actual or potential applications of this research include assessing human-computer interaction complexity, determining the productivity of human-computer interfaces, and analyzing an interface design to determine whether methods can be automated.
A Computational Model of Selection by Consequences
ERIC Educational Resources Information Center
McDowell, J. J.
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…
An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.
ERIC Educational Resources Information Center
Skakun, Ernest N.; Hakstian, A. Ralph
Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…
Application of Local Discretization Methods in the NASA Finite-Volume General Circulation Model
NASA Technical Reports Server (NTRS)
Yeh, Kao-San; Lin, Shian-Jiann; Rood, Richard B.
2002-01-01
We present the basic ideas of the dynamics system of the finite-volume General Circulation Model developed at NASA Goddard Space Flight Center for climate simulations and other applications in meteorology. The dynamics of this model is designed with emphases on conservative and monotonic transport, where the property of Lagrangian conservation is used to maintain the physical consistency of the computational fluid for long-term simulations. As the model benefits from the noise-free solutions of monotonic finite-volume transport schemes, the property of Lagrangian conservation also partly compensates the accuracy of transport for the diffusion effects due to the treatment of monotonicity. By faithfully maintaining the fundamental laws of physics during the computation, this model is able to achieve sufficient accuracy for the global consistency of climate processes. Because the computing algorithms are based on local memory, this model has the advantage of efficiency in parallel computation with distributed memory. Further research is yet desirable to reduce the diffusion effects of monotonic transport for better accuracy, and to mitigate the limitation due to fast-moving gravity waves for better efficiency.
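The two properties emphasized here, conservation and monotonic transport, can be seen in a toy one-dimensional finite-volume advection step: cell averages are updated from face fluxes (so the total is conserved exactly), and a minmod slope limiter keeps the scheme from creating new extrema, at the cost of some extra numerical diffusion. This is a generic sketch, not the scheme actually used in the finite-volume GCM.

```python
import numpy as np

def minmod(a, b):
    """Monotonic slope limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def advect_step(q, u, dx, dt):
    """One conservative finite-volume step for dq/dt + u*dq/dx = 0 (u > 0, periodic)."""
    slope = minmod(q - np.roll(q, 1), np.roll(q, -1) - q) / dx
    # Limited reconstruction of the value carried through each cell's right face.
    q_face = q + 0.5 * dx * (1.0 - u * dt / dx) * slope
    flux = u * q_face
    return q - dt / dx * (flux - np.roll(flux, 1))

nx = 100
u, dx = 1.0, 1.0 / nx
dt = 0.5 * dx / u
q = np.where(np.abs(np.linspace(0.0, 1.0, nx) - 0.5) < 0.1, 1.0, 0.0)  # square pulse
total0 = q.sum()
for _ in range(200):
    q = advect_step(q, u, dx, dt)
print("mass conserved:", np.isclose(q.sum(), total0), "| new extrema:", q.max() > 1.0 + 1e-12)
```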
Nosik, Melissa R; Williams, W Larry; Garrido, Natalia; Lee, Sarah
2013-01-01
In the current study, behavior skills training (BST) is compared to a computer based training package for teaching discrete trial instruction to staff, teaching an adult with autism. The computer based training package consisted of instructions, video modeling and feedback. BST consisted of instructions, modeling, rehearsal and feedback. Following training, participants were evaluated in terms of their accuracy on completing critical skills for running a discrete trial program. Six participants completed training; three received behavior skills training and three received the computer based training. Participants in the BST group performed better overall after training and during six week probes than those in the computer based training group. There were differences across both groups between research assistant and natural environment competency levels. Copyright © 2012 Elsevier Ltd. All rights reserved.
Computational Models of Relational Processes in Cognitive Development
ERIC Educational Resources Information Center
Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven
2012-01-01
Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…
Computational Toxicology at the US EPA
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in America’s air, water, and hazardous-waste sites. The ORD Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the EPA Science to Achieve Results (STAR) program. Key intramural projects of the CTRP include digitizing legacy toxicity testing information toxicity reference database (ToxRefDB), predicting toxicity (ToxCast™) and exposure (ExpoCast™), and creating virtual liver (v-Liver™) and virtual embryo (v-Embryo™) systems models. The models and underlying data are being made publicly available t
ERIC Educational Resources Information Center
Berkant, Hasan Güner
2016-01-01
This study investigates faculty of education students' computer self-efficacy beliefs and their attitudes towards computers and implementing computer supported education. This study is descriptive and based on a correlational survey model. The final sample consisted of 414 students studying in the faculty of education of a Turkish university. The…
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
A simulation model for wind energy storage systems. Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Warren, A. W.; Edsinger, R. W.; Chan, Y. K.
1977-01-01
A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) was developed. The level of detail of Simulation Model for Wind Energy Storage (SIMWEST) is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source storage application systems, from user specifications using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of systems dynamics, and the iteration for conveyance of variables. SIMWEST program, as described, runs on the UNIVAC 1100 series computers.
Computers in Communications and Education at Coast Community College District.
ERIC Educational Resources Information Center
Luskin, Bernard J.; Ruth, Monty W.
Coast Community College District in Orange County, California is a leader among community colleges in the instructional use of computers. The district's hardware consists of an IBM System 370 Model 155 computer, over 80 typewriter terminals, 12 cathode ray tubes (CRT), and several microfiche image projection devices. Better than 700 computer-assisted…
Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.
ERIC Educational Resources Information Center
Skopec, Eric Wm.
Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…
NASA Astrophysics Data System (ADS)
Denis, C.; Ibrahim, A.
Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and figures of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm for estimating the geometric flattening of any equidense surface identified by its fractional radius is developed. The program can also be applied in studies of planetary and stellar models.
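As an illustration of the kind of integral such a code evaluates, the sketch below computes the mass and moment of inertia of a single-layer, spherically symmetric body with a polynomial density profile using Gauss-Legendre quadrature, from M = ∫ 4πr²ρ(r) dr and I = ∫ (8π/3) r⁴ρ(r) dr; the polynomial coefficients are invented and do not correspond to any published Earth model.

```python
import numpy as np

R = 6.371e6                                   # planetary radius, m (Earth-like)
coeffs = [13000.0, 0.0, -10000.0]             # rho(x) = c0 + c1*x + c2*x**2, x = r/R, in kg/m^3

def rho(r):
    return np.polynomial.polynomial.polyval(r / R, coeffs)

# Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, R].
nodes, weights = np.polynomial.legendre.leggauss(16)
r = 0.5 * R * (nodes + 1.0)
w = 0.5 * R * weights

mass    = np.sum(w * 4.0 * np.pi * r**2 * rho(r))
inertia = np.sum(w * (8.0 * np.pi / 3.0) * r**4 * rho(r))
print(f"M = {mass:.3e} kg,  I/(M R^2) = {inertia / (mass * R**2):.3f}")
```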
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iavarone, Salvatore; Smith, Sean T.; Smith, Philip J.
Oxy-coal combustion is an emerging low-cost “clean coal” technology for emissions reduction and Carbon Capture and Sequestration (CCS). The use of Computational Fluid Dynamics (CFD) tools is crucial for the development of cost-effective oxy-fuel technologies and the minimization of environmental concerns at industrial scale. The coupling of detailed chemistry models and CFD simulations is still challenging, especially for large-scale plants, because of the high computational efforts required. The development of scale-bridging models is therefore necessary, to find a good compromise between computational efforts and the physical-chemical modeling precision. This paper presents a procedure for scale-bridging modeling of coal devolatilization, in the presence of experimental error, that puts emphasis on the thermodynamic aspect of devolatilization, namely the final volatile yield of coal, rather than kinetics. The procedure consists of an engineering approach based on dataset consistency and Bayesian methodology including Gaussian-Process Regression (GPR). Experimental data from devolatilization tests carried out in an oxy-coal entrained flow reactor were considered and CFD simulations of the reactor were performed. Jointly evaluating experiments and simulations, a novel yield model was validated against the data via consistency analysis. In parallel, a Gaussian-Process Regression was performed, to improve the understanding of the uncertainty associated to the devolatilization, based on the experimental measurements. Potential model forms that could predict yield during devolatilization were obtained. The set of model forms obtained via GPR includes the yield model that was proven to be consistent with the data. Finally, the overall procedure has resulted in a novel yield model for coal devolatilization and in a valuable evaluation of uncertainty in the data, in the model form, and in the model parameters.
Efficient calibration for imperfect computer models
Tuo, Rui; Wu, C. F. Jeff
2015-12-01
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of the ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Here, numerical examples show that the proposed method outperforms the existing ones.
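A toy sketch of the contrast drawn here: ordinary least squares fits the (imperfect) computer model directly to the noisy physical observations, while an L2-type calibration first builds a nonparametric estimate of the physical process and then minimizes the L2 distance between that estimate and the computer model. The model, data-generating process, and the simple moving-average smoother below are all invented stand-ins, not the constructions used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def computer_model(x, theta):
    """Imperfect computer model of the physical process (illustrative)."""
    return theta * x

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * x) + rng.normal(scale=0.05, size=x.size)   # physical observations

# Ordinary least squares: fit the computer model to the raw observations.
ols = minimize_scalar(lambda t: np.mean((y - computer_model(x, t)) ** 2)).x

# L2-type calibration: smooth the data first (moving average as a stand-in for a
# nonparametric regression), then minimize the discretized L2 distance to the model.
y_smooth = np.convolve(y, np.ones(5) / 5.0, mode="same")
l2 = minimize_scalar(
    lambda t: np.sum((y_smooth - computer_model(x, t)) ** 2) * (x[1] - x[0])
).x

print(f"OLS theta = {ols:.3f},  L2 theta = {l2:.3f}")
```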
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David; Agarwal, Deborah A.; Sun, Xin
2011-09-01
The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.
Instructional Variables in Meaningful Learning of Computer Programming.
ERIC Educational Resources Information Center
Mayer, Richard E.
Some 120 undergraduate students participated in experiments to learn how novice computer programers learn to interact with the computer. Two instructional booklets were used: A "rule" booklet consisted of definitions and examples of seven modified FORTRAN statements and appropriate grammar rules; the "model" booklet was…
Development and Verification of Body Armor Target Geometry Created Using Computed Tomography Scans
2017-07-13
...modeling consisted of manual measurement of armor systems and translating those measurements to computer-aided design geometry, which can be tedious and...computer-aided design (CAD) human geometry model (referred to throughout as ORCA man) that is used in the Operational Requirement-based Casualty Assessment
NASA Technical Reports Server (NTRS)
Palusinski, O. A.; Allgyer, T. T.; Mosher, R. A.; Bier, M.; Saville, D. A.
1981-01-01
A mathematical model of isoelectric focusing at the steady state has been developed for an M-component system of electrochemically defined ampholytes. The model is formulated from fundamental principles describing the components' chemical equilibria, mass transfer resulting from diffusion and electromigration, and electroneutrality. The model consists of ordinary differential equations coupled with a system of algebraic equations. The model is implemented on a digital computer using FORTRAN-based simulation software. Computer simulation data are presented for several two-component systems showing the effects of varying the isoelectric points and dissociation constants of the constituents.
The role of lithospheric stress in the support of the Tharsis rise
NASA Technical Reports Server (NTRS)
Willemann, R. J.; Turcotte, D. L.
1982-01-01
It is hypothesized that the Tharsis rise can be approximated as an axisymmetrical igneous construct. Linear theory for the deflection of planetary lithospheres is used to demonstrate that the lithospheric stresses required partially to support the construct are reasonable and consistent with the observed radial grabens around Tharsis. The computed thickness of the elastic lithosphere is between 110 and 260 km, depending on the values assumed for crustal thickness and crustal density. The computed thickness of the Tharsis load ranges from 40 to 70 km. Since in this model the height of the geoid is not specified a priori, the agreement between the observed and computed geoid is evidence for the validity of the model. The tectonics of the Tharsis region are briefly reviewed, and it is contended that all observations are consistent with the loading model.
Saliency image of feature building for image quality assessment
NASA Astrophysics Data System (ADS)
Ju, Xinuo; Sun, Jiyin; Wang, Peng
2011-11-01
The purpose and method of image quality assessment are quite different for automatic target recognition (ATR) and traditional applications. Local invariant feature detectors, mainly including corner detectors, blob detectors, and region detectors, are widely applied for ATR. A saliency model of features was proposed in this paper to evaluate the feasibility of ATR. The first step consisted of computing the first-order derivatives in the horizontal and vertical orientations and computing DoG maps at different scales. Next, saliency images of features were built based on the auto-correlation matrix at each scale. Then, the saliency images of features at different scales were amalgamated. Experiments were performed on a large test set, including infrared images and optical images, and the results showed that the salient regions computed by this model were consistent with the real feature regions computed by most local invariant feature extraction algorithms.
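A sketch of the two ingredients named above, first-order derivatives feeding an auto-correlation (structure-tensor) saliency response and a difference-of-Gaussians map at one scale, is given below using SciPy filters; the detector constant (0.04), the scales, and the random placeholder image are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def saliency_maps(img, sigma=1.5):
    """Harris-style auto-correlation response and a DoG map for one scale."""
    ix = sobel(img, axis=1)                     # first-order derivative, horizontal
    iy = sobel(img, axis=0)                     # first-order derivative, vertical
    # Smoothed auto-correlation (structure tensor) components.
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    harris = det - 0.04 * trace ** 2            # corner/saliency response
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, 1.6 * sigma)
    return harris, dog

img = np.random.rand(64, 64)                    # stand-in for an IR or optical image
harris, dog = saliency_maps(img)
print(harris.shape, dog.shape)
```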
NASA Technical Reports Server (NTRS)
Adams, Neil S.; Bollenbacher, Gary
1992-01-01
This report discusses the development and underlying mathematics of a rigid-body computer model of a proposed cryogenic on-orbit liquid depot storage, acquisition, and transfer spacecraft (COLD-SAT). This model, referred to in this report as the COLD-SAT dynamic model, consists of both a trajectory model and an attitudinal model. All disturbance forces and torques expected to be significant for the actual COLD-SAT spacecraft are modeled to the required degree of accuracy. Control and experimental thrusters are modeled, as well as fluid slosh. The model also computes microgravity disturbance accelerations at any specified point in the spacecraft. The model was developed by using the Boeing EASY5 dynamic analysis package and will run on Apollo, Cray, and other computing platforms.
2015-06-24
...physically. While not distinct from IH models, they require inner boundary magnetic field and plasma property values, the latter not currently measured...initialization for the computational grid. Model integration continues until a physically consistent steady-state is attained. Because of the more... physical basis and greater likelihood of realistic solutions, only MHD-type coronal models were considered in the review. There are two major types of
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
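CARE's own equation repository is not reproduced in the abstract, but the flavor of the redundancy equations such a tool interrelates can be illustrated with the textbook reliability of triple modular redundancy (the system survives while at least two of three identical modules survive), assuming exponentially distributed module lifetimes; the failure rate below is an arbitrary example value, not one drawn from CARE.

```python
import math

def module_reliability(lam, t):
    """Reliability of a single module with constant failure rate lam at time t."""
    return math.exp(-lam * t)

def tmr_reliability(lam, t):
    """Triple modular redundancy: at least 2 of 3 identical modules must survive."""
    r = module_reliability(lam, t)
    return 3 * r ** 2 - 2 * r ** 3

lam = 1e-4          # failures per hour (illustrative)
for t in (1000, 5000, 10000):
    print(t, round(module_reliability(lam, t), 4), round(tmr_reliability(lam, t), 4))
```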
Introducing Managers to Expert Systems.
ERIC Educational Resources Information Center
Finlay, Paul N.; And Others
1991-01-01
Describes a short course to expose managers to expert systems, consisting of (1) introductory lecture; (2) supervised computer tutorial; (3) lecture and discussion about knowledge structuring and modeling; and (4) small group work on a case study using computers. (SK)
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A Model Description Document for the Emulation Simulation Computer Model was already published. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation simulation combination in the design, development, and test of a piece of ARS hardware, SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from test. Slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system. The second consists of a potential air revitalization system.
Self-consistent continuum solvation for optical absorption of complex molecular systems in solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timrov, Iurii; Biancardi, Alessandro; Andreussi, Oliviero
2015-01-21
We introduce a new method to compute the optical absorption spectra of complex molecular systems in solution, based on the Liouville approach to time-dependent density-functional perturbation theory and the revised self-consistent continuum solvation model. The former allows one to obtain the absorption spectrum over a whole wide frequency range, using a recently proposed Lanczos-based technique, or selected excitation energies, using the Casida equation, without having to ever compute any unoccupied molecular orbitals. The latter is conceptually similar to the polarizable continuum model and offers the further advantages of allowing an easy computation of atomic forces via the Hellmann-Feynman theorem and a ready implementation in periodic-boundary conditions. The new method has been implemented using pseudopotentials and plane-wave basis sets, benchmarked against polarizable continuum model calculations on 4-aminophthalimide, alizarin, and cyanin and made available through the QUANTUM ESPRESSO distribution of open-source codes.
Numerical Modeling of Nonlinear Thermodynamics in SMA Wires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, D R; Kloucek, P
We present a mathematical model describing the thermodynamic behavior of shape memory alloy wires, as well as a computational technique to solve the resulting system of partial differential equations. The model consists of conservation equations based on a new Helmholtz free energy potential. The computational technique introduces a viscosity-based continuation method, which allows the model to handle dynamic applications where the temporally local behavior of solutions is desired. Computational experiments document that this combination of modeling and solution techniques appropriately predicts the thermally- and stress-induced martensitic phase transitions, as well as the hysteretic behavior and production of latent heat associated with such materials.
The relative effectiveness of computer-based and traditional resources for education in anatomy.
Khot, Zaid; Quinlan, Kaitlyn; Norman, Geoffrey R; Wainman, Bruce
2013-01-01
There is increasing use of computer-based resources to teach anatomy, although no study has compared computer-based learning with traditional methods. In this study, we examine the effectiveness of three formats of anatomy learning: (1) a virtual reality (VR) computer-based module, (2) a static computer-based module providing Key Views (KV), (3) a plastic model. We conducted a controlled trial in which 60 undergraduate students had ten minutes to study the names of 20 different pelvic structures. The outcome measure was a 25-item short-answer test consisting of 15 nominal and 10 functional questions, based on a cadaveric pelvis. All subjects also took a brief mental rotations test (MRT) as a measure of spatial ability, used as a covariate in the analysis. Data were analyzed with repeated measures ANOVA. The group learning from the model performed significantly better than the other two groups on the nominal questions (Model 67%; KV 40%; VR 41%, Effect size 1.19 and 1.29, respectively). There was no difference between the KV and VR groups. There was no difference between the groups on the functional questions (Model 28%; KV 23%; VR 25%). Computer-based learning resources appear to have significant disadvantages compared to traditional specimens in learning nominal anatomy. Consistent with previous research, virtual reality shows no advantage over static presentation of key views. © 2013 American Association of Anatomists.
Optimal and Nonoptimal Computer-Based Test Designs for Making Pass-Fail Decisions
ERIC Educational Resources Information Center
Hambleton, Ronald K.; Xing, Dehui
2006-01-01
Now that many credentialing exams are being routinely administered by computer, new computer-based test designs, along with item response theory models, are being aggressively researched to identify specific designs that can increase the decision consistency and accuracy of pass-fail decisions. The purpose of this study was to investigate the…
Sculpting Computational-Level Models.
Blokpoel, Mark
2017-06-27
In this commentary, I advocate for strict relations between Marr's levels of analysis. Under a strict relationship, each level is exactly implemented by the subordinate level. This yields two benefits. First, it brings consistency for multilevel explanations. Second, similar to how a sculptor chisels away superfluous marble, a modeler can chisel a computational-level model by applying constraints. By sculpting the model, one restricts the (potentially infinitely large) set of possible algorithmic- and implementational-level theories. Copyright © 2017 Cognitive Science Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverman, T. J.; Bosco, N.; Kurtz, S.
2012-03-01
Concentrating photovoltaic (CPV) cell assemblies can fail due to thermomechanical fatigue in the die-attach layer. In this presentation, we show the latest results from our computational model of thermomechanical fatigue. The model is used to estimate the relative lifetime of cell assemblies exposed to various temperature histories consistent with service and with accelerated testing. We also present early results from thermal cycling experiments designed to help validate the computational model.
Computational models of airway branching morphogenesis.
Varner, Victor D; Nelson, Celeste M
2017-07-01
The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
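The sketch below illustrates the basic idea of generating parameter sets consistent with published summary statistics, using a plain rejection-style Approximate Bayesian Computation loop for a two-parameter Arrhenius rate k(T) = A·exp(−Ea/RT), so that log10 k = log10 A − Ea/(ln 10 · R · T). The nominal values, error bars, priors, and acceptance tolerance are all invented, and the paper's actual procedure works with OH concentration profiles and ABC-MCMC rather than this simple rejection sampler.

```python
import numpy as np

R = 8.314                                         # J/(mol K)
T = np.array([1200.0, 1500.0, 1800.0])            # temperatures, K
log_k_obs = np.array([10.2, 11.0, 11.6])          # nominal log10 k values (illustrative)
sigma = np.array([0.3, 0.3, 0.3])                 # published error bars (illustrative)

def log_k(logA, Ea):
    """Arrhenius rate coefficient in log10 form: log10 k = log10 A - Ea/(ln(10) R T)."""
    return logA - Ea / (np.log(10.0) * R * T)

rng = np.random.default_rng(1)
accepted = []
for _ in range(50000):
    logA = rng.uniform(8.0, 16.0)                 # flat prior (illustrative)
    Ea = rng.uniform(5e4, 3e5)                    # flat prior in J/mol (illustrative)
    # Accept parameter pairs whose predictions fall within the reported error bars.
    if np.all(np.abs(log_k(logA, Ea) - log_k_obs) < sigma):
        accepted.append((logA, Ea))

accepted = np.array(accepted)
print("accepted:", len(accepted), "mean (logA, Ea):", accepted.mean(axis=0))
```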
Design and Training of Limited-Interconnect Architectures
1991-07-16
and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a...compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a...operational performance. II. Research Objectives The research objectives were: 1. Development of on-chip local training rules specifically designed for
Consistent data-driven computational mechanics
NASA Astrophysics Data System (ADS)
González, D.; Chinesta, F.; Cueto, E.
2018-05-01
We present a novel method, within the realm of data-driven computational mechanics, to obtain reliable and thermodynamically sound simulation from experimental data. We thus avoid the need to fit any phenomenological model in the construction of the simulation model. This kind of techniques opens unprecedented possibilities in the framework of data-driven application systems and, particularly, in the paradigm of industry 4.0.
NASA Astrophysics Data System (ADS)
Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.
2018-03-01
The paper describes a computer model of an overhead crane system. The designed overhead crane system consists of hoisting, trolley, and crane mechanisms as well as a two-axis payload system. Using the differential equations of motion of these mechanisms, derived through the Lagrange equation of the second kind, it is possible to build an overhead crane computer model. The computer model was obtained using Matlab software. Transients of position, linear speed, and motor torque of the trolley and crane mechanism systems were simulated. In addition, transients of payload swaying with respect to the vertical axis were obtained. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is a useful means for studying positioning control and anti-sway control systems.
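As a minimal sketch of the payload-sway dynamics such a model contains, the snippet below integrates small-angle pendulum sway driven by trolley acceleration, θ̈ = −(g/L)·θ − a(t)/L, with a semi-implicit Euler scheme; the rope length, time step, and trolley acceleration profile are invented, and the full model in the paper couples several mechanisms derived via the Lagrange equations rather than this single equation.

```python
import numpy as np

g, L = 9.81, 5.0                 # gravity (m/s^2) and rope length (m), illustrative
dt, steps = 0.001, 8000

def trolley_accel(t):
    """Illustrative trapezoidal speed profile: accelerate, coast, brake."""
    if t < 2.0:
        return 0.5
    if t < 6.0:
        return 0.0
    if t < 8.0:
        return -0.5
    return 0.0

theta, omega = 0.0, 0.0          # payload sway angle (rad) and angular rate (rad/s)
for i in range(steps):
    t = i * dt
    # Small-angle sway dynamics driven by trolley acceleration a(t).
    alpha = -(g / L) * theta - trolley_accel(t) / L
    omega += alpha * dt
    theta += omega * dt
print(f"final sway angle: {np.degrees(theta):.2f} deg")
```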
Commercial Digital/ADP Equipment in the Ocean Environment. Volume 2. User Appendices
1978-12-15
is that the LINDA system uses a minicomputer with time-sharing system software which allows several terminals to be operated at the same time...Acquisition System (ODAS) consists of sensors, computer hardware and computer software. Certain sensors are interfaced to the computers for real time...on USNS KANE, USNS BENT, and USNS WILKES. Commercial automatic data processing equipment used in ODAS includes: Item Model Computer PDP-9 Tape
Kang, Kyoung-Tak; Kim, Sung-Hwan; Son, Juhyun; Lee, Young Han; Koh, Yong-Gon
2017-01-01
Computational models have been identified as efficient techniques in the clinical decision-making process. However, in most previous studies the computational model was validated using published data, and the kinematic validation of such models still remains a challenge. Recently, studies using medical imaging have provided a more accurate visualization of knee joint kinematics. The purpose of the present study was to perform kinematic validation of the subject-specific computational knee joint model by comparison with the subject's medical imaging under identical laxity conditions. The laxity test was applied to the anterior-posterior drawer under 90° flexion and the varus-valgus under 20° flexion with a series of stress radiographs, a Telos device, and computed tomography. The loading condition in the computational subject-specific knee joint model was identical to the laxity test condition in the medical image. Our computational model showed knee laxity kinematic trends that were consistent with the computed tomography images, except for negligible differences because of the indirect application of the subject's in vivo material properties. Medical imaging based on computed tomography with the laxity test allowed us to measure not only the precise translation but also the rotation of the knee joint. This methodology will be beneficial in the validation of laxity tests for subject- or patient-specific computational models.
Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh
1998-01-01
In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models, using a constant underlying global model and a Gaussian correlation function, yield comparable results.
A Computer Based Educational Aid for the Instruction of Combat Modeling
1992-02-27
representation (36:363-370), and, as Knuth put it, "An algorithm must be seen to be believed" (23:4). Graphics not only aid in achieving instructional...consisted primarily of research, identification and use of existing combat model computer algorithms, interviews, and use of operations research...to-air combat models' operating manuals provided valuable insight into program structure and algorithms used to represent the combat. From these
Alterations in choice behavior by manipulations of world model.
Green, C S; Benson, C; Kersten, D; Schrater, P
2010-09-14
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
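To make the choice-rule terminology concrete, the sketch below runs a two-option Bernoulli task with Beta posteriors and contrasts a greedy max rule (always choose the currently better-looking option) with probability matching (choose options in proportion to their estimated reward probabilities); the reward probabilities and trial count are invented, and this toy does not reproduce the paper's manipulation of beliefs about the generative process.

```python
import numpy as np

rng = np.random.default_rng(2)
p_true = np.array([0.7, 0.3])          # true reward probabilities (illustrative)
trials = 1000

def run(rule):
    alpha = np.ones(2)                  # Beta(1, 1) prior on each option's reward rate
    beta = np.ones(2)
    reward = 0
    for _ in range(trials):
        p_hat = alpha / (alpha + beta)  # posterior mean reward probability
        if rule == "max":
            choice = int(np.argmax(p_hat))
        else:                           # probability matching on posterior means
            choice = int(rng.random() < p_hat[1] / p_hat.sum())
        r = rng.random() < p_true[choice]
        reward += r
        alpha[choice] += r
        beta[choice] += 1 - r
    return reward / trials

print("max rule:", run("max"), " matching:", run("match"))
```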
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2009-01-01
In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
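The three components can be made concrete with a toy two-variable example; the variable names, factor tables, and observation below are illustrative assumptions rather than a hydrological model.

```python
# Minimal sketch of the three components described above for a toy model with
# binary variables W ("wet season") and R ("high runoff"); values are assumed.
import numpy as np
from itertools import product

# 1. Representation: nodes W and R joined by factors (a prior on W and a
#    conditional relation between W and R); their product is the joint p(W, R).
prior_W = np.array([0.3, 0.7])                 # p(W=0), p(W=1)
cond_R_given_W = np.array([[0.9, 0.1],         # p(R | W=0)
                           [0.4, 0.6]])        # p(R | W=1)

def joint(w, r):
    return prior_W[w] * cond_R_given_W[w, r]

# 2. Consistent use of probability: assimilate the observation R = 1 and
#    express W as a function of the data via the posterior distribution.
evidence = sum(joint(w, 1) for w in (0, 1))
posterior_W = np.array([joint(w, 1) for w in (0, 1)]) / evidence
print("p(W | R=1) =", posterior_W)

# 3. Inference exploiting structure: the graph is tiny, so exhaustive
#    enumeration stands in for general-purpose message passing.
for w, r in product((0, 1), repeat=2):
    print(f"p(W={w}, R={r}) = {joint(w, r):.3f}")
```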
A Logical Basis In The Layered Computer Vision Systems Model
NASA Astrophysics Data System (ADS)
Tejwani, Y. J.
1986-03-01
In this paper, a four-layer computer vision system model is described. The model uses a finite memory scratch pad. In this model, planar objects are defined as predicates. Predicates are relations on a k-tuple. The k-tuple consists of primitive points and relationships between primitive points. The relationship between points can be of the direct type or the indirect type. Entities are goals that are satisfied by a set of clauses. The grammar used to construct these clauses is examined.
Model-based predictions for dopamine.
Langdon, Angela J; Sharpe, Melissa J; Schoenbaum, Geoffrey; Niv, Yael
2018-04-01
Phasic dopamine responses are thought to encode a prediction-error signal consistent with model-free reinforcement learning theories. However, a number of recent findings highlight the influence of model-based computations on dopamine responses, and suggest that dopamine prediction errors reflect more dimensions of an expected outcome than scalar reward value. Here, we review a selection of these recent results and discuss the implications and complications of model-based predictions for computational theories of dopamine and learning. Copyright © 2017. Published by Elsevier Ltd.
A model for the control mode man-computer interface dialogue
NASA Technical Reports Server (NTRS)
Chafin, R. L.
1981-01-01
A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.
Performance Prediction Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen
The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input in runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake-up, begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache-lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models learned from hardware counters (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU, and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, where our previous alternatives explicitly include the L1, L2, L3 hit-rates as inputs to the tasklists. Explicit hit-rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit-rates make the hardware model insensitive to changes in cache sizes. Alternatively, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code that realistically can be done at best for small examples.
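A heavily simplified, hypothetical sketch of the tasklist idea is given below; it is not the actual PPT/Simian interface, and all parameter names and values are illustrative assumptions.

```python
# Hypothetical, heavily simplified sketch of the tasklist idea described above,
# not the actual PPT/Simian API: a core model turns an unordered tasklist of
# operation counts into a time estimate from spec-sheet-like parameters.
from dataclasses import dataclass

@dataclass
class CoreModel:
    clock_hz: float = 2.3e9          # clock speed (assumed)
    cycles_per_alu_op: float = 1.0   # average ALU cost in cycles (assumed)
    l1_hit_time_s: float = 1.3e-9    # access times per memory level (assumed)
    l2_hit_time_s: float = 4.0e-9
    dram_time_s: float = 60.0e-9

    def time_compute(self, tasklist: dict) -> float:
        """Translate a tasklist into a predicted runtime in seconds."""
        alu_time = tasklist.get("alu_ops", 0) * self.cycles_per_alu_op / self.clock_hz
        # Memory accesses split by assumed hit rates rather than reuse distances.
        loads = tasklist.get("mem_accesses", 0)
        mem_time = loads * (tasklist.get("l1_hit_rate", 0.0) * self.l1_hit_time_s
                            + tasklist.get("l2_hit_rate", 0.0) * self.l2_hit_time_s
                            + tasklist.get("dram_rate", 0.0) * self.dram_time_s)
        return alu_time + mem_time

# An application model would call time_compute() in place of the real kernel:
core = CoreModel()
kernel_tasklist = {"alu_ops": 5e8, "mem_accesses": 2e8,
                   "l1_hit_rate": 0.90, "l2_hit_rate": 0.08, "dram_rate": 0.02}
print("predicted kernel time: %.4f s" % core.time_compute(kernel_tasklist))
```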
Computational Toxicology as Implemented by the US EPA ...
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency EPA is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the T
Detection and quantification of flow consistency in business process models.
Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel; Soffer, Pnina; Weber, Barbara
2018-01-01
Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
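One plausible way to quantify consistency of flow direction, sketched below under assumed node coordinates, is the fraction of sequence-flow edges laid out in the dominant left-to-right direction; this is an illustration, not necessarily one of the three metrics proposed in the paper.

```python
# Minimal sketch of one flow-consistency metric: the share of edges whose
# layout points in the dominant horizontal direction. Layout data are assumed.
def flow_consistency(positions, edges):
    """positions: node -> (x, y); edges: list of (source, target) pairs."""
    if not edges:
        return 1.0
    rightward = sum(1 for s, t in edges if positions[t][0] > positions[s][0])
    # take the dominant direction so fully right-to-left layouts also score 1.0
    return max(rightward, len(edges) - rightward) / len(edges)

positions = {"start": (0, 0), "check": (1, 0), "approve": (2, 0),
             "rework": (1.5, -1), "end": (3, 0)}
edges = [("start", "check"), ("check", "approve"),
         ("approve", "end"), ("check", "rework"), ("rework", "check")]
print("flow consistency:", flow_consistency(positions, edges))  # 4/5 = 0.8
```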
Automatic prediction of tongue muscle activations using a finite element model.
Stavness, Ian; Lloyd, John E; Fels, Sidney
2012-11-15
Computational modeling has improved our understanding of how muscle forces are coordinated to generate movement in musculoskeletal systems. Muscular-hydrostat systems, such as the human tongue, involve very different biomechanics than musculoskeletal systems, and modeling efforts to date have been limited by the high computational complexity of representing continuum-mechanics. In this study, we developed a computationally efficient tracking-based algorithm for prediction of muscle activations during dynamic 3D finite element simulations. The formulation uses a local quadratic-programming problem at each simulation time-step to find a set of muscle activations that generated target deformations and movements in finite element muscular-hydrostat models. We applied the technique to a 3D finite element tongue model for protrusive and bending movements. Predicted muscle activations were consistent with experimental recordings of tongue strain and electromyography. Upward tongue bending was achieved by recruitment of the superior longitudinal sheath muscle, which is consistent with muscular-hydrostat theory. Lateral tongue bending, however, required recruitment of contralateral transverse and vertical muscles in addition to the ipsilateral margins of the superior longitudinal muscle, which is a new proposition for tongue muscle coordination. Our simulation framework provides a new computational tool for systematic analysis of muscle forces in continuum-mechanics models that is complementary to experimental data and shows promise for eliciting a deeper understanding of human tongue function. Copyright © 2012 Elsevier Ltd. All rights reserved.
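The tracking idea can be sketched, outside the finite element setting, as a bounded least-squares problem solved at a single time step by projected gradient descent; the activation-to-target mapping and target values below are arbitrary assumptions.

```python
# Minimal sketch of the tracking idea (not the paper's finite element QP):
# find bounded muscle activations a in [0, 1] minimizing ||A a - t||^2.
import numpy as np

def track_activations(A, t, n_iter=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ a - t)
        a = np.clip(a - step * grad, 0.0, 1.0)      # project onto the activation bounds
    return a

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))                     # 4 muscle activations, 6 target DOFs (assumed)
t = A @ np.array([0.2, 0.0, 0.8, 0.5])              # a reachable target deformation
a = track_activations(A, t)
print("activations:", np.round(a, 3))
print("tracking error:", np.linalg.norm(A @ a - t))
```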
NASA Astrophysics Data System (ADS)
Kral, Q.; Thebault, P.; Charnoz, S.
2014-01-01
The first attempt at developing a fully self-consistent code coupling dynamics and collisions to study debris discs (Kral et al. 2013) is presented. So far, these two crucial mechanisms were studied separately, with N-body and statistical collisional codes respectively, because of stringent computational constraints. We present a new model named LIDT-DD which is able to follow over long timescales the coupled evolution of dynamics (including radiation forces) and collisions in a self-consistent way.
NASA Technical Reports Server (NTRS)
Nguyen, H. L.; Ying, S.-J.
1990-01-01
Jet-A spray combustion has been evaluated in gas turbine combustion with the use of propane chemical kinetics as the first approximation for the chemical reactions. Here, the numerical solutions are obtained by using the KIVA-2 computer code. The KIVA-2 code is the most developed of the available multidimensional combustion computer programs for application to the in-cylinder combustion dynamics of internal combustion engines. The released version of KIVA-2 assumes that 12 chemical species are present; the code uses an Arrhenius kinetic-controlled combustion model governed by a four-step global chemical reaction and six equilibrium reactions. Researchers' efforts involve the addition of Jet-A thermophysical properties and the implementation of detailed reaction mechanisms for propane oxidation. Three different detailed reaction mechanism models are considered. The first model consists of 131 reactions and 45 species. This is considered the full mechanism, which was developed through the study of chemical kinetics of propane combustion in an enclosed chamber. The full mechanism is evaluated by comparing calculated ignition delay times with available shock tube data. However, these detailed reactions occupy too much computer memory and CPU time for the computation. Therefore, it only serves as a benchmark case by which to evaluate other simplified models. Two possible simplified models were tested in the existing computer code KIVA-2 for the same conditions as used with the full mechanism. One model is obtained through a sensitivity analysis using LSENS, the general kinetics and sensitivity analysis program code of D. A. Bittker and K. Radhakrishnan. This model consists of 45 chemical reactions and 27 species. The other model is based on the work published by C. K. Westbrook and F. L. Dryer.
NASA Astrophysics Data System (ADS)
Adhikari, Surendra; Ivins, Erik R.; Larour, Eric
2016-03-01
A classical Green's function approach for computing gravitationally consistent sea-level variations associated with mass redistribution on the earth's surface, employed in contemporary sea-level models, naturally suits the spectral methods for numerical evaluation. The capability of these methods to resolve high wave number features such as small glaciers is limited by the need for large numbers of pixels and high-degree (associated Legendre) series truncation. Incorporating a spectral model into (components of) earth system models that generally operate on a mesh system also requires repetitive forward and inverse transforms. In order to overcome these limitations, we present a method that functions efficiently on an unstructured mesh, thus capturing the physics operating at kilometer scale yet capable of simulating geophysical observables that are inherently of global scale with minimal computational cost. The goal of the current version of this model is to provide high-resolution solid-earth, gravitational, sea-level and rotational responses for earth system models operating in the domain of the earth's outer fluid envelope on timescales less than about 1 century when viscous effects can largely be ignored over most of the globe. The model has numerous important geophysical applications. For example, we compute time-varying global geodetic and sea-level signatures associated with recent ice-sheet changes that are derived from space gravimetry observations. We also demonstrate the capability of our model to simultaneously resolve kilometer-scale sources of the earth's time-varying surface mass transport, derived from high-resolution modeling of polar ice sheets, and predict the corresponding local and global geodetic signatures.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate complex codes, on several differently configured servers, could run and compute trivial small scale problems in a commercial cloud infrastructure. Phase 2 focused on proving non-trivial large scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.
2010-01-01
Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
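A common one-dimensional infinite-slope factor-of-safety expression of the kind used in such models is sketched below; the soil properties, slope angle, depth, and pressure heads are assumed values, not those of the Seattle-area application.

```python
# Minimal sketch of a one-dimensional infinite-slope factor-of-safety
# computation (not the authors' exact code); all input values are assumed.
import numpy as np

def factor_of_safety(phi_deg, c, gamma_s, slope_deg, z, psi, gamma_w=9810.0):
    """phi: friction angle (deg), c: cohesion (Pa), gamma_s: soil unit weight (N/m^3),
    z: depth of the slip surface (m), psi: pressure head at depth z (m)."""
    phi = np.radians(phi_deg)
    delta = np.radians(slope_deg)
    frictional = np.tan(phi) / np.tan(delta)
    cohesive = (c - psi * gamma_w * np.tan(phi)) / (gamma_s * z * np.sin(delta) * np.cos(delta))
    return frictional + cohesive

# a rising pore-pressure head drives the factor of safety below 1 (instability)
for psi in (0.0, 0.5, 1.0):
    fs = factor_of_safety(phi_deg=33.0, c=4000.0, gamma_s=19000.0,
                          slope_deg=35.0, z=2.0, psi=psi)
    print(f"psi = {psi:.1f} m  ->  FS = {fs:.2f}")
```

In a spatially distributed application, this computation would be repeated for every grid cell at every time step using the pore pressures produced by the infiltration model.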
NASA Technical Reports Server (NTRS)
Longendorfer, B. A.
1976-01-01
The construction of an autonomous roving vehicle requires the development of complex data-acquisition and processing systems, which determine the path along which the vehicle travels. Thus, a vehicle must possess algorithms which can (1) reliably detect obstacles by processing sensor data, (2) maintain a constantly updated model of its surroundings, and (3) direct its immediate actions to further a long range plan. The first function consisted of obstacle recognition. Obstacles may be identified by the use of edge detection techniques. Therefore, the Kalman Filter was implemented as part of a large scale computer simulation of the Mars Rover. The second function consisted of modeling the environment. The obstacle must be reconstructed from its edges, and the vast amount of data must be organized in a readily retrievable form. Therefore, a Terrain Modeller was developed which assembled and maintained a rectangular grid map of the planet. The third function consisted of directing the vehicle's actions.
NASA Technical Reports Server (NTRS)
Narayan, K. Lakshmi; Kelton, K. F.; Ray, C. S.
1996-01-01
Heterogeneous nucleation and its effects on the crystallization of lithium disilicate glass containing small amounts of Pt are investigated. Measurements of the nucleation frequencies and induction times with and without Pt are shown to be consistent with predictions based on the classical nucleation theory. A realistic computer model for the transformation is presented. Computed differential thermal analysis data (such as crystallization rates as a function of time and temperature) are shown to be in good agreement with experimental results. This modeling provides a new, more quantitative method for analyzing calorimetric data.
Color reproduction system based on color appearance model and gamut mapping
NASA Astrophysics Data System (ADS)
Cheng, Fang-Hsuan; Yang, Chih-Yuan
2000-06-01
With the progress of computers, computer peripherals such as color monitors and printers are often used to generate color images. However, colors reproduced across different media are usually perceived differently. Basically, the influencing factors are device calibration and characterization, viewing conditions, device gamut, and human psychology. In this thesis, a color reproduction system based on a color appearance model and gamut mapping is proposed. It consists of four parts: device characterization, color management technique, color appearance model, and gamut mapping.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuo, Rui; Wu, C. F. Jeff
Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
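For orientation, the ordinary-least-squares calibration mentioned above can be sketched as follows; the toy simulator, true process, and noise level are assumptions, and this is not the proposed L2 calibration.

```python
# Minimal sketch of ordinary-least-squares calibration of an imperfect computer
# model against noisy ("stochastic") physical data; all quantities are assumed toys.
import numpy as np
from scipy.optimize import minimize_scalar

def computer_model(x, theta):          # imperfect simulator: wrong functional form
    return theta * x

rng = np.random.default_rng(0)
x_phys = np.linspace(0.0, 1.0, 50)
y_phys = np.sin(2.5 * x_phys) + rng.normal(0.0, 0.1, x_phys.size)   # true process + noise

def ols_loss(theta):
    return np.sum((y_phys - computer_model(x_phys, theta)) ** 2)

res = minimize_scalar(ols_loss, bounds=(0.0, 5.0), method="bounded")
print("OLS-calibrated theta: %.3f" % res.x)
```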
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.
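A crude Monte Carlo sketch of reliability estimation with Weibull (non-constant rate) failure times is given below, without the variance reduction techniques the program employs; the system structure and distribution parameters are illustrative assumptions.

```python
# Crude Monte Carlo sketch (no variance reduction): unreliability at mission
# time T of a 2-out-of-3 system with Weibull component failure times (assumed).
import numpy as np

rng = np.random.default_rng(2)
shape, scale, T = 1.5, 1000.0, 200.0        # Weibull shape > 1: increasing failure rate
n_samples = 1_000_000

t_fail = scale * rng.weibull(shape, size=(n_samples, 3))   # 3 component failure times
failed_by_T = (t_fail < T).sum(axis=1)
system_failure = failed_by_T >= 2                          # system fails if >= 2 components fail
p_fail = system_failure.mean()
print("estimated unreliability at T: %.4e" % p_fail)
print("std. error: %.1e" % (system_failure.std(ddof=1) / np.sqrt(n_samples)))
```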
Modeling and optimum time performance for concurrent processing
NASA Technical Reports Server (NTRS)
Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy
1988-01-01
The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.; Lumley, J. L.; Abid, R.
1994-01-01
A nonlinear representation for the rapid-pressure correlation appearing in the Reynolds stress equations, consistent with the Taylor-Proudman theorem, is presented. The representation insures that the modeled second-order equations are frame-invariant with respect to rotation when the flow is two-dimensional in planes perpendicular to the axis of rotation. The representation satisfies realizability in a new way: a special ansatz is used to obtain, analytically, the values of coefficients valid away from the realizability limit; the model coefficients are functions of the state of the turbulence that are valid for all states of the mechanical turbulence, attaining their constant limiting values only when the limit state is achieved. Utilization of all the mathematical constraints is not enough to specify all the coefficients in the model. The unspecified coefficients appear as free parameters which are used to insure that the representation is asymptotically consistent with the known equilibrium states of a homogeneous sheared turbulence. This is done by insuring that the modeled evolution equations have the same fixed points as those obtained from computer and laboratory experiments for the homogeneous shear. Results of computations of the homogeneous shear, with and without rotation, and with stabilizing and destabilizing curvature, are shown. Results are consistently better, in a wide class of flows for which the model has not been calibrated, than those obtained with other nonlinear models.
An assessment and application of turbulence models for hypersonic flows
NASA Technical Reports Server (NTRS)
Coakley, T. J.; Viegas, J. R.; Huang, P. G.; Rubesin, M. W.
1990-01-01
The current approach to the accurate computation of complex high-speed flows is to solve the Reynolds-averaged Navier-Stokes equations using finite difference methods. An integral part of this approach consists of the development and application of mathematical turbulence models which are necessary in predicting the aerothermodynamic loads on the vehicle and the performance of the propulsion plant. Computations of several high-speed turbulent flows using various turbulence models are described, and the models are evaluated by comparing computations with the results of experimental measurements. The cases investigated include flows over insulated and cooled flat plates with Mach numbers ranging from 2 to 8 and wall temperature ratios ranging from 0.2 to 1.0. The turbulence models investigated include zero-equation, two-equation, and Reynolds-stress transport models.
Applications of computational modeling in ballistics
NASA Technical Reports Server (NTRS)
Sturek, Walter B.
1987-01-01
The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas was traditionally highly dependent on experimental testing. A considerable emphasis was placed on the development of computational modeling to augment the experimental testing in the development cycle; however, the impact of the computational modeling to date is modest. With the availability of supercomputer computational resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved. An attempt was made to give some information as to the degree of success achieved and to indicate the areas of greatest need.
MODELS-3 (CMAQ). NARSTO NEWS (VOL. 3, NO. 2, SUMMER/FALL 1999)
A revised version of the U.S. EPA's Models-3/CMAQ system was released on June 30, 1999. Models-3 consists of a sophisticated computational framework for environmental models allowing for much flexibility in the communications between component parts of the system, in updating or ...
A Framework for the Specification of the Semantics and the Dynamics of Instructional Applications
ERIC Educational Resources Information Center
Buendia-Garcia, Felix; Diaz, Paloma
2003-01-01
An instructional application consists of a set of resources and activities to implement interacting, interrelated, and structured experiences oriented towards achieving specific educational objectives. The development of computer-based instructional applications has to follow a well defined process, so models for computer-based instructional…
DOT National Transportation Integrated Search
1976-07-01
Several new capabilities have been added to the DYNALIST II computer program. These include: (1) a component matrix generator that operates as a 3-D finite element modeling program where elements consist of rigid bodies, flexural bodies, wheelsets, s...
NASA Astrophysics Data System (ADS)
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
Space-Shuttle Emulator Software
NASA Technical Reports Server (NTRS)
Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram;
2007-01-01
A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.
Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.
ERIC Educational Resources Information Center
Pham, Tuan Dinh; Mocks, Joachim
1992-01-01
Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)
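An alternating-least-squares sketch of fitting a rank-R trilinear decomposition to a three-way array is shown below; the synthetic data and chosen rank are illustrative assumptions, and the estimator's asymptotic properties discussed above are not demonstrated here.

```python
# Minimal alternating-least-squares sketch of a rank-R trilinear decomposition
# X[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]; synthetic data and rank are assumed.
import numpy as np

def trilinear_als(X, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    for _ in range(n_iter):
        # each update solves the least-squares normal equations for one factor
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((5, 2)), rng.standard_normal((6, 2)), rng.standard_normal((7, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((5, 6, 7))

A, B, C = trilinear_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative fit error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```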
Software systems for modeling articulated figures
NASA Technical Reports Server (NTRS)
Phillips, Cary B.
1989-01-01
Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet insure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.
Consistency of color representation in smart phones.
Dain, Stephen J; Kwan, Benjamin; Wong, Leslie
2016-03-01
One of the barriers to the construction of consistent computer-based color vision tests has been the variety of monitors and computers. Consistency of color on a variety of screens has necessitated calibration of each setup individually. Color vision examination with a carefully controlled display has, as a consequence, been a laboratory rather than a clinical activity. Inevitably, smart phones have become a vehicle for color vision tests. They have the advantage that the processor and screen are associated and there are fewer models of smart phones than permutations of computers and monitors. Colorimetric consistency of display within a model may be a given. It may extend across models from the same manufacturer but is unlikely to extend between manufacturers, especially where technologies vary. In this study, we measured the same set of colors in a JPEG file displayed on 11 samples of each of four models of smart phone (iPhone 4s, iPhone 5, Samsung Galaxy S3, and Samsung Galaxy S4) using a Photo Research PR-730. The iPhones use white-LED-backlit LCDs and the Samsungs use OLEDs. The color gamut varies between models and comparison with sRGB space shows 61%, 85%, 117%, and 110%, respectively. The iPhones differ markedly from the Samsungs and from one another. This indicates that model-specific color lookup tables will be needed. Within each model, the primaries were quite consistent (despite the age of the phones varying within each sample). The worst case in each model was the blue primary; the 95th percentile limits in the v' coordinate were ±0.008 for the iPhone 4 and ±0.004 for the other three models. The u'v' variation in white points was ±0.004 for the iPhone 4 and ±0.002 for the others, although the spread of white points between models was u'v' ±0.007. The differences are essentially the same for primaries at low luminance. The variation of colors intermediate between the primaries (e.g., red-purple, orange) mirrors the variation in the primaries. The variation in luminance (maximum brightness) was ±7%, 15%, 7%, and 15%, respectively. The iPhones have almost 2× the luminance. To accommodate differences between makes and models, dedicated color lookup tables will be necessary, but the variations within a model appear to be small enough that consistent color vision tests can be designed successfully.
PHYSICAL COAL-CLEANING/FLUE GAS DESULFURIZATION COMPUTER MODEL
The model consists of four programs: (1) one, initially developed by Battelle-Columbus Laboratories, obtained from Versar, Inc.; (2) one developed by TVA; and (3,4) two developed by TVA and Bechtel National, Inc. The model produces design performance criteria and estimates of capi...
Computer simulation of space station computer steered high gain antenna
NASA Technical Reports Server (NTRS)
Beach, S. W.
1973-01-01
The mathematical modeling and programming of a complete simulation program for a space station computer-steered high gain antenna are described. The program provides for reading input data cards, numerically integrating up to 50 first order differential equations, and monitoring up to 48 variables on printed output and on plots. The program system consists of a high gain antenna, an antenna gimbal control system, an on board computer, and the environment in which all are to operate.
Computer Programs For Automated Welding System
NASA Technical Reports Server (NTRS)
Agapakis, John E.
1993-01-01
Computer programs developed for use in controlling automated welding system described in MFS-28578. Together with control computer, computer input and output devices and control sensors and actuators, provide flexible capability for planning and implementation of schemes for automated welding of specific workpieces. Developed according to macro- and task-level programming schemes, which increases productivity and consistency by reducing amount of "teaching" of system by technician. System provides for three-dimensional mathematical modeling of workpieces, work cells, robots, and positioners.
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
NASA Astrophysics Data System (ADS)
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
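The non-domination test at the core of such algorithms can be sketched as follows; the candidate fit-error scores across datasets are illustrative assumptions, and this is not the authors' full speciated genetic algorithm.

```python
# Minimal sketch of the non-domination test: keep the parameter sets that no
# other candidate beats on every dataset. Candidate scores are assumed values.
import numpy as np

def non_dominated(scores):
    """scores: (n_candidates, n_datasets) array of fit errors (lower is better).
    Returns indices of candidates not dominated by any other candidate."""
    keep = []
    for i, s in enumerate(scores):
        dominated = np.any(np.all(scores <= s, axis=1) & np.any(scores < s, axis=1))
        if not dominated:
            keep.append(i)
    return keep

scores = np.array([[0.10, 0.30],    # good on dataset 1, poorer on dataset 2
                   [0.25, 0.12],    # the opposite trade-off
                   [0.12, 0.28],    # also non-dominated
                   [0.30, 0.35]])   # dominated by every other candidate
print("non-dominated parameter sets:", non_dominated(scores))   # [0, 1, 2]
```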
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
Over the years, computer modeling has been used extensively in many disciplines to solve engineering problems. A set of computer program tools is proposed to assist the engineer in the various phases of the Space Station program from technology selection through flight operations. The development and application of emulation and simulation transient performance modeling tools for life support systems are examined. The results of the development and the demonstration of the utility of three computer models are presented. The first model is a detailed computer model (emulation) of a solid amine water desorbed (SAWD) CO2 removal subsystem combined with much less detailed models (simulations) of a cabin, crew, and heat exchangers. This model was used in parallel with the hardware design and test of this CO2 removal subsystem. The second model is a simulation of an air revitalization system combined with a wastewater processing system to demonstrate the capabilities to study subsystem integration. The third model is that of a Space Station total air revitalization system. The station configuration consists of a habitat module, a lab module, two crews, and four connecting nodes.
The Mechanics of Embodiment: A Dialog on Embodiment and Computational Modeling
Pezzulo, Giovanni; Barsalou, Lawrence W.; Cangelosi, Angelo; Fischer, Martin H.; McRae, Ken; Spivey, Michael J.
2011-01-01
Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamoring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensorimotor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialog between two fictional characters: Ernest, the “experimenter,” and Mary, the “computational modeler.” The dialog consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modeling. PMID:21713184
Modelling the penumbra in Computed Tomography
Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.
2016-01-01
BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer’s law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low-energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J
2008-02-01
Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can predict accurate TKR damage measured in a simulator machine, and second, to investigate how choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affect the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. Surface evolution method was important only during the initial cycles, where variable step was needed to capture rapid geometry changes due to the creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that surface evolution method matters only during the initial "break in" period of the simulation.
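An Archard-type wear update with a constant wear factor, iterated with a crude surrogate for geometry evolution, is sketched below; the wear factor, contact pressures, and update rule are assumed for illustration and do not reproduce the paper's finite element damage model.

```python
# Minimal sketch of an Archard-type wear update with a constant wear factor
# (not the paper's finite element model); all numerical values are assumed.
import numpy as np

k_wear = 2.0e-10         # wear factor, mm^3 / (N mm), assumed for illustration
n_cycles = 5_000_000
batch = 500_000          # update the surface every "batch" cycles (variable-step idea)

pressure = np.array([8.0, 12.0, 6.0])     # MPa (= N/mm^2) at three contact points (assumed)
slide = 20.0                              # sliding distance per cycle, mm (assumed)
depth = np.zeros_like(pressure)           # accumulated linear wear, mm

for _ in range(n_cycles // batch):
    depth += k_wear * pressure * slide * batch     # Archard: depth increment = k * p * s
    pressure *= 0.98                               # crude surrogate for geometry evolution

print("wear depth per point (mm):", np.round(depth, 3))
print("wear volume (mm^3), assuming 10 mm^2 per point:", np.round(10.0 * depth.sum(), 2))
```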
System capacity and economic modeling computer tool for satellite mobile communications systems
NASA Technical Reports Server (NTRS)
Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.
1988-01-01
A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.
PROM7: 1D modeler of solar filaments or prominences
NASA Astrophysics Data System (ADS)
Gouttebroze, P.
2018-05-01
PROM7 is an update of PROM4 (ascl:1306.004) and computes simple models of solar prominences and filaments using Partial Radiative Distribution (PRD). The models consist of plane-parallel slabs standing vertically above the solar surface. Each model is defined by 5 parameters: temperature, density, geometrical thickness, microturbulent velocity and height above the solar surface. It solves the equations of radiative transfer, statistical equilibrium, ionization and pressure equilibria, and computes electron and hydrogen level populations and hydrogen line profiles. Moreover, the code treats the calcium atom, which is reduced to 3 ionization states (Ca I, Ca II, Ca III). The Ca II ion has 5 levels, which are useful for computing the 2 resonance lines (H and K) and the infrared triplet (near 8500 Å).
Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray
2016-01-01
To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm affected the impaired hearing profile of the model to approximate a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.
On the mass and thermodynamics of the Higgs boson
NASA Astrophysics Data System (ADS)
Fokas, A. S.; Vayenas, C. G.; Grigoriou, D. P.
2018-02-01
In two recent works we have shown that the masses of the W± and Zo bosons can be computed from first principles by modeling these bosons as bound relativistic gravitationally confined rotational states consisting of e±-νe pairs in the case of W± bosons and of a e+-νe-e- triplet in the case of the Zo boson. Here, we present similar calculations for the Higgs boson which we model as a bound rotational state consisting of a positron, an electron, a neutrino and an antineutrino. The model contains no adjustable parameters and the computed boson mass of 125.7 GeV/c2, is in very good agreement with the experimental value of 125.1 ± 1 GeV/c2. The thermodynamics and potential connection of this particle with the Higgs field are also briefly addressed.
Modelling and Optimizing Mathematics Learning in Children
ERIC Educational Resources Information Center
Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus
2013-01-01
This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…
Geil, Mark D
2007-01-01
Computer-aided design (CAD) and computer-aided manufacturing systems have been adapted for specific use in prosthetics, providing practitioners with a means to digitally capture the shape of a patient's limb, modify the socket model using software, and automatically manufacture either a positive model to be used in the fabrication of a socket or the socket itself. The digital shape captured is a three-dimensional (3-D) model from which standard anthropometric measures can be easily obtained. This study recorded six common anthropometric dimensions from CAD shape files of three foam positive models of the residual limbs of persons with transtibial amputations. Two systems were used to obtain 3-D models of the residual limb, a noncontact optical system and a contact-based electromagnetic field system, and both experienced practitioners and prosthetics students conducted measurements. Measurements were consistent; the mean range (difference of maximum and minimum) across all measurements was 0.96 cm. Both systems provided similar results, and both groups used the systems consistently. Students were slightly more consistent than practitioners but not to a clinically significant degree. Results also compared favorably with traditional measurement, with differences versus hand measurements about 5 mm. These results suggest the routine use of digital shape capture for collection of patient volume information.
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
Walters, Kevin
2012-08-07
In this paper we use approximate Bayesian computation to estimate the parameters in an immortal model of colonic stem cell division. We base the inferences on the observed DNA methylation patterns of cells sampled from the human colon. Utilising DNA methylation patterns as a form of molecular clock is an emerging area of research and has been used in several studies investigating colonic stem cell turnover. There is much debate concerning the two competing models of stem cell turnover: the symmetric (immortal) and asymmetric models. Early simulation studies concluded that the observed methylation data were not consistent with the immortal model. A later modified version of the immortal model that included preferential strand segregation was subsequently shown to be consistent with the same methylation data. Most of this earlier work assumes site independent methylation models that do not take account of the known processivity of methyltransferases whilst other work does not take into account the methylation errors that occur in differentiated cells. This paper addresses both of these issues for the immortal model and demonstrates that approximate Bayesian computation provides accurate estimates of the parameters in this neighbour-dependent model of methylation error rates. The results indicate that if colonic stem cells divide asymmetrically then colon stem cell niches are maintained by more than 8 stem cells. Results also indicate the possibility of preferential strand segregation and provide clear evidence against a site-independent model for methylation errors. In addition, algebraic expressions for some of the summary statistics used in the approximate Bayesian computation (that allow for the additional variation arising from cell division in differentiated cells) are derived and their utility discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
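A generic approximate Bayesian computation rejection sketch is shown below; the toy simulator, summary statistic, prior, and tolerance are assumptions and do not represent the neighbour-dependent methylation model of the paper.

```python
# Generic approximate Bayesian computation (rejection) sketch: draw parameters
# from the prior, simulate data, and accept draws whose summary statistic is
# close to the observed one. The toy simulator and settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_sites = 100
observed = 7                    # observed summary statistic (count of "errors")
tolerance = 1
n_draws = 200_000

theta = rng.uniform(0.0, 0.3, n_draws)          # prior on the error rate (assumed)
sims = rng.binomial(n_sites, theta)             # vectorized toy simulator
accepted = theta[np.abs(sims - observed) <= tolerance]

print("accepted draws:", accepted.size)
print("posterior mean error rate: %.3f" % accepted.mean())
print("95%% credible interval: [%.3f, %.3f]"
      % tuple(np.quantile(accepted, [0.025, 0.975])))
```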
Modeling and analysis of the DSS-14 antenna control system
NASA Technical Reports Server (NTRS)
Gawronski, W.; Bartos, R.
1996-01-01
An improvement of pointing precision of the DSS-14 antenna is planned for the near future. In order to analyze the improvement limits and to design new controllers, a precise model of the antenna and the servo is developed, including a finite element model of the antenna structure and detailed models of the hydraulic drives and electronic parts. The DSS-14 antenna control system has two modes of operation: computer mode and precision mode. The principal goal of this investigation is to develop the model of the computer mode and to evaluate its performance. The DSS-14 antenna computer model consists of the antenna structure and drives in azimuth and elevation. For this model, the position servo loop is derived, and simulations of the closed-loop antenna dynamics are presented. The model is significantly different from that for the 34-m beam-waveguide antennas.
NASA Astrophysics Data System (ADS)
Pécoul, S.; Heuraux, S.; Koch, R.; Leclert, G.; Bécoulet, A.; Colas, L.
1999-09-01
Self-consistent calculations of the 3D electric field patterns between the screen and the plasma have been made with the ICANT code for realistic antennas. Here we explain how the ICRH antennas of the Tore Supra tokamak are modelled.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiveland, W.A.; Oberjohn, W.J.; Cornelius, D.K.
1985-12-01
This report summarizes the work conducted during a 30-month contract with the United States Department of Energy (DOE) Pittsburgh Energy Technology Center (PETC). The general objective is to develop and verify a computer code capable of modeling the major aspects of pulverized coal combustion. Achieving this objective will lead to design methods applicable to industrial and utility furnaces. The combustion model (COMO) is based mainly on an existing Babcock and Wilcox (B and W) computer program. The model consists of a number of relatively independent modules that represent the major processes involved in pulverized coal combustion: flow, heterogeneous and homogeneous chemical reaction, and heat transfer. As models are improved or as new ones are developed, this modular structure allows portions of the COMO model to be updated with minimal impact on the remainder of the program. The report consists of two volumes. This volume (Volume 1) contains a technical summary of the COMO model, results of predictions for gas phase combustion, pulverized coal combustion, and a detailed description of the COMO model. Volume 2 is the Users Guide for COMO and contains detailed instructions for preparing the input data and a description of the program output. Several example cases have been included to aid the user in usage of the computer program for pulverized coal applications. 66 refs., 41 figs., 21 tabs.
I-deas TMG to NX Space Systems Thermal Model Conversion and Computational Performance Comparison
NASA Technical Reports Server (NTRS)
Somawardhana, Ruwan
2011-01-01
CAD/CAE packages change on a continuous basis as the power of the tools increases to meet demands. End-users must adapt to new products as they come to market and replace legacy packages. CAE modeling has continued to evolve and is constantly becoming more detailed and complex, though this comes at the cost of increased computing requirements. Parallel processing coupled with appropriate hardware can minimize computation time. Users of Maya Thermal Model Generator (TMG) are faced with transitioning from NX I-deas to NX Space Systems Thermal (SST). It is important to understand what differences there are when changing software packages; we are looking for consistency in results.
Mathematical form models of tree trunks
Rudolfs Ozolins
2000-01-01
Assortment structure analysis of tree trunks is a characteristic and proper problem that can be solved by using mathematical modeling and standard computer programs. Mathematical form model of tree trunks consists of tapering curve equations and their parameters. Parameters for nine species were obtained by processing measurements of 2,794 model trees and studying the...
An u-Service Model Based on a Smart Phone for Urban Computing Environments
NASA Astrophysics Data System (ADS)
Cho, Yongyun; Yoe, Hyun
In urban computing environments, all services should be based on the interaction between humans and the environments around them, which occurs frequently and ordinarily at home and in the office. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To do this, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphic editor expresses contexts as execution conditions of a new service through a context model based on ontology. The service platform handles the service scenario according to contexts. With the suggested service model, a user in urban computing environments can quickly and easily compose u-services or new services using smart devices.
Semantic Clinical Guideline Documents
Eriksson, Henrik; Tu, Samson W.; Musen, Mark
2005-01-01
Decision-support systems based on clinical practice guidelines can support physicians and other health-care personnel in the process of following best practice consistently. A knowledge-based approach to represent guidelines makes it possible to encode computer-interpretable guidelines in a formal manner, perform consistency checks, and use the guidelines directly in decision-support systems. Decision-support authors and guideline users require guidelines in human-readable formats in addition to computer-interpretable ones (e.g., for guideline review and quality assurance). We propose a new document-oriented information architecture that combines knowledge-representation models with electronic and paper documents. The approach integrates decision-support modes with standard document formats to create a combined clinical-guideline model that supports on-line viewing, printing, and decision support. PMID:16779037
Task allocation model for minimization of completion time in distributed computer systems
NASA Astrophysics Data System (ADS)
Wang, Jai-Ping; Steidley, Carl W.
1993-08-01
A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
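A minimal sketch of the kind of optimization such task allocation models formalize (ignoring precedence relationships): enumerate assignments of modules to two processors and pick the one minimizing a completion-time estimate composed of execution load plus inter-processor communication cost. Module names, times, and costs are invented.

```python
from itertools import product

# Toy illustration (not the paper's model): exhaustively assign each module
# to one of two processors and pick the assignment minimizing
# completion time = max per-processor execution load + communication cost.
exec_time = {            # module -> per-processor execution times (assumed)
    "m1": [4, 6], "m2": [5, 3], "m3": [2, 2], "m4": [7, 4],
}
comm = {("m1", "m2"): 2, ("m2", "m3"): 1, ("m3", "m4"): 3}   # assumed costs

modules = list(exec_time)
best = None
for assign in product(range(2), repeat=len(modules)):
    placement = dict(zip(modules, assign))
    load = [0.0, 0.0]
    for m, p in placement.items():
        load[p] += exec_time[m][p]
    # communication is paid only when linked modules sit on different processors
    comm_cost = sum(c for (a, b), c in comm.items()
                    if placement[a] != placement[b])
    completion = max(load) + comm_cost
    if best is None or completion < best[0]:
        best = (completion, placement)

print(best)
```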
Effects of Educational Beliefs on Attitudes towards Using Computer Technologies
ERIC Educational Resources Information Center
Onen, Aysem Seda
2012-01-01
This study, aiming to determine the relationship between pre-service teachers' beliefs about education and their attitudes towards utilizing computers and internet, is a descriptive study in scanning model. The sampling of the study consisted of 270 pre-service teachers. The potential relationship between the beliefs of pre-service teachers about…
ERIC Educational Resources Information Center
Castillo, Alan F.
2014-01-01
The purpose of this quantitative correlational cross-sectional research study was to examine a theoretical model consisting of leadership practice, attitudes of business process outsourcing, and strategic intentions of leaders to use cloud computing and to examine the relationships between each of the variables respectively. This study…
Theoretical studies of solar lasers and converters
NASA Technical Reports Server (NTRS)
Heinbockel, John H.
1990-01-01
The research described consisted of developing and refining the continuous flow laser model program including the creation of a working model. The mathematical development of a two pass amplifier for an iodine laser is summarized. A computer program for the amplifier's simulation is included with output from the simulation model.
Diagnosing hypoxia in murine models of rheumatoid arthritis from reflectance multispectral images
NASA Astrophysics Data System (ADS)
Glinton, Sophie; Naylor, Amy J.; Claridge, Ela
2017-07-01
Spectra computed from multispectral images of murine models of Rheumatoid Arthritis show a characteristic decrease in reflectance within the 600-800 nm region, which is indicative of reduced blood oxygenation and is consistent with hypoxia.
Tandem internal models execute motor learning in the cerebellum.
Honda, Takeru; Nagao, Soichi; Hashimoto, Yuji; Ishikawa, Kinya; Yokota, Takanori; Mizusawa, Hidehiro; Ito, Masao
2018-06-25
In performing skillful movement, humans use predictions from internal models formed by repetition learning. However, the computational organization of internal models in the brain remains unknown. Here, we demonstrate that a computational architecture employing a tandem configuration of forward and inverse internal models enables efficient motor learning in the cerebellum. The model predicted learning adaptations observed in hand-reaching experiments in humans wearing a prism lens and explained the kinetic components of these behavioral adaptations. The tandem system also predicted a form of subliminal motor learning that was experimentally validated after training intentional misses of hand targets. Patients with cerebellar degeneration disease showed behavioral impairments consistent with tandemly arranged internal models. These findings validate computational tandemization of internal models in motor control and its potential uses in more complex forms of learning and cognition. Copyright © 2018 the Author(s). Published by PNAS.
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
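A sketch of the Monte Carlo form of the SBM that the simplified version replaces: each particle carries many nucleation sites with contact angles drawn from a Gaussian, and the particle freezes if any site nucleates. The per-site freezing probability below is a placeholder with an assumed scaling, not the CNT expression used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def site_freezing_prob(theta, j0=1e4, t=10.0):
    """Placeholder for the freezing probability of a single site with
    contact angle `theta` over time `t`; the true expression involves the
    classical-nucleation-theory rate."""
    f = (2 + np.cos(theta)) * (1 - np.cos(theta)) ** 2 / 4.0   # geometric factor
    rate = j0 * np.exp(-50.0 * f)                              # assumed scaling
    return 1.0 - np.exp(-rate * t)

def frozen_fraction_mc(n_particles=10_000, n_sites=100,
                       mu=np.deg2rad(80), sigma=np.deg2rad(10)):
    """Monte Carlo SBM: a particle freezes if at least one of its sites
    (contact angles ~ Gaussian) nucleates."""
    theta = rng.normal(mu, sigma, size=(n_particles, n_sites))
    p_site = site_freezing_prob(np.clip(theta, 0.0, np.pi))
    p_unfrozen = np.prod(1.0 - p_site, axis=1)
    return 1.0 - p_unfrozen.mean()

print(f"frozen fraction ~ {frozen_fraction_mc():.3f}")
```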
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
NASA Astrophysics Data System (ADS)
Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael
2010-02-01
Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
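The simulation step that sits inside such an ABC scheme might look like the following toy sketch: a stochastic SIR process on a Watts-Strogatz small-world network, with a distance function comparing simulated and observed daily case counts. Network size, rates, and the distance are assumptions for illustration.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)

def simulate_outbreak(n=500, k=6, p_rewire=0.1, p_infect=0.05,
                      p_recover=0.1, days=60):
    """Toy stochastic SIR epidemic on a Watts-Strogatz small-world network;
    returns the daily new-case series (a stand-in for reported case data)."""
    g = nx.watts_strogatz_graph(n, k, p_rewire, seed=int(rng.integers(1_000_000)))
    infected, recovered = {0}, set()
    daily = []
    for _ in range(days):
        new = set()
        for i in infected:
            for j in g.neighbors(i):
                if j not in infected and j not in recovered and rng.random() < p_infect:
                    new.add(j)
        recovered |= {i for i in infected if rng.random() < p_recover}
        infected = (infected | new) - recovered
        daily.append(len(new))
    return np.array(daily)

def abc_distance(params, observed):
    """Distance used inside an ABC sampler: simulate with candidate
    parameters and compare daily case counts to the observed series."""
    p_rewire, p_infect = params
    sim = simulate_outbreak(p_rewire=p_rewire, p_infect=p_infect,
                            days=len(observed))
    return np.linalg.norm(sim - observed)
```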
Kavlock, Robert; Dix, David
2010-02-01
Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environmental Protection Agency (EPA) is developing robust and flexible computational tools that can be applied to the thousands of chemicals in commerce, and contaminant mixtures found in air, water, and hazardous-waste sites. The Office of Research and Development (ORD) Computational Toxicology Research Program (CTRP) is composed of three main elements. The largest component is the National Center for Computational Toxicology (NCCT), which was established in 2005 to coordinate research on chemical screening and prioritization, informatics, and systems modeling. The second element consists of related activities in the National Health and Environmental Effects Research Laboratory (NHEERL) and the National Exposure Research Laboratory (NERL). The third and final component consists of academic centers working on various aspects of computational toxicology and funded by the U.S. EPA Science to Achieve Results (STAR) program. Together these elements form the key components in the implementation of both the initial strategy, A Framework for a Computational Toxicology Research Program (U.S. EPA, 2003), and the newly released The U.S. Environmental Protection Agency's Strategic Plan for Evaluating the Toxicity of Chemicals (U.S. EPA, 2009a). Key intramural projects of the CTRP include digitizing legacy toxicity testing information into the toxicity reference database (ToxRefDB), predicting toxicity (ToxCast) and exposure (ExpoCast), and creating virtual liver (v-Liver) and virtual embryo (v-Embryo) systems models. U.S. EPA-funded STAR centers are also providing bioinformatics, computational toxicology data and models, and developmental toxicity data and models. The models and underlying data are being made publicly available through the Aggregated Computational Toxicology Resource (ACToR), the Distributed Structure-Searchable Toxicity (DSSTox) Database Network, and other U.S. EPA websites. While initially focused on improving the hazard identification process, the CTRP is placing increasing emphasis on using high-throughput bioactivity profiling data in systems modeling to support quantitative risk assessments, and in developing complementary higher throughput exposure models. This integrated approach will enable analysis of life-stage susceptibility, and understanding of the exposures, pathways, and key events by which chemicals exert their toxicity in developing systems (e.g., endocrine-related pathways). The CTRP will be a critical component in next-generation risk assessments utilizing quantitative high-throughput data and providing a much higher capacity for assessing chemical toxicity than is currently available.
On the performance of large Gaussian basis sets for the computation of total atomization energies
NASA Technical Reports Server (NTRS)
Martin, J. M. L.
1992-01-01
The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation consistent valence triple zeta plus polarization (cc-pVTZ) and correlation consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of ANO and correlation consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed Sigma D(e) values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings down the mean absolute error to less than 1 kcal/mol for the spdf basis sets, and about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.
1986-01-01
Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.
Model Performance of Water-Current Meters
Fulford, J.M.; ,
2002-01-01
The measurement of discharge in natural streams requires hydrographers to use accurate water-current meters that have consistent performance among meters of the same model. This paper presents the results of an investigation into the performance of four models of current meters - Price type-AA, Price pygmy, Marsh McBirney 2000 and Swoffer 2100. Tests for consistency and accuracy for six meters of each model are summarized. Variation of meter performance within a model is used as an indicator of consistency, and percent velocity error that is computed from a measured reference velocity is used as an indicator of meter accuracy. Velocities measured by each meter are also compared to the manufacturer's published or advertised accuracy limits. For the meters tested, the Price models were found to be more accurate and consistent over the range of test velocities compared to the other models. The Marsh McBirney model usually measured within its accuracy specification. The Swoffer meters did not meet the stringent Swoffer accuracy limits for all the velocities tested.
Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.
Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P
2015-10-01
Human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from a deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic property. Simulation is performed on a 3D lung geometry reconstructed from four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data which collectively provide consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
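A toy, one-dimensional illustration of the fusion idea (not the paper's actual Tikhonov formulation): per voxel, trade off agreement with the DIR displacement against agreement with the physics-based displacement through a regularization weight, which has a simple closed form.

```python
import numpy as np

def fuse_displacements(u_dir, u_cfd, lam=0.5):
    """Toy Tikhonov-style fusion of two displacement fields: minimize
    ||u - u_dir||^2 + lam * ||u - u_cfd||^2 voxel-wise, which has the
    closed form below.  The paper's regularization is more elaborate;
    this only illustrates the trade-off."""
    return (u_dir + lam * u_cfd) / (1.0 + lam)

u_dir = np.array([1.0, 2.0, 2.5])   # displacement from 4DCT registration (mm)
u_cfd = np.array([1.2, 1.8, 2.9])   # displacement from poro-elastic CFD (mm)
print(fuse_displacements(u_dir, u_cfd, lam=0.5))
```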
Aeroelastic Calculations Using CFD for a Typical Business Jet Model
NASA Technical Reports Server (NTRS)
Gibbons, Michael D.
1996-01-01
Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made among the analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.
Tutorial: Asteroseismic Stellar Modelling with AIMS
NASA Astrophysics Data System (ADS)
Lund, Mikkel N.; Reese, Daniel R.
The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
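The grid interpolation step described above maps directly onto standard SciPy tools, as in the hedged sketch below; the two-parameter grid and the tabulated quantity are synthetic stand-ins for a real stellar model grid.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(3)

# Stand-in stellar model grid: points are (mass, metallicity) and the
# tabulated quantity is some model observable; values are synthetic.
grid_points = rng.uniform([0.8, -0.5], [1.5, 0.4], size=(200, 2))
grid_values = 135.0 * grid_points[:, 0] ** -0.5 * (1 + 0.1 * grid_points[:, 1])

tess = Delaunay(grid_points)                         # tessellate the grid once
interp = LinearNDInterpolator(tess, grid_values)     # barycentric within simplexes

# Inside an MCMC step, a proposed (mass, metallicity) is mapped to a
# predicted observable by locating its simplex and interpolating linearly.
print(interp(1.02, 0.05))
```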
Noise optimization of a regenerative automotive fuel pump
NASA Astrophysics Data System (ADS)
Wang, J. F.; Feng, H. H.; Mou, X. L.; Huang, Y. X.
2017-03-01
The regenerative pump used in automotive fuel systems faces a noise problem. To understand the mechanism in detail, Computational Fluid Dynamics (CFD) and Computational Acoustic Analysis (CAA) were used together to study the fluid and acoustic characteristics of the fuel pump, using ANSYS-CFX 15.0 and LMS Virtual.Lab Rev12, respectively. The CFD model and the acoustical model were validated by a mass flow rate test and a sound pressure test, respectively. Comparing the computational and experimental results shows that sound pressure levels at the observer position are consistent at high frequencies, especially at the blade passing frequency. After validating the models, several numerical models were analyzed in the study for noise improvement. It is observed that for a configuration having a greater number of impeller blades, the noise level was significantly improved at the blade passing frequency when compared to that of the original model.
DOT National Transportation Integrated Search
2008-09-01
FSUTMS training is a major activity of the Systems Planning Office of the Florida Department of : Transportation (FDOT). The training aims to establish and maintain quality assurance for consistent : statewide modeling standards and provide up-to-dat...
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
NASA Astrophysics Data System (ADS)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea
2017-10-01
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments rely more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
NASA Technical Reports Server (NTRS)
1990-01-01
Lunar base projects, including a reconfigurable lunar cargo launcher, a thermal and micrometeorite protection system, a versatile lifting machine with robotic capabilities, a cargo transport system, the design of a road construction system for a lunar base, and the design of a device for removing lunar dust from material surfaces, are discussed. The emphasis on the Gulf of Mexico project was on the development of a computer simulation model for predicting vessel station keeping requirements. An existing code, used in predicting station keeping requirements for oil drilling platforms operating in North Shore (Alaska) waters was used as a basis for the computer simulation. Modifications were made to the existing code. The input into the model consists of satellite altimeter readings and water velocity readings from buoys stationed in the Gulf of Mexico. The satellite data consists of altimeter readings (wave height) taken during the spring of 1989. The simulation model predicts water velocity and direction, and wind velocity.
An expanded system simulation model for solar energy storage (technical report), volume 1
NASA Technical Reports Server (NTRS)
Warren, A. W.
1979-01-01
The simulation model for wind energy storage (SIMWEST) program now includes wind and/or photovoltaic systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) and is available for the UNIVAC 1100 series and the CDC 6000 series computers. The level of detail is consistent with a role of evaluating the economic feasibility as well as the general performance of wind and/or photovoltaic energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind and/or photovoltaic source/storage/application systems, from user specifications using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for conveyance of variables.
Simulation Computation of 430 Ferritic Stainless Steel Solidification
NASA Astrophysics Data System (ADS)
Pang, Ruipeng; Li, Changrong; Wang, Fuming; Hu, Lifu
The solidification structure of 430 ferritic stainless steel was calculated for the solidification process using a 3D-CAFE model under water-cooling conditions. The calculated results are consistent with those obtained from experiment. Under the water-cooling condition, the solidification structure consists of a chilled layer, a columnar grain zone, a transition zone and an equiaxed grain zone.
Vectorized Jiles-Atherton hysteresis model
NASA Astrophysics Data System (ADS)
Szymański, Grzegorz; Waszak, Michał
2004-01-01
This paper deals with vector hysteresis modeling. A vector model consisting of individual Jiles-Atherton components placed along the principal axes is proposed. Cross-axis coupling ensures general vector model properties. Minor loops are obtained using a scaling method. The model is intended for efficient finite element method computations defined in terms of the magnetic vector potential. Numerical efficiency is ensured by a differential susceptibility approach.
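For orientation, a single scalar Jiles-Atherton component can be integrated along a field history as in the sketch below; in the vector model described above, one such component sits on each principal axis and is coupled through the effective field. Parameter values are illustrative, and the update uses the common irreversible/reversible split rather than the paper's exact formulation.

```python
import numpy as np

def langevin(x):
    """L(x) = coth(x) - 1/x, with the small-argument limit x/3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-4
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def jiles_atherton_scalar(H, Ms=1.6e6, a=1100.0, alpha=1.6e-3, k=400.0, c=0.2):
    """One scalar Jiles-Atherton component driven by a field history H (A/m).
    Parameter values are illustrative only."""
    M, M_irr = 0.0, 0.0
    out = []
    for i in range(1, len(H)):
        dH = H[i] - H[i - 1]
        delta = 1.0 if dH >= 0 else -1.0
        He = H[i] + alpha * M                      # effective field
        Man = Ms * langevin(He / a)                # anhysteretic magnetization
        denom = delta * k - alpha * (Man - M_irr)
        dMirr = (Man - M_irr) / denom * dH if denom != 0 else 0.0
        M_irr += dMirr                             # irreversible component
        M = c * Man + (1.0 - c) * M_irr            # reversible + irreversible
        out.append(M)
    return np.array(out)

H = np.concatenate([np.linspace(0, 5e3, 200),
                    np.linspace(5e3, -5e3, 400),
                    np.linspace(-5e3, 5e3, 400)])
M = jiles_atherton_scalar(H)
print(M[-1])
```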
A Methodological Study of a Computer-Managed Instructional Program in High School Physics.
ERIC Educational Resources Information Center
Denton, Jon James
The purpose of this study was to develop and evaluate an instructional model which utilized the computer to produce individually prescribed instructional guides in physics at the secondary school level. The sample consisted of three classes. Of these, two were randomly selected to serve as the treatment groups, e.g., individualized instruction and…
SOCRATES, a Computer-Based Instructional System in Theory and Research. Technical Report.
ERIC Educational Resources Information Center
Stolurow, Lawrence M.
The paper describes a cybernetic computer-based instructional system, SOCRATES, the teaching model which led to its development, and some of the research accomplished with it. The acronym, SOCRATES, is System for Organizing Content to Review and Teach Educational Subject. It consists of a group of student input-output (I/O) stations wired to a…
Logistic regression of family data from retrospective study designs.
Whittemore, Alice S; Halpern, Jerry
2003-11-01
We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to estimate consistently the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate of beta(RE) and a consistent estimate for the covariance matrix of beta(RE). Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology. We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.
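Both fitting strategies are available in standard statistical software; the sketch below (with simulated data, not the ovarian-cancer data) contrasts conditional logistic regression grouped by family with ordinary unconditional logistic regression, assuming statsmodels' ConditionalLogit is available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(4)

# Synthetic family data: each family is a matched set (group); `affected`
# is disease status and `parity` a covariate.  Values are simulated.
n_fam = 300
fam = np.repeat(np.arange(n_fam), 2)
parity = rng.poisson(2, size=2 * n_fam)
logit = -0.5 - 0.3 * parity + rng.normal(0, 1, n_fam)[fam]   # family effect
affected = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"family": fam, "affected": affected, "parity": parity})

# Random-effects/CLR approach: condition on the number affected per family.
clr = ConditionalLogit(df["affected"], df[["parity"]], groups=df["family"]).fit()

# Marginal/ULR approach: ordinary logistic regression on all individuals
# (its variance would need the special correction discussed above).
ulr = sm.Logit(df["affected"], sm.add_constant(df[["parity"]])).fit()
print(clr.params, ulr.params, sep="\n")
```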
Nelson, D A; Curlee, J S; Curran, A R; Ziriax, J M; Mason, P A
2005-12-01
The localized thermal insulation value expresses a garment's thermal resistance over the region which is covered by the garment, rather than over the entire surface of a subject or manikin. The determination of localized garment insulation values is critical to the development of high-resolution models of sensible heat exchange. A method is presented for determining and validating localized garment insulation values, based on whole-body insulation values (clo units) and using computer-aided design and thermal analysis software. Localized insulation values are presented for a catalog consisting of 106 garments and verified using computer-generated models. The values presented are suitable for use on volume element-based or surface element-based models of heat transfer involving clothed subjects.
ZIMOD: A Simple Computer Model of the Zimbabwean Economy.
ERIC Educational Resources Information Center
Knox, Jon; And Others
1988-01-01
This paper describes a rationale for the construction and use of a simple consistency model of the Zimbabwean economy that incorporates an input-output matrix. The model is designed to investigate alternative industrial strategies and their consequences for the balance of payments, consumption, and overall gross domestic product growth for a…
NASA Technical Reports Server (NTRS)
Caruso, J. J.
1984-01-01
Finite element substructuring is used to predict unidirectional fiber composite hygral (moisture), thermal, and mechanical properties. COSMIC NASTRAN and MSC/NASTRAN are used to perform the finite element analysis. The results obtained from the finite element model are compared with those obtained from the simplified composite micromechanics equations. Unidirectional composite structures made of boron/HM-epoxy, S-glass/IMHS-epoxy, and AS/IMHS-epoxy are studied. The finite element analysis is performed using three dimensional isoparametric brick elements and two distinct models. The first model consists of a single cell (one fiber surrounded by matrix) to form a square. The second model uses the single cell and substructuring to form a nine cell square array. To compare computer time and results with the nine cell superelement model, another nine cell model is constructed using conventional mesh generation techniques. An independent computer program consisting of the simplified micromechanics equations is developed to predict the hygral, thermal, and mechanical properties for this comparison. The results indicate that advanced techniques can be used advantageously for fiber composite micromechanics.
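For comparison with the finite element unit-cell results, the simplest of the simplified micromechanics relations, the rule of mixtures for longitudinal and transverse moduli, looks like the sketch below; the constituent properties are illustrative, not the boron/HM-epoxy values.

```python
# Simplified micromechanics (rule of mixtures) for a unidirectional ply,
# of the kind compared against the finite element unit-cell models above.
# Property values are illustrative.
def ply_moduli(Ef, Em, Vf):
    Vm = 1.0 - Vf
    E11 = Vf * Ef + Vm * Em            # longitudinal modulus (Voigt)
    E22 = 1.0 / (Vf / Ef + Vm / Em)    # transverse modulus (Reuss approximation)
    return E11, E22

E11, E22 = ply_moduli(Ef=400e9, Em=3.5e9, Vf=0.6)   # Pa
print(f"E11 = {E11/1e9:.1f} GPa, E22 = {E22/1e9:.1f} GPa")
```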
Computational Analysis of Human Blood Flow
NASA Astrophysics Data System (ADS)
Panta, Yogendra; Marie, Hazel; Harvey, Mark
2009-11-01
Fluid flow modeling with commercially available computational fluid dynamics (CFD) software is widely used to visualize and predict physical phenomena related to various biological systems. In this presentation, a typical human aorta model was analyzed assuming laminar blood flow with compliant cardiac muscle wall boundaries. FLUENT, a commercially available finite volume software, coupled with Solidworks, a modeling software, was employed for the preprocessing, simulation and postprocessing of all the models. The analysis mainly consists of a fluid-dynamics analysis, including a calculation of the velocity field and pressure distribution in the blood, and a mechanical analysis of the deformation of the tissue and artery in terms of wall shear stress. A number of other models (e.g., T-branches and angled geometries) were previously analyzed and their results compared for consistency under similar boundary conditions. The velocities, pressures and wall shear stress distributions achieved in all models were as expected given the similar boundary conditions. The three dimensional time dependent analysis of blood flow, accounting for the effect of body forces with a compliant boundary, was also performed.
CFD Analysis of Emissions for a Candidate N+3 Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud
2015-01-01
An effort was undertaken to analyze the performance of a model Lean-Direct Injection (LDI) combustor designed to meet emissions and performance goals for NASA's N+3 program. Computational predictions of Emissions Index (EINOx) and combustor exit temperature were obtained for operation at typical power conditions expected of a small-core, high pressure-ratio (greater than 50), high T3 inlet temperature (greater than 950K) N+3 combustor. Reacting-flow computations were performed with the National Combustion Code (NCC) for a model N+3 LDI combustor, which consisted of a nine-element LDI flame-tube derived from a previous generation (N+2) thirteen-element LDI design. A consistent approach to mesh-optimization, spray-modeling and kinetics-modeling was used, in order to leverage the lessons learned from previous N+2 flame-tube analysis with the NCC. The NCC predictions for the current, non-optimized N+3 combustor indicated a 74% increase in NOx emissions compared with that of the emissions-optimized, parent N+2 LDI combustor.
Transfer of computer software technology through workshops: The case of fish bioenergetics modeling
Johnson, B.L.
1992-01-01
A three-part program is proposed to promote the availability and use of computer software packages to fishery managers and researchers. The approach consists of journal articles that announce new technologies, technical reports that serve as user's guides, and hands-on workshops that provide direct instruction to new users. Workshops, which allow experienced users to directly instruct novices in software operation and application are important, but often neglected. The author's experience with organizing and conducting bioenergetics modeling workshops suggests the optimal workshop would take 2 days, have 10-15 participants, one computer for every two users, and one instructor for every 5-6 people.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Nathan; Menikoff, Ralph
2017-02-03
Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models, in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as providing a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.
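As an example of the kind of utility calculation EOSlib automates, the Hugoniot locus for an ideal-gas EOS can be written in closed form from the Rankine-Hugoniot energy condition; the sketch below is a generic illustration, not EOSlib's API.

```python
import numpy as np

def hugoniot_pressure_ideal_gas(v, p0=1.0e5, v0=1.0, gamma=1.4):
    """Shock Hugoniot p(v) centered at (p0, v0) for an ideal-gas EOS, from
    the Rankine-Hugoniot energy condition
        e - e0 = 0.5 * (p + p0) * (v0 - v),  with  e = p * v / (gamma - 1)."""
    num = (gamma + 1.0) * v0 - (gamma - 1.0) * v
    den = (gamma + 1.0) * v - (gamma - 1.0) * v0
    return p0 * num / den

v = np.linspace(0.25, 1.0, 6)   # specific volumes above the compression limit
print(hugoniot_pressure_ideal_gas(v))
```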
Bigdata Driven Cloud Security: A Survey
NASA Astrophysics Data System (ADS)
Raja, K.; Hanifa, Sabibullah Mohamed
2017-08-01
Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth has been observed in the scale of data, or big data, generated through cloud computing. CC consists of a front end, which includes the users' computers and the software required to access the cloud network, and a back end, which consists of the various computers, servers and database systems that create the cloud. The traditional cloud ecosystem delivers its services through SaaS (Software-as-a-Service, in which end users utilize outsourced software), PaaS (Platform-as-a-Service, in which a platform is provided), IaaS (Infrastructure-as-a-Service, in which the physical environment is outsourced), and DaaS (Database-as-a-Service, in which data can be housed within a cloud), and this service-delivery model has become a powerful and popular architecture. Many challenges and issues concern security and threats, the most vital barrier for cloud computing environments. The main barrier to the adoption of CC in health care relates to data security: when placing and transmitting data over public networks, cyber attacks in any form are anticipated in CC. Hence, cloud service users need to understand the risk of data breaches and the choice of service delivery model during deployment. This survey covers CC security issues in depth (including data security in health care) so that researchers can develop robust security application models using Big Data (BD) on CC that can be created and deployed easily, since BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies. In this purview, MapReduce [12] is a good example of big data processing in a cloud environment and a model for cloud providers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
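The constraint-based backbone of this kind of analysis is flux balance analysis; the sketch below shows a toy version in which one exchange flux is pinned to a measured uptake rate, mimicking how extracellular metabolomic data constrain the model. The network, numbers, and variable names are invented and do not reflect the MetaboTools interface.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize a biomass flux subject to S v = 0
# and bounds, with the exchange flux fixed to a measured uptake rate.
#              v_upt  v_int  v_bio
S = np.array([[ 1.0,  -1.0,   0.0],    # extracellular metabolite balance
              [ 0.0,   1.0,  -1.0]])   # intracellular precursor balance
c = np.array([0.0, 0.0, -1.0])         # linprog minimizes, so negate biomass
measured_uptake = 2.5                   # from the extracellular metabolomic data
bounds = [(measured_uptake, measured_uptake), (0, 10), (0, 10)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun)
```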
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
A user's Manual for the Emulation Simulation Computer Model was published previously. The model consisted of a detailed model (emulation) of a SAWD CO2 removal subsystem which operated with much less detailed (simulation) models of a cabin, crew, and condensing and sensible heat exchangers. The purpose was to explore the utility of such an emulation/simulation combination in the design, development, and test of a piece of ARS hardware - SAWD. Extensions to this original effort are presented. The first extension is an update of the model to reflect changes in the SAWD control logic which resulted from the test. In addition, slight changes were also made to the SAWD model to permit restarting and to improve the iteration technique. The second extension is the development of simulation models for more pieces of air and water processing equipment. Models are presented for: EDC, Molecular Sieve, Bosch, Sabatier, a new condensing heat exchanger, SPE, SFWES, Catalytic Oxidizer, and multifiltration. The third extension is to create two system simulations using these models. The first system presented consists of one air and one water processing system, the second a potential Space Station air revitalization system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.
A hierarchical competing systems model of the emergence and early development of executive function
Marcovitch, Stuart; Zelazo, Philip David
2010-01-01
The hierarchical competing systems model (HCSM) provides a framework for understanding the emergence and early development of executive function – the cognitive processes underlying the conscious control of behavior – in the context of search for hidden objects. According to this model, behavior is determined by the joint influence of a developmentally invariant habit system and a conscious representational system that becomes increasingly influential as children develop. This article describes a computational formalization of the HCSM, reviews behavioral and computational research consistent with the model, and suggests directions for future research on the development of executive function. PMID:19120405
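The competing-systems idea can be caricatured numerically as below: response strength is a weighted mixture of a habit system (driven by search history) and a representational system (driven by the currently correct location), with the representational weight increasing over development. This toy rule is an illustration, not the published formalization.

```python
import numpy as np

def choose_location(history, correct, rep_weight):
    """Toy competing-systems response rule: the habit system votes for
    locations in proportion to how often they were searched before, the
    representational system votes for the currently correct location,
    and the two are mixed with weight `rep_weight`."""
    habit = history / history.sum() if history.sum() else np.ones_like(history) / len(history)
    representation = np.zeros_like(habit)
    representation[correct] = 1.0
    strength = (1.0 - rep_weight) * habit + rep_weight * representation
    return int(np.argmax(strength))

history = np.array([3.0, 0.0])        # object repeatedly found at location A
for rep_weight in (0.2, 0.8):         # "younger" vs "older" system
    pick = choose_location(history, correct=1, rep_weight=rep_weight)
    print(f"rep_weight={rep_weight}: searches location {'B' if pick else 'A'}")
```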
Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control over a Hump Model
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2006-01-01
The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.
Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control Over a Hump Model
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2006-01-01
The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.
Evaluation of Inelastic Constitutive Models for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Kaufman, A.
1983-01-01
The influence of inelastic material models on computed stress-strain states, and therefore predicted lives, was studied for thermomechanically loaded structures. Nonlinear structural analyses were performed on a fatigue specimen which was subjected to thermal cycling in fluidized beds and on a mechanically load cycled benchmark notch specimen. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic-kinematic, combined plus transient creep) were exercised. Of the plasticity models, kinematic hardening gave results most consistent with experimental observations. Life predictions using the computed strain histories at the critical location with a Strainrange Partitioning approach considerably overpredicted the crack initiation life of the thermal fatigue specimen.
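For reference, the kinematic-hardening update that such incremental plasticity models implement can be sketched in one dimension with a standard return-mapping step; moduli, hardening slope, and yield stress below are illustrative, not the benchmark specimen's properties.

```python
import numpy as np

def kinematic_hardening_1d(strain_history, E=200e3, H=20e3, sigma_y=250.0):
    """Strain-driven 1-D return-mapping update with linear kinematic
    hardening (back stress alpha).  Units: MPa; values are illustrative."""
    sigma, alpha, eps_p = 0.0, 0.0, 0.0
    out = []
    for i in range(1, len(strain_history)):
        d_eps = strain_history[i] - strain_history[i - 1]
        sigma_trial = sigma + E * d_eps
        f_trial = abs(sigma_trial - alpha) - sigma_y
        if f_trial <= 0.0:                       # elastic step
            sigma = sigma_trial
        else:                                    # plastic correction
            d_gamma = f_trial / (E + H)
            sign = np.sign(sigma_trial - alpha)
            sigma = sigma_trial - E * d_gamma * sign
            alpha += H * d_gamma * sign
            eps_p += d_gamma * sign
        out.append(sigma)
    return np.array(out)

strain = np.concatenate([np.linspace(0, 0.004, 50),
                         np.linspace(0.004, -0.004, 100)])
print(kinematic_hardening_1d(strain)[-1])
```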
Analysis and synthesis of distributed-lumped-active networks by digital computer
NASA Technical Reports Server (NTRS)
1973-01-01
The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.
A Tractable Disequilibrium Framework for Integrating Computational Thermodynamics and Geodynamics
NASA Astrophysics Data System (ADS)
Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.
2017-12-01
The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of De Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔGr) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return chemical potentials of all components and phases (as well as thermodynamic quantities for each phase, e.g. densities, heat capacity, entropies), but is not constrained to be in thermodynamic equilibrium. Allowing meta-stable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, for closed systems, these problems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well understood Fo-Fa binary phase loop. Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as other rheologies.
Time-dependent jet flow and noise computations
NASA Technical Reports Server (NTRS)
Berman, C. H.; Ramos, J. I.; Karniadakis, G. E.; Orszag, S. A.
1990-01-01
Methods for computing jet turbulence noise based on the time-dependent solution of Lighthill's (1952) differential equation are demonstrated. A key element in this approach is a flow code for solving the time-dependent Navier-Stokes equations at relatively high Reynolds numbers. Jet flow results at Re = 10,000 are presented here. This code combines a computationally efficient spectral element technique and a new self-consistent turbulence subgrid model to supply values for Lighthill's turbulence noise source tensor.
Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST.
Xu, X Q
2008-07-01
We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations is discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ, θ, γ, μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.
Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2008-07-01
We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full- f ) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.
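The method-of-lines strategy named in the TEMPEST abstracts (finite-difference spatial discretization advanced in time with implicit backward-differencing formulas) can be illustrated on a far simpler problem. The sketch below applies it to 1-D diffusion rather than the five-dimensional gyrokinetic system; the grid size and diffusivity are hypothetical.

```python
# Hedged sketch of a method-of-lines discretization: spatial derivatives are
# replaced by centered finite differences and the resulting ODE system is
# advanced with an implicit backward-differencing (BDF) integrator.
import numpy as np
from scipy.integrate import solve_ivp

nx, L, D = 101, 1.0, 0.01                       # hypothetical grid and diffusivity
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
f0 = np.exp(-((x - 0.5 * L) ** 2) / 0.01)       # initial profile

def rhs(t, f):
    dfdt = np.zeros_like(f)
    dfdt[1:-1] = D * (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2  # centered FD
    return dfdt                                  # boundary values held fixed

sol = solve_ivp(rhs, (0.0, 5.0), f0, method="BDF", rtol=1e-6)
print(sol.y[:, -1].max())                        # peak has diffused downward
```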
A computer model of the pediatric circulatory system for testing pediatric assist devices.
Giridharan, Guruprasad A; Koenig, Steven C; Mitchell, Michael; Gartner, Mark; Pantalos, George M
2007-01-01
Lumped parameter computer models of the pediatric circulatory systems for 1- and 4-year-olds were developed to predict hemodynamic responses to mechanical circulatory support devices. Model parameters, including resistance, compliance and volume, were adjusted to match hemodynamic pressure and flow waveforms, pressure-volume loops, percent systole, and heart rate of pediatric patients (n = 6) with normal ventricles. Left ventricular failure was modeled by adjusting the time-varying compliance curve of the left heart to produce aortic pressures and cardiac outputs consistent with those observed clinically. Models of pediatric continuous flow (CF) and pulsatile flow (PF) ventricular assist devices (VAD) and intraaortic balloon pump (IABP) were developed and integrated into the heart failure pediatric circulatory system models. Computer simulations were conducted to predict acute hemodynamic responses to PF and CF VAD operating at 50%, 75% and 100% support and 2.5 and 5 ml IABP operating at 1:1 and 1:2 support modes. The computer model of the pediatric circulation matched the human pediatric hemodynamic waveform morphology to within 90% and cardiac function parameters with 95% accuracy. The computer model predicted that PF VAD and IABP support restore aortic pressure pulsatility and the variation in end-systolic and end-diastolic volume, whereas these diminish with increasing CF VAD support.
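A lumped-parameter circulation of the kind described above can be illustrated with a very small resistance-compliance (Windkessel) network. The sketch below uses hypothetical parameter values and a single arterial compartment, far simpler than the paper's fitted pediatric models.

```python
# Hedged sketch: two-element Windkessel (arterial compliance C, peripheral
# resistance R) driven by a prescribed pulsatile inflow. All values are
# hypothetical placeholders, not the fitted pediatric parameters.
import numpy as np

R, C = 1.2, 1.1              # mmHg*s/mL and mL/mmHg (hypothetical)
hr, dt = 100 / 60.0, 1e-3    # heart rate (beats/s), time step (s)
T = 1.0 / hr

def inflow(t, stroke_volume=30.0, systole_frac=0.35):
    """Half-sine ejection during systole, zero in diastole (hypothetical waveform)."""
    tc, ts = t % T, systole_frac * T
    return (np.pi * stroke_volume / (2 * ts)) * np.sin(np.pi * tc / ts) if tc < ts else 0.0

p = 60.0                     # initial aortic pressure (mmHg)
for n in range(int(10 * T / dt)):        # run ten beats
    q = inflow(n * dt)
    p += dt * (q - p / R) / C            # C dp/dt = Q_in - p/R
print(f"end pressure ~ {p:.1f} mmHg")
```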
Long period nodal motion of sun synchronous orbits
NASA Technical Reports Server (NTRS)
Duck, K. I.
1975-01-01
An approximate model is formulated for assessing the perturbations that significantly affect long-term nodal motion of sun-synchronous orbits. Computer simulations with several independent computer programs consider zonal and tesseral gravitational harmonics, third-body gravitational disturbances induced by the sun and the moon, and atmospheric drag. A pendulum model consisting of even zonal harmonics through order 4 and solar gravity dominated the nodal motion approximation. This pendulum motion results from solar gravity inducing an inclination oscillation which couples into the nodal precession induced by the earth's oblateness. The pendulum model correlated well with simulations and observed flight data.
NASA Technical Reports Server (NTRS)
Wakelyn, N. T.; Gregory, G. L.
1980-01-01
Data for one day of the 1977 southeastern Virginia urban plume study are compared with computer predictions from a traveling air parcel model using a contemporary photochemical mechanism with a minimal description of nonmethane hydrocarbon (NMHC) constitution and chemistry. With measured initial NOx and O3 concentrations and a current separate estimate of urban source loading input to the model, and for a variation of initial NMHC over a reasonable range, an ozone increase over the day is predicted from the photochemical simulation which is consistent with the flight path averaged airborne data.
Vapor-liquid equilibrium thermodynamics of N2 + CH4 - Model and Titan applications
NASA Technical Reports Server (NTRS)
Thompson, W. R.; Zollweg, John A.; Gabis, David H.
1992-01-01
A thermodynamic model is presented for vapor-liquid equilibrium in the N2 + CH4 system, which is implicated in calculations of the Titan tropospheric clouds' vapor-liquid equilibrium thermodynamics. This model imposes constraints on the consistency of experimental equilibrium data, and embodies temperature effects by encompassing enthalpy data; it readily calculates the saturation criteria, condensate composition, and latent heat for a given pressure-temperature profile of the Titan atmosphere. The N2 content of condensate is about half of that computed from Raoult's law, and about 30 percent greater than that computed from Henry's law.
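The two limiting solubility laws that the abstract compares can be written down in a few lines. The numbers below are hypothetical placeholders chosen only to show the form of the calculation, not Titan atmospheric data.

```python
# Hedged sketch of the Raoult's-law and Henry's-law limits mentioned above.
# All numbers are hypothetical placeholders.
p_N2 = 1.4e5      # partial pressure of N2 (Pa), hypothetical
P_sat_N2 = 1.2e6  # saturation vapor pressure of pure N2 at temperature T (Pa), hypothetical
k_H = 2.0e6       # Henry's law constant for N2 in liquid CH4 (Pa), hypothetical

x_raoult = p_N2 / P_sat_N2   # ideal-solution (Raoult's law) liquid mole fraction
x_henry = p_N2 / k_H         # dilute-solution (Henry's law) liquid mole fraction
print(f"Raoult: x_N2 = {x_raoult:.3f}, Henry: x_N2 = {x_henry:.3f}")
```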
Computational Model for Ethnographically Informed Systems Design
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; James, Anne; Shah, Nazaraf; Terken, Jacques
This paper presents a computational model for ethnographically informed systems design that can support complex and distributed cooperative activities. This model is based on an ethnographic framework consisting of three important dimensions (i.e., distributed coordination, awareness of work, and plans and procedures), and the BDI (Belief, Desire and Intention) model of intelligent agents. The ethnographic framework is used to conduct ethnographic analysis and to organise ethnographically driven information into three dimensions, whereas the BDI model allows such information to be mapped onto the underlying concepts of multi-agent systems. The advantage of this model is that it is built upon an adaptation of existing mature and well-understood techniques. By the use of this model, we also address the cognitive aspects of systems design.
Reference Solutions for Benchmark Turbulent Flows in Three Dimensions
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Pandya, Mohagna J.; Rumsey, Christopher L.
2016-01-01
A grid convergence study is performed to establish benchmark solutions for turbulent flows in three dimensions (3D) in support of the turbulence-model verification campaign at the Turbulence Modeling Resource (TMR) website. The three benchmark cases are subsonic flows around a 3D bump and a hemisphere-cylinder configuration, and a supersonic internal flow through a square duct. Reference solutions are computed for the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model using a linear eddy-viscosity model for the external flows and a nonlinear eddy-viscosity model based on a quadratic constitutive relation for the internal flow. The study involves three widely-used practical computational fluid dynamics codes developed and supported at NASA Langley Research Center: FUN3D, USM3D, and CFL3D. Reference steady-state solutions computed with these three codes on families of consistently refined grids are presented. Grid-to-grid and code-to-code variations are described in detail.
SAMICS Validation. SAMICS Support Study, Phase 3
NASA Technical Reports Server (NTRS)
1979-01-01
SAMICS provides a consistent basis for estimating array costs and comparing production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long-term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation. The main computational objectives were: 1. To develop computationally efficient, but physically based, parameterizations of estuary and continental shelf mixing processes for use in an Earth System Model (CESM). 2. To develop a two-way nested regional modeling framework in order to dynamically downscale the climate response of particular coastal ocean regions and to upscale the impact of the regional coastal processes to the global climate in an Earth System Model (CESM). 3. To develop computational infrastructure to enhance the efficiency of data transfer between specific sources and destinations, i.e., a point-to-point communication capability (used in objective 1), within POP, the ocean component of CESM.
Lee, Chi-Seung; Lee, Jae-Myung; Youn, BuHyun; Kim, Hyung-Sik; Shin, Jong Ki; Goh, Tae Sik; Lee, Jung Sub
2017-01-01
A new type of constitutive model and its computational implementation procedure for the simulation of a trabecular bone are proposed in the present study. A yield surface-independent Frank-Brockman elasto-viscoplastic model is introduced to express the nonlinear material behavior such as softening beyond yield point, plateau, and densification under compressive loads. In particular, the hardening- and softening-dominant material functions are introduced and adopted in the plastic multiplier to describe each nonlinear material behavior separately. In addition, the elasto-viscoplastic model is transformed into an implicit type discrete model, and is programmed as a user-defined material subroutine in commercial finite element analysis code. In particular, the consistent tangent modulus method is proposed to improve the computational convergence and to save computational time during finite element analysis. Through the developed material library, the nonlinear stress-strain relationship is analyzed qualitatively and quantitatively, and the simulation results are compared with the results of compression test on the trabecular bone to validate the proposed constitutive model, computational method, and material library. Copyright © 2016 Elsevier Ltd. All rights reserved.
Toward improved calibration of watershed models: multisite many objective measures of information
USDA-ARS?s Scientific Manuscript database
This paper presents a computational framework for incorporation of disparate information from observed hydrologic responses at multiple locations into the calibration of watershed models. The framework consists of four components: (i) an a-priori characterization of system behavior; (ii) a formal an...
Comparison of the Melting Temperatures of Classical and Quantum Water Potential Models
NASA Astrophysics Data System (ADS)
Du, Sen; Yoo, Soohaeng; Li, Jinjin
2017-08-01
As theoretical approaches and technical methods improve over time, the field of computer simulations for water has greatly progressed. Water potential models become much more complex when additional interactions and advanced theories are considered. Macroscopic properties of water predicted by computer simulations using water potential models are expected to be consistent with experimental outcomes. As such, discrepancies between computer simulations and experiments could be a criterion to comment on the performances of various water potential models. Notably, water can occur not only as liquid phases but also as solid and vapor phases. Therefore, the melting temperature related to the solid and liquid phase equilibrium is an effective parameter to judge the performances of different water potential models. As a mini review, our purpose is to introduce some water models developed in recent years and the melting temperatures obtained through simulations with such models. Moreover, some explanations referred to in the literature are described for the additional evaluation of the water potential models.
Coherence in the Visual Imagination.
Vertolli, Michael O; Kelly, Matthew A; Davies, Jim
2018-04-01
An incoherent visualization is when aspects of different senses of a word (e.g., the biological "mouse" vs. the computer "mouse") are present in the same visualization (e.g., a visualization of a biological mouse in the same image with a computer tower). We describe and implement a new model of creating contextual coherence in the visual imagination called Coherencer, based on the SOILIE model of imagination. We show that Coherencer is able to generate scene descriptions that are more coherent than SOILIE's original approach as well as a parallel connectionist algorithm that is considered competitive in the literature on general coherence. We also show that co-occurrence probabilities are a better association representation than holographic vectors and that better models of coherence improve the resulting output independent of the association type that is used. Theoretically, we show that Coherencer is consistent with other models of cognitive generation. In particular, Coherencer is a similar, but more cognitively plausible model than the C3 model of concept combination created by Costello and Keane (2000). We show that Coherencer is also consistent with both the modal schematic indices of perceptual symbol systems theory (Barsalou, 1999) and the amodal contextual constraints of Thagard's (2002) theory of coherence. Finally, we describe how Coherencer is consistent with contemporary research on the hippocampus, and we show evidence that the process of making a visualization coherent is serial. Copyright © 2017 Cognitive Science Society, Inc.
Computing by physical interaction in neurons.
Aur, Dorian; Jog, Mandar; Poznanski, Roman R
2011-12-01
The electrodynamics of action potentials represents the fundamental level where information is integrated and processed in neurons. The Hodgkin-Huxley model cannot explain the non-stereotyped spatial charge density dynamics that occur during action potential propagation. Revealed in experiments as spike directivity, the non-uniform charge density dynamics within neurons carry meaningful information and suggest that fragments of information regarding our memories are endogenously stored in structural patterns at a molecular level and are revealed only during spiking activity. The main conceptual idea is that under the influence of electric fields, efficient computation by interaction occurs between charge densities embedded within molecular structures and the transient developed flow of electrical charges. This process of computation underlying electrical interactions and molecular mechanisms at the subcellular level is dissimilar from spiking neuron models that are completely devoid of physical interactions. Computation by interaction describes a more powerful continuous model of computation than the one that consists of discrete steps as represented in Turing machines.
A computational continuum model of poroelastic beds
Zampogna, G. A.
2017-01-01
Despite the ubiquity of fluid flows interacting with porous and elastic materials, we lack a validated non-empirical macroscale method for characterizing the flow over and through a poroelastic medium. We propose a computational tool to describe such configurations by deriving and validating a continuum model for the poroelastic bed and its interface with the above free fluid. We show that, using stress continuity condition and slip velocity condition at the interface, the effective model captures the effects of small changes in the microstructure anisotropy correctly and predicts the overall behaviour in a physically consistent and controllable manner. Moreover, we show that the performance of the effective model is accurate by validating with fully microscopic resolved simulations. The proposed computational tool can be used in investigations in a wide range of fields, including mechanical engineering, bio-engineering and geophysics. PMID:28413355
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
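The linear-programming step described above (optimizing control trims against linearized sensitivities of propulsion system parameters) can be illustrated with a generic sketch. The sensitivity matrix, trim bounds, and objective coefficients below are hypothetical placeholders, not the PSC algorithm's actual onboard models.

```python
# Hedged sketch of a linear-programming trim optimization. All coefficients are
# hypothetical stand-ins for onboard propulsion-model sensitivities.
import numpy as np
from scipy.optimize import linprog

# Objective: maximize predicted thrust change = c . dx (linprog minimizes, so negate)
c = -np.array([0.8, 0.5, 0.3])          # thrust sensitivity to each trim (hypothetical)

# Constraint: keep the predicted fan stall-margin loss within an allowed budget
A_ub = np.array([[0.4, 0.2, 0.1]])      # stall-margin sensitivity (hypothetical)
b_ub = np.array([1.0])                  # allowed stall-margin decrement

bounds = [(-2.0, 2.0)] * 3              # trim authority limits (hypothetical)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal trims:", res.x)          # would define the next operating point
```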
Computational Intelligence‐Assisted Understanding of Nature‐Inspired Superhydrophobic Behavior
Zhang, Xia; Ding, Bei; Dixon, Sebastian C.
2017-01-01
In recent years, state-of-the-art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature-inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials. PMID:29375975
Computational Intelligence-Assisted Understanding of Nature-Inspired Superhydrophobic Behavior.
Zhang, Xia; Ding, Bei; Cheng, Ran; Dixon, Sebastian C; Lu, Yao
2018-01-01
In recent years, state-of-the-art computational modeling of physical and chemical systems has shown itself to be an invaluable resource in the prediction of the properties and behavior of functional materials. However, construction of a useful computational model for novel systems in both academic and industrial contexts often requires a great depth of physicochemical theory and/or a wealth of empirical data, and a shortage in the availability of either frustrates the modeling process. In this work, computational intelligence is instead used, including artificial neural networks and evolutionary computation, to enhance our understanding of nature-inspired superhydrophobic behavior. The relationships between experimental parameters (water droplet volume, weight percentage of nanoparticles used in the synthesis of the polymer composite, and distance separating the superhydrophobic surface and the pendant water droplet in adhesive force measurements) and multiple objectives (water droplet contact angle, sliding angle, and adhesive force) are built and weighted. The obtained optimal parameters are consistent with the experimental observations. This new approach to materials modeling has great potential to be applied more generally to aid design, fabrication, and optimization for myriad functional materials.
Self-consistent core-pedestal transport simulations with neural network accelerated models
Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...
2017-07-12
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
Self-consistent core-pedestal transport simulations with neural network accelerated models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
Self-consistent core-pedestal transport simulations with neural network accelerated models
NASA Astrophysics Data System (ADS)
Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.
2017-08-01
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
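A minimal sketch of the surrogate idea behind these abstracts, assuming a hypothetical scalar "theory-based" flux function and a small scikit-learn multilayer perceptron; the real work regresses TGLF and EPED1 over many more inputs and outputs.

```python
# Hedged sketch: train a small neural network to reproduce an expensive
# "theory-based" model so it can later be evaluated much faster. The target
# function, input ranges, and network size are hypothetical stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5000, 4))        # plasma-parameter inputs (hypothetical)

def slow_theory_model(x):                         # placeholder for e.g. a transport code
    return np.sin(3 * x[:, 0]) * x[:, 1] + np.exp(-x[:, 2]) * x[:, 3]

y = slow_theory_model(X)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X, y)                               # offline training step
print("surrogate R^2 on training data:", surrogate.score(X, y))
```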
EQS Goes R: Simulations for SEM Using the Package REQS
ERIC Educational Resources Information Center
Mair, Patrick; Wu, Eric; Bentler, Peter M.
2010-01-01
The REQS package is an interface between the R environment of statistical computing and the EQS software for structural equation modeling. The package consists of 3 main functions that read EQS script files and import the results into R, call EQS script files from R, and run EQS script files from R and import the results after EQS computations.…
Ozone measurement systems improvements studies
NASA Technical Reports Server (NTRS)
Thomas, R. W.; Guard, K.; Holland, A. C.; Spurling, J. F.
1974-01-01
Results are summarized of an initial study of techniques for measuring atmospheric ozone, carried out as the first phase of a program to improve ozone measurement techniques. The study concentrated on two measurement systems, the electrochemical cell (ECC) ozonesonde and the Dobson ozone spectrophotometer, and consisted of two tasks. The first task consisted of error modeling and system error analysis of the two measurement systems. Under the second task, a Monte-Carlo model of the Dobson ozone measurement technique was developed and programmed for computer operation.
Fletcher, Alexander G; Osborne, James M; Maini, Philip K; Gavaghan, David J
2013-11-01
The dynamic behaviour of epithelial cell sheets plays a central role during development, growth, disease and wound healing. These processes occur as a result of cell adhesion, migration, division, differentiation and death, and involve multiple processes acting at the cellular and molecular level. Computational models offer a useful means by which to investigate and test hypotheses about these processes, and have played a key role in the study of cell-cell interactions. However, the necessarily complex nature of such models means that it is difficult to make accurate comparison between different models, since it is often impossible to distinguish between differences in behaviour that are due to the underlying model assumptions, and those due to differences in the in silico implementation of the model. In this work, an approach is described for the implementation of vertex dynamics models, a discrete approach that represents each cell by a polygon (or polyhedron) whose vertices may move in response to forces. The implementation is undertaken in a consistent manner within a single open source computational framework, Chaste, which comprises fully tested, industrial-grade software that has been developed using an agile approach. This framework allows one to easily change assumptions regarding force generation and cell rearrangement processes within these models. The versatility and generality of this framework is illustrated using a number of biological examples. In each case we provide full details of all technical aspects of our model implementations, and in some cases provide extensions to make the models more generally applicable. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity
Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott
2008-01-01
The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic forcing. Copyright 2008 by the American Geophysical Union.
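The two ingredients named above, a log-law velocity profile used to estimate boundary shear stress and erosion proportional to that stress, can be sketched in a few lines. The roughness length, flow depth, velocity, and erodibility constant below are hypothetical.

```python
# Hedged sketch: estimate boundary shear stress from a log-law velocity profile
# and lower the bed in proportion to that stress. All values are hypothetical.
import numpy as np

kappa, rho = 0.41, 1000.0   # von Karman constant, water density (kg/m^3)
z0 = 0.005                  # roughness length (m), hypothetical
h = 1.5                     # flow depth (m), hypothetical
u_h = 1.2                   # velocity near the surface (m/s), hypothetical

u_star = kappa * u_h / np.log(h / z0)   # invert u(z) = (u*/kappa) ln(z/z0) at z = h
tau_b = rho * u_star**2                 # boundary shear stress (Pa)

k_e = 1.0e-6                # erodibility (m per Pa per s), hypothetical
dt = 3600.0                 # one hour of flow
erosion = k_e * tau_b * dt  # detachment-limited lowering, E proportional to tau_b
print(f"tau_b = {tau_b:.2f} Pa, bed lowering = {erosion * 1000:.2f} mm")
```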
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
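The bootstrap idea mentioned above, resampling with replacement to obtain a confidence interval for a scalar statistic, reduces to a few lines; the data here are synthetic.

```python
# Hedged sketch: percentile bootstrap confidence interval for the mean of a
# scalar sample. The data are synthetic; real applications would resample
# summary statistics from replicated point patterns instead.
import numpy as np

rng = np.random.default_rng(42)
data = rng.gamma(shape=2.0, scale=3.0, size=80)   # synthetic observations

n_boot = 10_000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(n_boot)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {data.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```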
NASA Astrophysics Data System (ADS)
Thomas, W. A.; McAnally, W. H., Jr.
1985-07-01
TABS-2 is a generalized numerical modeling system for open-channel flows, sedimentation, and constituent transport. It consists of more than 40 computer programs to perform modeling and related tasks. The major modeling components--RMA-2V, STUDH, and RMA-4--calculate two-dimensional, depth-averaged flows, sedimentation, and dispersive transport, respectively. The other programs in the system perform digitizing, mesh generation, data management, graphical display, output analysis, and model interfacing tasks. Utilities include file management and automatic generation of computer job control instructions. TABS-2 has been applied to a variety of waterways, including rivers, estuaries, bays, and marshes. It is designed for use by engineers and scientists who may not have a rigorous computer background. Use of the various components is described in Appendices A-O. The bound version of the report does not include the appendices. A looseleaf form with Appendices A-O is distributed to system users.
2-D Model for Normal and Sickle Cell Blood Microcirculation
NASA Astrophysics Data System (ADS)
Tekleab, Yonatan; Harris, Wesley
2011-11-01
Sickle cell disease (SCD) is a genetic disorder that alters the red blood cell (RBC) structure and function such that hemoglobin (Hb) cannot effectively bind and release oxygen. Previous computational models have been designed to study the microcirculation for insight into blood disorders such as SCD. Our novel 2-D computational model represents a fast, time-efficient method developed to analyze flow dynamics, O2 diffusion, and cell deformation in the microcirculation. The model uses a finite difference, Crank-Nicolson scheme to compute the flow and O2 concentration, and the level set computational method to advect the RBC membrane on a staggered grid. Several sets of initial and boundary conditions were tested. Simulation data indicate a few parameters to be significant in the perturbation of the blood flow and O2 concentration profiles. Specifically, the Hill coefficient, arterial O2 partial pressure, O2 partial pressure at 50% Hb saturation, and cell membrane stiffness are significant factors. Results were found to be consistent with those of Le Floch [2010] and Secomb [2006].
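A minimal sketch of the Crank-Nicolson finite-difference idea used for the O2 field, applied here to plain 1-D diffusion with fixed-concentration boundaries; the grid size, diffusivity, and time step are hypothetical, and the coupling to flow and to the moving RBC membrane is omitted.

```python
# Hedged sketch: Crank-Nicolson time stepping for 1-D diffusion,
# (I - r/2 * L) c^{n+1} = (I + r/2 * L) c^n with Dirichlet boundaries.
import numpy as np

nx, D, dx, dt = 51, 1.0e-3, 0.02, 0.05
r = D * dt / dx**2
c = np.zeros(nx)
c[0] = 1.0                                     # fixed O2 concentration at one end

# Interior Laplacian as a dense tridiagonal matrix (boundary rows left zero)
L = np.zeros((nx, nx))
for i in range(1, nx - 1):
    L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0

A = np.eye(nx) - 0.5 * r * L
B = np.eye(nx) + 0.5 * r * L
for step in range(200):
    c = np.linalg.solve(A, B @ c)              # implicit half + explicit half
print(c[:5])                                    # concentration diffusing inward
```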
A Second Order Semi-Discrete Cosserat Rod Model Suitable for Dynamic Simulations in Real Time
NASA Astrophysics Data System (ADS)
Lang, Holger; Linn, Joachim
2009-09-01
We present an alternative approach for a semi-discrete viscoelastic Cosserat rod model that allows both fast dynamic computations within milliseconds and accurate results compared to detailed finite element solutions. The model is able to represent extension, shearing, bending and torsion. For inner dissipation, a consistent damping potential from Antman is chosen. The continuous equations of motion, which consist of a system of nonlinear hyperbolic partial differential-algebraic equations, are derived from a two-dimensional variational principle. The semi-discrete balance equations are obtained by spatial finite difference schemes on a staggered grid and standard index reduction techniques. The right-hand side of the model and its Jacobian can be chosen free of higher algebraic (e.g. root) or transcendent (e.g. trigonometric or exponential) functions and is therefore extremely cheap to evaluate numerically. For the time integration of the system, we use well-established stiff solvers. As our model yields computational times within milliseconds, it is suitable for interactive manipulation. It reflects structural mechanics solutions sufficiently accurately, as comparison with detailed finite element results shows.
Unfolding of Proteins: Thermal and Mechanical Unfolding
NASA Technical Reports Server (NTRS)
Hur, Joe S.; Darve, Eric
2004-01-01
We have employed a Hamiltonian model based on a self-consistent Gaussian approximation to examine the unfolding process of proteins in external - both mechanical and thermal - force fields. The motivation was to investigate the unfolding pathways of proteins by including only the essence of the important interactions of the native-state topology. Furthermore, if such a model can indeed correctly predict the physics of protein unfolding, it can complement more computationally expensive simulations and theoretical work. The self-consistent Gaussian approximation by Micheletti et al. has been incorporated in our model to make the model mathematically tractable by significantly reducing the computational cost. All thermodynamic properties and pair contact probabilities are calculated by simply evaluating the values of a series of incomplete gamma functions in an iterative manner. We have compared our results to previous molecular dynamics simulation and experimental data for the mechanical unfolding of the giant muscle protein Titin (1TIT). Our model, especially in light of its simplicity and excellent agreement with experiment and simulation, demonstrates the basic physical elements necessary to capture the mechanism of protein unfolding in an external force field.
NASA Technical Reports Server (NTRS)
Beers, B. L.; Pine, V. W.; Hwang, H. C.; Bloomberg, H. W.; Lin, D. L.; Schmidt, M. J.; Strickland, D. J.
1979-01-01
The model consists of four phases: single electron dynamics, single electron avalanche, negative streamer development, and tree formation. Numerical algorithms and computer code implementations are presented for the first three phases. An approach to developing a code description of the fourth phase is discussed. Numerical results are presented for a crude material model of Teflon.
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
Computer model of Raritan River Basin water-supply system in central New Jersey
Dunne, Paul; Tasker, Gary D.
1996-01-01
This report describes a computer model of the Raritan River Basin water-supply system in central New Jersey. The computer model provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system during extended periods of below-average precipitation. The computer model is a continuity-accounting model consisting of a series of interconnected nodes. At each node, the inflow volume, outflow volume, and change in storage are determined and recorded for each month. The model runs with a given set of operating rules and water-use requirements including releases, pumpages, and diversions. The model can be used to assess the hypothetical performance of the Raritan River Basin water-supply system in past years under alternative sets of operating rules. It also can be used to forecast the likelihood of specified outcomes, such as the depletion of reservoir contents below a specified threshold or of streamflows below statutory minimum passing flows, for a period of up to 12 months. The model was constructed on the basis of current reservoir capacities and the natural, unregulated monthly runoff values recorded at U.S. Geological Survey streamflow-gaging stations in the basin.
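A minimal sketch of the continuity-accounting idea, assuming a single reservoir node with hypothetical monthly inflows, demands, capacity, and minimum passing flow; the real model links many nodes and applies operating rules.

```python
# Hedged sketch: monthly continuity accounting at a single reservoir node.
# Inflows, demands, capacity, and the minimum passing flow are hypothetical.
capacity = 100.0        # reservoir capacity (million gallons)
storage = 80.0          # starting storage
min_passing = 5.0       # statutory minimum release each month

inflow = [30, 22, 15, 10, 8, 6, 5, 7, 12, 20, 25, 28]       # hypothetical
demand = [18, 18, 20, 22, 25, 28, 30, 29, 24, 20, 18, 18]   # hypothetical

for month, (q_in, q_dem) in enumerate(zip(inflow, demand), start=1):
    release = q_dem + min_passing
    storage += q_in - release                 # continuity: dS = inflow - outflow
    spill = max(0.0, storage - capacity)      # excess over capacity spills
    storage = min(max(storage, 0.0), capacity)
    print(f"month {month:2d}: storage = {storage:6.1f}, spill = {spill:4.1f}")
```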
A computational model of the human visual cortex
NASA Astrophysics Data System (ADS)
Albus, James S.
2008-04-01
The brain is first and foremost a control system that is capable of building an internal representation of the external world, and using this representation to make decisions, set goals and priorities, formulate plans, and control behavior with intent to achieve its goals. The computational model proposed here assumes that this internal representation resides in arrays of cortical columns. More specifically, it models each cortical hypercolumn together with its underlying thalamic nuclei as a Fundamental Computational Unit (FCU) consisting of a frame-like data structure (containing attributes and pointers) plus the computational processes and mechanisms required to maintain it. In sensory-processing areas of the brain, FCUs enable segmentation, grouping, and classification. Pointers stored in FCU frames link pixels and signals to objects and events in situations and episodes that are overlaid with meaning and emotional values. In behavior-generating areas of the brain, FCUs make decisions, set goals and priorities, generate plans, and control behavior. Pointers are used to define rules, grammars, procedures, plans, and behaviors. It is suggested that it may be possible to reverse engineer the human brain at the FCU level of fidelity using next-generation massively parallel computer hardware and software. Key Words: computational modeling, human cortex, brain modeling, reverse engineering the brain, image processing, perception, segmentation, knowledge representation
Hierarchical Bayesian Models of Subtask Learning
ERIC Educational Resources Information Center
Anglim, Jeromy; Wynton, Sarah K. A.
2015-01-01
The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…
NASA Astrophysics Data System (ADS)
Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid
2017-12-01
Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.
Collisional transport across the magnetic field in drift-fluid models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madsen, J., E-mail: jmad@fysik.dtu.dk; Naulin, V.; Nielsen, A. H.
2016-03-15
Drift-ordered fluid models are widely applied in studies of low-frequency turbulence in the edge and scrape-off layer regions of magnetically confined plasmas. Here, we show how collisional transport across the magnetic field is self-consistently incorporated into drift-fluid models without altering the drift-fluid energy integral. We demonstrate that the inclusion of collisional transport in drift-fluid models gives rise to diffusion of particle density, momentum, and pressures in drift-fluid turbulence models and, thereby, obviates the customary use of artificial diffusion in turbulence simulations. We further derive a computationally efficient, two-dimensional model, which can be time integrated for several turbulence de-correlation times using only limited computational resources. The model describes interchange turbulence in a two-dimensional plane perpendicular to the magnetic field located at the outboard midplane of a tokamak. The model domain has two regions modeling open and closed field lines. The model employs a computationally expedient model for collisional transport. Numerical simulations show good agreement between the full and the simplified model for collisional transport.
Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System
NASA Astrophysics Data System (ADS)
Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao
2017-12-01
This paper studies a fast channel modeling method to solve the problem of reflection channel gain for multiple-input multiple-output visible light communication (MIMO-VLC) systems. To limit the computational complexity, which grows with the number of reflections, no more than 3 reflections are taken into consideration in VLC. We treat a higher-order reflection link as a composition of many line-of-sight links and introduce a reflection residual component to characterize higher-order reflections (more than 2 reflections). We present computer simulation results for the point-to-point channel impulse response, received optical power, and received signal-to-noise ratio. Based on theoretical analysis and simulation results, the proposed method can effectively reduce the computational complexity of modeling higher-order reflections in channel modeling.
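For the line-of-sight links from which higher-order reflections are composed, the textbook Lambertian DC channel gain expression common in VLC modeling can be sketched as below. The LED half-power angle, detector area, field of view, and link geometry are hypothetical, optical filter and concentrator gains are omitted, and this is the standard formula rather than the paper's specific model.

```python
# Hedged sketch: textbook Lambertian line-of-sight DC gain for a VLC link,
# H_LOS = (m + 1) * A / (2 * pi * d^2) * cos(phi)^m * cos(psi) for psi <= FOV.
# Geometry and device parameters are hypothetical placeholders.
import numpy as np

half_power_angle = np.radians(60.0)                   # LED semi-angle at half power
m = -np.log(2) / np.log(np.cos(half_power_angle))     # Lambertian order
A_det = 1.0e-4                                        # detector area (m^2)
fov = np.radians(70.0)                                # receiver field of view

def h_los(d, phi, psi):
    """DC channel gain of a single LOS link (zero outside the receiver FOV)."""
    if psi > fov:
        return 0.0
    return (m + 1) * A_det / (2 * np.pi * d**2) * np.cos(phi)**m * np.cos(psi)

print(h_los(d=2.0, phi=np.radians(20.0), psi=np.radians(20.0)))
```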
Anisn-Dort Neutron-Gamma Flux Intercomparison Exercise for a Simple Testing Model
NASA Astrophysics Data System (ADS)
Boehmer, B.; Konheiser, J.; Borodkin, G.; Brodkin, E.; Egorov, A.; Kozhevnikov, A.; Zaritsky, S.; Manturov, G.; Voloschenko, A.
2003-06-01
The ability of transport codes ANISN, DORT, ROZ-6, MCNP and TRAMO, as well as nuclear data libraries BUGLE-96, ABBN-93, VITAMIN-B6 and ENDF/B-6 to deliver consistent gamma and neutron flux results was tested in the calculation of a one-dimensional cylindrical model consisting of a homogeneous core and an outer zone with a single material. Model variants with H2O, Fe, Cr and Ni in the outer zones were investigated. The results are compared with MCNP-ENDF/B-6 results. Discrepancies are discussed. The specified test model is proposed as a computational benchmark for testing calculation codes and data libraries.
NASA Astrophysics Data System (ADS)
Krimi, Abdelkader; Rezoug, Mehdi; Khelladi, Sofiane; Nogueira, Xesús; Deligant, Michael; Ramírez, Luis
2018-04-01
In this work, a consistent Smoothed Particle Hydrodynamics (SPH) model for the simulation of interfacial multiphase fluid flows is proposed. A modification to the Continuum Stress Surface (CSS) formulation [1] to enhance the stability near the fluid interface is developed in the framework of the SPH method. A non-conservative first-order consistency operator is used to compute the divergence of the stress surface tensor. This formulation retains all the advantages of the one proposed by Adami et al. [2] and, in addition, it can be applied to simulations with more than two fluid phases. Moreover, the generalized wall boundary conditions [3] are modified in order to be well adapted to multiphase fluid flows with different densities and viscosities. This modification of the generalized wall boundary conditions allows the SPH method to be applied to wall-bounded multiphase flows. In this work we also present a particle redistribution strategy, as an extension of the damping technique presented in [3], to smooth the initial transient phase of gravitational multiphase fluid flow simulations. Several computational tests are investigated to show the accuracy, convergence and applicability of the proposed SPH interfacial multiphase model.
Enabling Grid Computing resources within the KM3NeT computing model
NASA Astrophysics Data System (ADS)
Filippidis, Christos
2016-04-01
KM3NeT is a future European deep-sea research infrastructure hosting a new generation neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Jih-Sheng
This paper introduces control-system-design software packages, SIMNON and MATLAB/SIMULINK, for power electronics system simulation. A complete power electronics system typically consists of a rectifier bridge along with its smoothing capacitor, an inverter, and a motor. The system components, which may be discrete or continuous and linear or nonlinear, are modeled with mathematical equations. Inverter control methods, such as pulse-width modulation and hysteresis current control, are expressed in either computer algorithms or digital circuits. After describing component models and control methods, computer programs are then developed for complete system simulation. Simulation results are mainly used for studying system performance, such as input and output current harmonics, torque ripples, and speed responses. Key computer programs and simulation results are demonstrated for educational purposes.
Kononowicz, Andrzej A; Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil
2014-01-23
Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients' interactivity by enriching them with computational models of physiological and pathological processes. The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of changing framework variables on altering perceptions of integration. The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element included three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; advanced, including dynamic solution of the model. The third element is the description of four integration strategies, and the last element consisted of evaluation profiles specifying the relevant feasibility features and acceptance thresholds for specific purposes. The group of experts who evaluated the virtual patient exemplar found higher integration more interesting, but at the same time they were more concerned with the validity of the result. The observed differences were not statistically significant. This paper outlines a framework for the integration of computational models into virtual patients. The opportunities and challenges of model exploitation are discussed from a number of user perspectives, considering different levels of model integration. The long-term aim for future research is to isolate the most crucial factors in the framework and to determine their influence on the integration outcome.
Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil
2014-01-01
Background Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients’ interactivity by enriching them with computational models of physiological and pathological processes. Objective The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of changing framework variables on altering perceptions of integration. Methods The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. Results The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element included three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; advanced, including dynamic solution of the model. The third element is the description of four integration strategies, and the last element consisted of evaluation profiles specifying the relevant feasibility features and acceptance thresholds for specific purposes. The group of experts who evaluated the virtual patient exemplar found higher integration more interesting, but at the same time they were more concerned with the validity of the result. The observed differences were not statistically significant. Conclusions This paper outlines a framework for the integration of computational models into virtual patients. The opportunities and challenges of model exploitation are discussed from a number of user perspectives, considering different levels of model integration. The long-term aim for future research is to isolate the most crucial factors in the framework and to determine their influence on the integration outcome. PMID:24463466
A manifold learning approach to data-driven computational materials and processes
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco
2017-10-01
Standard simulation in classical mechanics is based on the use of two very different types of equations. The first one, of axiomatic character, is related to balance laws (momentum, mass, energy, …), whereas the second one consists of models that scientists have extracted from collected, natural or synthetic data. In this work we propose a new method, able to directly link data to computers in order to perform numerical simulations. These simulations will employ universal laws while minimizing the need for explicit, often phenomenological, models. They are based on manifold learning methodologies.
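A minimal sketch of the manifold-learning ingredient described above, not the authors' implementation: synthetic high-dimensional data points lying near a low-dimensional manifold are embedded with Locally Linear Embedding (scikit-learn). The data set, dimensions, and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): embed high-dimensional
# data that lie near a low-dimensional manifold, so that states can later be
# interpolated from data rather than from an explicit phenomenological model.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Synthetic snapshots near a curved 2-D manifold embedded in a 10-D ambient
# space (purely illustrative stand-in for collected constitutive data).
t = rng.uniform(0.0, 3.0, size=(500, 2))
low_dim = np.column_stack([np.sin(t[:, 0]), np.cos(t[:, 0]), t[:, 1]])
data = low_dim @ rng.standard_normal((3, 10))        # lift to 10-D
data += 0.01 * rng.standard_normal(data.shape)       # measurement noise

# Learn a 2-D parametrization of the data manifold.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
embedding = lle.fit_transform(data)
print(embedding.shape)   # (500, 2): reduced coordinates usable in simulation
```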
Social Protocols for Agile Virtual Teams
NASA Astrophysics Data System (ADS)
Picard, Willy
Despite many works on collaborative networked organizations (CNOs), CSCW, groupware, workflow systems and social networks, computer support for virtual teams is still insufficient, especially support for agility, i.e., the capability of virtual team members to rapidly and cost-efficiently adapt the way they interact to changes. In this paper, requirements for computer support for agile virtual teams are presented. Next, an extension of the concept of social protocol is proposed as a novel model supporting agile interactions within virtual teams. The extended concept of social protocol consists of an extended social network and a workflow model.
Upper and lower bounds for semi-Markov reliability models of reconfigurable systems
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
Predicting neutron damage using TEM with in situ ion irradiation and computer modeling
NASA Astrophysics Data System (ADS)
Kirk, Marquis A.; Li, Meimei; Xu, Donghua; Wirth, Brian D.
2018-01-01
We have constructed a computer model of irradiation defect production closely coordinated with TEM and in situ ion irradiation of Molybdenum at 80 °C over a range of dose, dose rate and foil thickness. We have reexamined our previous ion irradiation data to assign appropriate error and uncertainty based on more recent work. The spatially dependent cascade cluster dynamics model is updated with recent Molecular Dynamics results for cascades in Mo. After a careful assignment of both ion and neutron irradiation dose values in dpa, TEM data are compared for both ion and neutron irradiated Mo from the same source material. Using the computer model of defect formation and evolution based on the in situ ion irradiation of thin foils, the defect microstructure, consisting of densities and sizes of dislocation loops, is predicted for neutron irradiation of bulk material at 80 °C and compared with experiment. Reasonable agreement between model prediction and experimental data demonstrates a promising direction in understanding and predicting neutron damage using a closely coordinated program of in situ ion irradiation experiment and computer simulation.
NASA Astrophysics Data System (ADS)
Nusca, Michael Joseph, Jr.
The effects of various gasdynamic phenomena on the attenuation of an electromagnetic wave propagating through the nonequilibrium chemically reacting air flow field generated by an aerodynamic body travelling at high velocity are investigated. The nonequilibrium flow field is assumed to consist of seven species including nitric oxide ions and free electrons. The ionization of oxygen and nitrogen atoms is ignored. The aerodynamic body considered is a blunt wedge. The nonequilibrium chemically reacting flow field around this body is numerically simulated using a computer code based on computational fluid dynamics. The computer code solves the Navier-Stokes equations including mass diffusion and heat transfer, using a time-marching, explicit Runge-Kutta scheme. A nonequilibrium air kinetics model consisting of seven species and twenty-eight reactions as well as an equilibrium air model consisting of the same seven species are used. The body surface boundaries are considered as adiabatic or isothermal walls, as well as fully-catalytic and non-catalytic surfaces. Both laminar and turbulent flows are considered; wall-generated flow turbulence is simulated using an algebraic mixing length model. An electromagnetic wave is considered to originate from an antenna within the body and is affected by the free electrons in the chemically reacting flow. Analysis of the electromagnetics is performed separately from the fluid dynamic analysis using a series solution of Maxwell's equations valid for the propagation of a long-wavelength plane electromagnetic wave through a thin (i.e., in comparison to wavelength) inhomogeneous plasma layer. The plasma layer is the chemically reacting shock layer around the body. The Navier-Stokes equations are uncoupled from Maxwell's equations. The results of this computational study demonstrate, for the first time and in a systematic fashion, the importance of several parameters including equilibrium chemistry, nonequilibrium chemical kinetics, the reaction mechanism, flow viscosity, mass diffusion, and wall boundary conditions on modeling wave attenuation resulting from the interaction of an electromagnetic wave with an aerodynamic plasma. Comparison is made with experimental data.
Computational modelling of cellular level metabolism
NASA Astrophysics Data System (ADS)
Calvetti, D.; Heino, J.; Somersalo, E.
2008-07-01
The steady and stationary state inverse problems consist of estimating the reaction and transport fluxes, blood concentrations and possibly the rates of change of some of the concentrations based on data which are often scarce, noisy, and sampled over a population. The Bayesian framework provides a natural setting for the solution of this inverse problem, because a priori knowledge about the system itself and the unknown reaction fluxes and transport rates can compensate for the insufficiency of measured data, provided that the computational costs do not become prohibitive. This article identifies the computational challenges which have to be met when analyzing the steady and stationary states of a multicompartment model of cellular metabolism and suggests stable and efficient ways to handle the computations. The outline of a computational tool based on the Bayesian paradigm for the simulation and analysis of complex cellular metabolic systems is also presented.
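A minimal linear-Gaussian sketch of the Bayesian idea described above, assuming a toy observation model y = A v + noise for unknown fluxes v: prior knowledge compensates for scarce, noisy data and the posterior is available in closed form. The matrices, prior, and noise level are invented for illustration and are not the article's multicompartment model.

```python
# Minimal linear-Gaussian sketch of the Bayesian approach described above
# (illustrative only; the observation model, prior, and noise are made up).
import numpy as np

rng = np.random.default_rng(1)

n_flux, n_obs = 8, 3                      # many unknown fluxes, few measurements
A = rng.standard_normal((n_obs, n_flux))  # linear observation model y = A v + noise
v_true = rng.gamma(2.0, 1.0, size=n_flux)
sigma = 0.05
y = A @ v_true + sigma * rng.standard_normal(n_obs)

# Prior knowledge about the fluxes compensates for the scarce data.
v0 = np.full(n_flux, 2.0)                 # prior mean
P0 = 1.0 * np.eye(n_flux)                 # prior covariance
R = sigma**2 * np.eye(n_obs)              # measurement noise covariance

# Gaussian posterior (conjugate update): mean and covariance in closed form.
K = P0 @ A.T @ np.linalg.inv(A @ P0 @ A.T + R)
v_post = v0 + K @ (y - A @ v0)
P_post = (np.eye(n_flux) - K @ A) @ P0

print("posterior flux estimates:", np.round(v_post, 3))
```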
NASA Astrophysics Data System (ADS)
Zuhrie, M. S.; Basuki, I.; Asto B, I. G. P.; Anifah, L.
2018-01-01
The focus of the research is a teaching module which incorporates manufacturing, mechanical design planning, system control through microprocessor technology, and maneuverability of the robot. Computer-interactive and computer-assisted learning are strategies that emphasize the use of computers and learning aids (computer-assisted learning) in teaching and learning activities. This research applied the 4-D research and development model suggested by Thiagarajan et al. (1974). The 4-D model consists of four stages: Define, Design, Develop, and Disseminate. The research was conducted by applying this development design with the objective of producing a learning tool in the form of intelligent robot modules and a kit based on Computer Interactive Learning and Computer Assisted Learning. Data from the Indonesia Robot Contest during the period 2009-2015 show that the modules that have been developed confirm the fourth stage of the development method, the Disseminate stage. The developed modules guide students to produce an Intelligent Robot Tool for Teaching Based on Computer Interactive Learning and Computer Assisted Learning. Students' responses also showed positive feedback relating to the robotics module and computer-based interactive learning.
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and identical algorithm that is designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of parallelized MCPEM algorithm developed in this study holds a great promise in serving as the core for the next-generation of modeling software for population PK/PD analysis.
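A toy, single-CPU NumPy sketch of the Monte Carlo E-step that sits at the core of an MCPEM iteration, here for a single log-clearance parameter in a one-compartment IV-bolus model. It is not the MATLAB GPU implementation described above; all doses, sampling times, and parameter values are made up. The fully vectorized per-subject/per-sample structure is the part that maps naturally onto a GPU.

```python
# Toy NumPy sketch of one MCPEM-style iteration for a single PK parameter
# (log-clearance) in a one-compartment IV-bolus model; illustrative only.
import numpy as np

rng = np.random.default_rng(2)

dose, V = 100.0, 10.0
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n_subj, n_mc = 40, 2000
sigma = 0.1                                      # proportional residual error

# Simulate a small population data set.
theta_pop, omega = np.log(1.5), 0.3              # true population mean / sd of log-CL
cl_true = np.exp(theta_pop + omega * rng.standard_normal(n_subj))
conc = (dose / V) * np.exp(-np.outer(cl_true / V, times))
obs = conc * (1 + sigma * rng.standard_normal(conc.shape))

# One EM iteration: Monte Carlo E-step with importance weights, then M-step.
theta_hat, omega_hat = np.log(1.0), 0.5          # current population estimates
samples = theta_hat + omega_hat * rng.standard_normal((n_subj, n_mc))
pred = (dose / V) * np.exp(-np.exp(samples)[..., None] / V * times)   # (subj, mc, time)
resid = (obs[:, None, :] - pred) / (sigma * pred)
logw = -0.5 * np.sum(resid**2 + 2 * np.log(sigma * pred), axis=2)     # per-sample log-likelihood
w = np.exp(logw - logw.max(axis=1, keepdims=True))
w /= w.sum(axis=1, keepdims=True)

post_mean = np.sum(w * samples, axis=1)          # conditional mean of log-CL per subject
theta_new = post_mean.mean()                     # M-step: update the population mean
print("updated population log-CL:", round(float(theta_new), 3))
```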
Consistency of the adiabatic theorem.
Amin, M H S
2009-06-05
The adiabatic theorem provides the basis for the adiabatic model of quantum computation. Recently the conditions required for the adiabatic theorem to hold have become a subject of some controversy. Here we show that the reported violations of the adiabatic theorem all arise from resonant transitions between energy levels. In the absence of fast driven oscillations the traditional adiabatic theorem holds. Implications for adiabatic quantum computation are discussed.
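For reference, a commonly quoted (schematic) form of the traditional adiabatic condition discussed above; this is the textbook statement, not necessarily the precise criterion analyzed in the article.

```latex
% Commonly quoted (schematic) form of the traditional adiabatic condition:
% the system stays in the instantaneous ground state |0(t)> provided
\[
  \max_{t\in[0,T]}\;
  \frac{\bigl|\langle 1(t)|\,\tfrac{dH}{dt}\,|0(t)\rangle\bigr|}
       {\bigl[E_1(t)-E_0(t)\bigr]^{2}} \;\ll\; 1 ,
\]
% where |0(t)>, |1(t)> are the instantaneous ground and first excited
% eigenstates of H(t) with energies E_0(t), E_1(t).
```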
The Role of Standards in Cloud-Computing Interoperability
2012-10-01
services are not shared outside the organization. CloudStack, Eucalyptus, HP, Microsoft, OpenStack, Ubuntu, and VMWare provide tools for building...center requirements
• Developing usage models for cloud vendors
• Independent IT consortium
OpenStack (http://www.openstack.org)
• Open-source...software for running private clouds
• Currently consists of three core software projects: OpenStack Compute (Nova), OpenStack Object Storage (Swift
A computer-based time study system for timber harvesting operations
Jingxin Wang; Joe McNeel; John Baumgras
2003-01-01
A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, a data transfer interface, and data storage...
NASA Technical Reports Server (NTRS)
Leboissertier, Anthony; Okong'O, Nora; Bellan, Josette
2005-01-01
Large-eddy simulation (LES) is conducted of a three-dimensional temporal mixing layer whose lower stream is initially laden with liquid drops which may evaporate during the simulation. The gas-phase equations are written in an Eulerian frame for two perfect gas species (carrier gas and vapour emanating from the drops), while the liquid-phase equations are written in a Lagrangian frame. The effect of drop evaporation on the gas phase is considered through mass, species, momentum and energy source terms. The drop evolution is modelled using physical drops, or using computational drops to represent the physical drops. Simulations are performed using various LES models previously assessed on a database obtained from direct numerical simulations (DNS). These LES models are for: (i) the subgrid-scale (SGS) fluxes and (ii) the filtered source terms (FSTs) based on computational drops. The LES, which are compared to filtered-and-coarsened (FC) DNS results at the coarser LES grid, are conducted with 64 times fewer grid points than the DNS, and up to 64 times fewer computational than physical drops. It is found that both constant-coefficient and dynamic Smagorinsky SGS-flux models, though numerically stable, are overly dissipative and damp generated small-resolved-scale (SRS) turbulent structures. Although the global growth and mixing predictions of LES using Smagorinsky models are in good agreement with the FC-DNS, the spatial distributions of the drops differ significantly. In contrast, the constant-coefficient scale-similarity model and the dynamic gradient model perform well in predicting most flow features, with the latter model having the advantage of not requiring a priori calibration of the model coefficient. The ability of the dynamic models to determine the model coefficient during LES is found to be essential since the constant-coefficient gradient model, although more accurate than the Smagorinsky model, is not consistently numerically stable despite using DNS-calibrated coefficients. With accurate SGS-flux models, namely scale-similarity and dynamic gradient, the FST model allows up to a 32-fold reduction in computational drops compared to the number of physical drops, without degradation of accuracy; a 64-fold reduction leads to a slight decrease in accuracy.
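For reference, the standard constant-coefficient Smagorinsky closure referred to above, in schematic form; the paper's exact formulation and coefficient values are not reproduced here.

```latex
% Standard constant-coefficient Smagorinsky closure for the SGS stress
% (schematic form only):
\[
  \tau_{ij} - \tfrac{1}{3}\delta_{ij}\,\tau_{kk}
  \;=\; -2\,(C_s\bar{\Delta})^{2}\,|\bar{S}|\,\bar{S}_{ij},
  \qquad
  \bar{S}_{ij} = \tfrac{1}{2}\!\left(
    \frac{\partial \bar{u}_i}{\partial x_j}
    + \frac{\partial \bar{u}_j}{\partial x_i}\right),
  \quad
  |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}} .
\]
% In the dynamic variant, C_s is computed on the fly from the resolved field
% (Germano-type identity) rather than fixed a priori, which is why no
% calibration of the coefficient is required.
```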
Characterization and Computational Modeling of Minor Phases in Alloy LSHR
NASA Technical Reports Server (NTRS)
Jou, Herng-Jeng; Olson, Gregory; Gabb, Timothy; Garg, Anita; Miller, Derek
2012-01-01
The minor phases of powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approach equilibrium. Additional heat treatments were also performed for shorter times, to assess minor phase kinetics in non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their average sizes and total area fractions were determined. CALPHAD thermodynamics databases and PrecipiCalc™, a computational precipitation modeling tool, were employed with Ni-base thermodynamics and diffusion databases to model and simulate the phase microstructural evolution observed in the experiments with an objective to identify the model limitations and the directions of model enhancement.
A generic biogeochemical module for earth system models
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.-Y.; Leung, L. R.
2013-06-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate changes. Integration of these processes into earth system models (e.g. community land models - CLM), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) computational cost is prohibitively expensive to simulate biogeochemical processes in land models due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module with a generic algorithm and reaction database so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into the CLM model. To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems.
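A minimal sketch of the "generic reaction database" idea described above: reactions are declared as data (stoichiometry plus a rate law) and the ODE right-hand side is assembled automatically, so adding a process does not require hand-writing new differential equations. Species names, stoichiometries, and rate constants are invented for illustration and are not taken from CLM.

```python
# Minimal sketch of a generic reaction database: each reaction carries its
# stoichiometry and rate law, and the ODE right-hand side is assembled
# automatically from that data. Values below are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

species = ["litter_C", "soil_C", "CO2"]
idx = {s: i for i, s in enumerate(species)}

reactions = [
    {"stoich": {"litter_C": -1, "soil_C": +0.6, "CO2": +0.4},
     "rate": lambda c: 0.05 * c[idx["litter_C"]]},       # litter decomposition
    {"stoich": {"soil_C": -1, "CO2": +1},
     "rate": lambda c: 0.005 * c[idx["soil_C"]]},         # soil C mineralization
]

def rhs(t, c):
    dc = np.zeros_like(c)
    for rxn in reactions:                 # accumulate contributions reaction by reaction
        r = rxn["rate"](c)
        for sp, nu in rxn["stoich"].items():
            dc[idx[sp]] += nu * r
    return dc

c0 = np.array([100.0, 50.0, 0.0])         # initial pools (illustrative units)
sol = solve_ivp(rhs, (0.0, 365.0), c0, max_step=1.0)
print(dict(zip(species, np.round(sol.y[:, -1], 2))))
```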
On 3-D inelastic analysis methods for hot section components (base program)
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.
1986-01-01
A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.
New Frontiers in Language Evolution and Development.
Oller, D Kimbrough; Dale, Rick; Griebel, Ulrike
2016-04-01
This article introduces the Special Issue and its focus on research in language evolution with emphasis on theory as well as computational and robotic modeling. A key theme is based on the growth of evolutionary developmental biology or evo-devo. The Special Issue consists of 13 articles organized in two sections: A) Theoretical foundations and B) Modeling and simulation studies. All the papers are interdisciplinary in nature, encompassing work in biological and linguistic foundations for the study of language evolution as well as a variety of computational and robotic modeling efforts shedding light on how language may be developed and may have evolved. Copyright © 2016 Cognitive Science Society, Inc.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1997-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
NASA Astrophysics Data System (ADS)
Chatziantonaki, Ioanna; Tsironis, Christos; Isliker, Heinz; Vlahos, Loukas
2013-11-01
The most promising technique for the control of neoclassical tearing modes in tokamak experiments is the compensation of the missing bootstrap current with an electron-cyclotron current drive (ECCD). In this framework, the dynamics of magnetic islands has been studied extensively in terms of the modified Rutherford equation (MRE), including the presence of a current drive, either analytically described or computed by numerical methods. In this article, a self-consistent model for the dynamic evolution of the magnetic island and the driven current is derived, which takes into account the island's magnetic topology and its effect on the current drive. The model combines the MRE with a ray-tracing approach to electron-cyclotron wave propagation and absorption. Numerical results exhibit a decrease in the time required for complete stabilization with respect to the conventional computation (which does not take the island geometry into account); this decrease becomes larger with increasing initial island size and radial misalignment of the deposition.
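A schematic form of a modified Rutherford equation with a bootstrap drive and an ECCD stabilization term, shown only to indicate the structure of this type of model; the coefficients and the exact terms used in the article are not reproduced here, and conventions differ between works.

```latex
% Schematic MRE with bootstrap drive and ECCD stabilization (structure only;
% coefficients and exact terms vary between works and are not the article's):
\[
  \frac{\tau_R}{r_s}\,\frac{dw}{dt}
  \;\simeq\; r_s\,\Delta'(w)
  \;+\; a_{\mathrm{bs}}\,\frac{\beta_p}{w}
  \;-\; a_{\mathrm{CD}}\,\beta_p\,\frac{j_{\mathrm{CD}}}{j_{\mathrm{bs}}}\,
        \frac{\eta_{\mathrm{CD}}(w)}{w},
\]
% where w is the island width, tau_R the resistive time, r_s the rational
% surface radius, and eta_CD an efficiency factor that depends on the overlap
% between the driven-current deposition and the island (the topology effect
% treated self-consistently in the article).
```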
CFD Analysis of Emissions for a Candidate N+3 Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud
2015-01-01
An effort was undertaken to analyze the performance of a model Lean-Direct Injection (LDI) combustor designed to meet emissions and performance goals for NASA's N+3 program. Computational predictions of Emissions Index (EINOx) and combustor exit temperature were obtained for operation at typical power conditions expected of a small-core, high pressure-ratio (greater than 50), high T3 inlet temperature (greater than 950K) N+3 combustor. Reacting-flow computations were performed with the National Combustion Code (NCC) for a model N+3 LDI combustor, which consisted of a nine-element LDI flame-tube derived from a previous generation (N+2) thirteen-element LDI design. A consistent approach to mesh optimization, spray modeling, and kinetics modeling was used, in order to leverage the lessons learned from previous N+2 flame-tube analysis with the NCC. The NCC predictions for the current, non-optimized N+3 combustor indicated a 74% increase in NOx emissions as compared to the emissions-optimized, parent N+2 LDI combustor.
Borresen, Jon; Lynch, Stephen
2012-01-01
In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower-power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory.
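A toy illustration, not the authors' circuitry: a leaky threshold unit driven by two oscillatory inputs produces sustained output oscillations only when both inputs are active, i.e. an AND-like gate realized with threshold dynamics. All parameter values are arbitrary.

```python
# Toy threshold-oscillator logic gate (illustrative only, not the authors'
# circuit): the output unit oscillates only when both inputs are on.
import numpy as np

def run_gate(in_a, in_b, t_end=50.0, dt=0.01):
    """Count output spikes of a leaky threshold unit driven by two oscillatory inputs."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    drive = 0.6 * (in_a + in_b) * (1.0 + np.sin(2.0 * np.pi * t))
    v, spikes, theta, tau = 0.0, 0, 1.0, 1.0     # state, spike count, threshold, time constant
    for k in range(n):
        v += dt * (-v / tau + drive[k])          # leaky integration of the combined drive
        if v >= theta:                           # threshold crossing -> output spike
            spikes += 1
            v = 0.0                              # reset gives a sustained output oscillation
    return spikes

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} -> output spikes: {run_gate(a, b)}")  # only A=B=1 oscillates
```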
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
A model for rotorcraft flying qualities studies
NASA Technical Reports Server (NTRS)
Mittal, Manoj; Costello, Mark F.
1993-01-01
This paper outlines the development of a mathematical model that is expected to be useful for rotorcraft flying qualities research. A computer model is presented that can be applied to a range of different rotorcraft configurations. The algorithm computes vehicle trim and a linear state-space model of the aircraft. The trim algorithm uses nonlinear optimization theory to solve the nonlinear algebraic trim equations. The linear aircraft equations consist of an airframe model and a flight control system dynamic model. The airframe model includes coupled rotor and fuselage rigid body dynamics and aerodynamics. The aerodynamic model for the rotors utilizes blade element theory and a three-state dynamic inflow model. Aerodynamics of the fuselage and fuselage empennages are included. The linear state-space description for the flight control system is developed using standard block diagram data.
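A toy illustration of trim computed by numerical optimization, in the spirit of the trim algorithm described above but using a crude two-equation longitudinal force balance rather than the paper's rotorcraft model; the weight, drag area, and airspeed are invented values.

```python
# Toy trim-by-optimization sketch (not the paper's model): choose controls
# that drive the residuals of a crude longitudinal force balance to zero.
import numpy as np
from scipy.optimize import least_squares

W, rho, f_e, V = 45000.0, 1.225, 2.0, 40.0    # weight (N), density, flat-plate area (m^2), airspeed (m/s)

def residuals(x):
    thrust, alpha = x                          # rotor thrust (N) and thrust-vector tilt (rad)
    drag = 0.5 * rho * V**2 * f_e              # parasite drag
    fx = thrust * np.sin(alpha) - drag         # longitudinal force balance
    fz = thrust * np.cos(alpha) - W            # vertical force balance
    return [fx, fz]

sol = least_squares(residuals, x0=[W, 0.0])    # nonlinear least squares drives residuals to ~0
thrust, alpha = sol.x
print(f"trim thrust = {thrust:.1f} N, tilt = {np.degrees(alpha):.2f} deg")
```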
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop and personal computers can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
Saturated Widths of Magnetic Islands in Tokamak Discharges
NASA Astrophysics Data System (ADS)
Halpern, F.; Pankin, A. Y.
2005-10-01
The new ISLAND module described in reference [1] implements a quasi-linear model to compute the widths of multiple magnetic islands driven by saturated tearing modes in toroidal plasmas of arbitrary aspect ratio and cross-sectional shape. The distortion of the island shape caused by the radial variation in the perturbation is computed in the new module. In transport simulations, the enhanced transport caused by the magnetic islands has the effect of flattening the pressure and current density profiles. This self-consistent treatment of the magnetic islands alters the development of the plasma profiles. In addition, it is found that islands closer to the magnetic axis influence the evolution of islands further out in the plasma. In order to investigate such phenomena, the ISLAND module is used within the BALDUR predictive modeling code to compute the widths of multiple magnetic islands in tokamak discharges. The interaction between the islands and sawtooth crashes is examined in simulations of DIII-D and JET discharges. The module is used to compute saturated neoclassical tearing mode island widths for multiple modes in ITER. Preliminary results for island widths in ITER are consistent with those presented [2] by Hegna. [1] F.D. Halpern, G. Bateman, A.H. Kritz and A.Y. Pankin, ``The ISLAND Module for Computing Magnetic Island Widths in Tokamaks,'' submitted to J. Plasma Physics (2005). [2] C.C. Hegna, 2002 Fusion Snowmass Meeting.
User's Guide to the Stand Prognosis Model
William R. Wykoff; Nicholas L. Crookston; Albert R. Stage
1982-01-01
The Stand Prognosis Model is a computer program that projects the development of forest stands in the Northern Rocky Mountains. Thinning options allow for simulation of a variety of management strategies. Input consists of a stand inventory, including sample tree records, and a set of option selection instructions. Output includes data normally found in stand, stock,...
Space Environments and Effects: Trapped Proton Model
NASA Technical Reports Server (NTRS)
Huston, S. L.; Kauffman, W. (Technical Monitor)
2002-01-01
An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the model: a FORTRAN-callable library, a stand-alone model, and a Web-based model.
NASA Astrophysics Data System (ADS)
Zapoměl, J.; Ferfecki, P.
2016-09-01
A frequently used technological solution for minimizing the undesirable effects caused by vibration of rotating machines consists of placing damping devices in the rotor supports. The application of magnetorheological squeeze film dampers enables optimum performance to be achieved over a wide range of rotating speeds by adapting the damping effect to the current operating conditions. The damping force, which is produced by squeezing the layer of magnetorheological oil, can be controlled by changing the magnetic flux passing through the lubricant. The force acting between the rotor and its frame is transmitted through the rolling element bearing, the lubricating layer, and the squirrel spring. The loading of the bearing produces a time-variable friction moment, energy losses, and uneven rotor running, and it influences the rotor service life and the current fluctuation in electric circuits. The research carried out consisted of developing a mathematical model of a magnetorheological squeeze film damper, implementing it in computational models of rotor systems, and studying the dependence of the energy losses and the variation of the friction moment on the damping force and its control. The new and computationally stable mathematical model of a magnetorheological squeeze film damper, its implementation in computational models of rigid rotors, and an improved understanding of the energy losses generated in the rotor supports as a function of the damping effect are the principal contributions of this paper. The results of the computational simulations prove that suitable control of the damping force enables the energy losses to be reduced over a wide velocity range.
A New Direct Coupled Regional-scale Meteorology and Chemistry Model
NASA Astrophysics Data System (ADS)
Li, J.; Hsu, S.; Liu, T.; Chiang, C.; Chang, J.
2007-12-01
WRF/Chem was first developed in the US and generously made available to the international research community a short time ago. Starting from this, many groups have contributed new components and subroutines to the model. Based on WRF/Chem, a new online integrated model system named WRF/ChemT was established in Taiwan. It differs significantly from WRF/Chem in the following important aspects. For an online model, all chemical species emissions must be directly coupled to the WRF meteorology. None of the publicly available versions of WRF/Chem have this fundamental coupling: for those models, all emission data must first be preprocessed by SMOKE or other emission models driven by MM5 or WRF meteorologies in an offline manner. WRF/ChemT has a self-consistent online emission process. We replaced the old emission driver with the NCU driver, and the plume rise of point sources and biogenic VOC emissions are calculated online, so that the meteorology model, emission model, and chemistry transport model are coupled directly in WRF/ChemT. Cloud impacts on actinic flux should be consistent with the WRF cloud-aerosol submodel used, not just with the moisture parameterization; photolysis rates in WRF/ChemT are self-consistent across all submodules. New dry deposition routines were developed, including the addition of a vertical mixing scheme, the Asymmetrical Convective Model (ACM), which is used in CMAQ. The advantage of using the ACM submodel has been demonstrated in earlier studies. Computational inefficiency has been a lingering problem for WRF/Chem. We have worked on this aspect of WRF/Chem development, and by using a new chemical solver and reorganizing the operator-splitting computational algorithm we have made significant gains in computational speed. WRF/ChemT is about a factor of 4 faster in the chemistry solver and a factor of 2 faster in chemical species transport. Taken together, it is about a factor of 2 faster than WRF/Chem (version 2.1.2), i.e., gas-phase chemistry and meteorology are now equally fast. WRF/ChemT was evaluated and applied in regional air quality research in Taiwan. The comparison with WRF/Chem and selected current applications will be discussed in this report.
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin; Zurek, Wojciech H.
2006-06-01
We lay a comprehensive foundation for the study of redundant information storage in decoherence processes. Redundancy has been proposed as a prerequisite for objectivity, the defining property of classical objects. We consider two ensembles of states for a model universe consisting of one system and many environments: the first consisting of arbitrary states, and the second consisting of “singly branching” states consistent with a simple decoherence model. Typical states from the random ensemble do not store information about the system redundantly, but information stored in branching states has a redundancy proportional to the environment’s size. We compute the specific redundancy for a wide range of model universes, and fit the results to a simple first-principles theory. Our results show that the presence of redundancy divides information about the system into three parts: classical (redundant); purely quantum; and the borderline, undifferentiated or “nonredundant,” information.
NASA Technical Reports Server (NTRS)
Welp, D. W.; Brown, R. A.; Ullman, D. G.; Kuhner, M. B.
1974-01-01
A computer simulation program which models a commercial short-haul aircraft operating in the civil air system was developed. The purpose of the program is to evaluate the effect of a given aircraft avionics capability on the ability of the aircraft to perform on-time carrier operations. The program outputs consist primarily of those quantities which can be used to determine direct operating costs. These include: (1) schedule reliability or delays, (2) repairs/replacements, (3) fuel consumption, and (4) cancellations. More comprehensive models of the terminal area environment were added and a simulation of an existing airline operation was conducted to obtain a form of model verification. The capability of the program to provide comparative results (sensitivity analysis) was then demonstrated by modifying the aircraft avionics capability for additional computer simulations.
Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Moses, Gregory
2017-10-01
The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.
1980-01-01
Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states, that assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admit efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation which is crucial for this model. We include detailed analysis of the arithmetic complexity for solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
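For context, the power-law viscosity relation that underlies the VHS/VSS cross-section models; the temperature exponent ω is one of the parameters being calibrated. This is schematic background, not a result from the paper.

```latex
% Power-law viscosity underlying the VHS/VSS collision models (schematic);
% the temperature exponent omega is a calibrated model parameter:
\[
  \mu(T) \;=\; \mu_{\mathrm{ref}}
  \left(\frac{T}{T_{\mathrm{ref}}}\right)^{\omega}.
\]
% VHS fixes the ratio of momentum to diffusion cross sections, whereas VSS
% introduces an additional scattering exponent alpha so that viscosity and
% diffusion coefficients can be matched simultaneously; this is the
% transport-process consistency referred to above.
```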
Automatic Reconstruction of Spacecraft 3D Shape from Imagery
NASA Astrophysics Data System (ADS)
Poelman, C.; Radtke, R.; Voorhees, H.
We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.
Computation of the bluff-body sound generation by a self-consistent mean flow formulation
NASA Astrophysics Data System (ADS)
Fani, A.; Citro, V.; Giannetti, F.; Auteri, F.
2018-03-01
The sound generated by the flow around a circular cylinder is numerically investigated by using a finite-element method. In particular, we study the acoustic emissions generated by the flow past the bluff body at low Mach and Reynolds numbers. We perform a global stability analysis by using the compressible linearized Navier-Stokes equations. The resulting direct global mode provides detailed information related to the underlying hydrodynamic instability and data on the acoustic field generated. In order to recover the intensity of the produced sound, we apply the self-consistent model for non-linear saturation proposed by Mantič-Lugo, Arratia, and Gallaire ["Self-consistent mean flow description of the nonlinear saturation of the vortex shedding in the cylinder wake," Phys. Rev. Lett. 113, 084501 (2014)]. The application of this model allows us to compute the amplitude of the resulting linear mode and the effects of saturation on the mode structure and acoustic field. Our results show excellent agreement with those obtained by a full compressible direct numerical simulation (DNS) and those derived by the application of classical acoustic analogy formulations.
Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetc Code TEMPEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, X Q
We present gyrokinetic neoclassical simulations of tokamak plasmas with self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five dimensional computational grid in phase space. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D (ψ, θ, ε, μ) version of the TEMPEST code we compute radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of neoclassical electric field, which we compare with neoclassical theory with a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotations in toroidal magnetic fusion devices.
Blood Flow in Idealized Vascular Access for Hemodialysis: A Review of Computational Studies.
Ene-Iordache, Bogdan; Remuzzi, Andrea
2017-09-01
Although our understanding of the failure mechanism of vascular access for hemodialysis has increased substantially, this knowledge has not translated into successful therapies. Despite advances in technology, it is recognized that vascular access is difficult to maintain, due to complications such as intimal hyperplasia. Computational studies have been used to estimate hemodynamic changes induced by vascular access creation. Due to the heterogeneity of patient-specific geometries, and difficulties with obtaining reliable models of access vessels, idealized models were often employed. In this review we analyze the knowledge gained with the use of such simplified computational models. A review of the literature was conducted, considering studies employing a computational fluid dynamics approach to gain insights into the flow field phenotype that develops in idealized models of vascular access. Several important discoveries have originated from idealized model studies, including the detrimental role of disturbed flow and turbulent flow, and the beneficial role of spiral flow in intimal hyperplasia. The general flow phenotype was consistent among studies, but findings were not treated homogeneously since they paralleled achievements in cardiovascular biomechanics which spanned over the last two decades. Computational studies in idealized models are important for studying local blood flow features and evaluating new concepts that may improve the patency of vascular access for hemodialysis. For future studies we strongly recommend numerical modelling targeted at accurately characterizing turbulent flows and multidirectional wall shear disturbances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callahan, M.A.
Three major issues to be dealt with over the next ten years in the exposure assessment field are: consistency in terminology, the impact of computer technology on the choice of data and modeling, and conceptual issues such as the use of time-weighted averages.
Applications of a Pharmacokinetic Simulation Program in Pharmacy Courses.
ERIC Educational Resources Information Center
Ingram, D.; And Others
1979-01-01
Presents a multicompartment model which illustrates aspects of drug absorption, distribution, and elimination in the human body for a course in pharmacokinetics. The course work consists of the interpretation of computer generated simulated data. (Author/CMV)
Computations of turbulent lean premixed combustion using conditional moment closure
NASA Astrophysics Data System (ADS)
Amzin, Shokri; Swaminathan, Nedunchezhian
2013-12-01
Conditional Moment Closure (CMC) is a suitable method for predicting scalars such as carbon monoxide with slow chemical time scales in turbulent combustion. Although this method has been successfully applied to non-premixed combustion, its application to lean premixed combustion is rare. In this study the CMC method is used to compute piloted lean premixed combustion in a distributed combustion regime. The conditional scalar dissipation rate of the conditioning scalar, the progress variable, is closed using an algebraic model and turbulence is modelled using the standard k-ɛ model. The conditional mean reaction rate is closed using a first order CMC closure with the GRI-3.0 chemical mechanism to represent the chemical kinetics of methane oxidation. The PDF of the progress variable is obtained using a presumed shape with the Beta function. The computed results are compared with the experimental measurements and earlier computations using the transported PDF approach. The results show reasonable agreement with the experimental measurements and are consistent with the transported PDF computations. When the compounded effects of shear-turbulence and flame are strong, second order closures may be required for the CMC.
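For reference, the standard presumed beta-function PDF of the progress variable, with shape parameters obtained from its mean and variance; this is the conventional form, and the article's exact notation may differ.

```latex
% Presumed beta-function PDF for the progress variable c, with shape
% parameters from the mean and variance (standard form):
\[
  \widetilde{P}(c) \;=\; \frac{c^{\,a-1}\,(1-c)^{\,b-1}}{B(a,b)},
  \qquad
  a \;=\; \widetilde{c}\,\gamma, \quad
  b \;=\; \bigl(1-\widetilde{c}\bigr)\,\gamma, \quad
  \gamma \;=\; \frac{\widetilde{c}\,(1-\widetilde{c})}{\widetilde{c''^{\,2}}} - 1 ,
\]
% where the mean and variance of c are supplied by the turbulence/combustion
% model and B(a,b) is the beta function.
```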
Computational models for the analysis of three-dimensional internal and exhaust plume flowfields
NASA Technical Reports Server (NTRS)
Dash, S. M.; Delguidice, P. D.
1977-01-01
This paper describes computational procedures developed for the analysis of three-dimensional supersonic ducted flows and multinozzle exhaust plume flowfields. The models/codes embodying these procedures cater to a broad spectrum of geometric situations via the use of multiple reference plane grid networks in several coordinate systems. Shock capturing techniques are employed to trace the propagation and interaction of multiple shock surfaces while the plume interface, separating the exhaust and external flows, and the plume external shock are discretely analyzed. The computational grid within the reference planes follows the trace of streamlines to facilitate the incorporation of finite-rate chemistry and viscous computational capabilities. Exhaust gas properties consist of combustion products in chemical equilibrium. The computational accuracy of the models/codes is assessed via comparisons with exact solutions, results of other codes and experimental data. Results are presented for the flows in two-dimensional convergent and divergent ducts, expansive and compressive corner flows, flow in a rectangular nozzle and the plume flowfields for exhausts issuing out of single and multiple rectangular nozzles.
Discrete Roughness Effects on Shuttle Orbiter at Mach 6
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Hamilton, H. Harris, II
2002-01-01
Discrete roughness boundary layer transition results on a Shuttle Orbiter model in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel have been reanalyzed with new boundary layer calculations to provide consistency for comparison to other published results. The experimental results were previously obtained utilizing the phosphor thermography system to monitor the status of the boundary layer via global heat transfer images of the Orbiter windward surface. The size and location of discrete roughness elements were systematically varied along the centerline of the 0.0075-scale model at an angle of attack of 40 deg and the boundary layer response recorded. Various correlative approaches were attempted, with the roughness transition correlations based on edge properties providing the most reliable results. When a consistent computational method is used to compute edge conditions, transition datasets for different configurations at several angles of attack have been shown to collapse to a well-behaved correlation.
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
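A minimal illustration of the forward finite-difference sensitivities named above, computed for a toy quadratic surrogate objective; the surrogate merely stands in for the hover-horsepower analysis and is not CAMRAD or HOVT.

```python
# Forward finite-difference sensitivities of an objective with respect to
# the design variables (toy surrogate objective; illustrative only).
import numpy as np

def hover_power(x):
    """Toy surrogate objective standing in for the hover-horsepower analysis."""
    taper_start, root_chord, taper_ratio, twist = x
    return (2.0 * (taper_start - 0.5)**2 + 3.0 * (root_chord - 0.3)**2
            + 1.5 * (taper_ratio - 0.6)**2 + 0.8 * (twist + 10.0)**2 + 400.0)

def forward_fd_gradient(f, x, rel_step=1e-6):
    """Forward finite differences: df/dx_i ~ (f(x + h e_i) - f(x)) / h."""
    x = np.asarray(x, dtype=float)
    f0, grad = f(x), np.zeros_like(x)
    for i in range(x.size):
        h = rel_step * max(abs(x[i]), 1.0)
        xp = x.copy()
        xp[i] += h
        grad[i] = (f(xp) - f0) / h
    return grad

x0 = np.array([0.8, 0.25, 0.8, -8.0])   # taper initiation, root chord, taper ratio, twist
print("objective:", hover_power(x0))
print("sensitivities:", np.round(forward_fd_gradient(hover_power, x0), 4))
```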
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries
2015-12-31
The effects of the solvent environment are specific to the electronic states; therefore, a computational scheme for solvent effects consistent with the electronic states is necessary to discuss electronic excitation of molecules in solution. The PCM (polarizable continuum model) SAC (symmetry-adapted cluster) and SAC-CI (configuration interaction) methods are developed for such purposes. The PCM SAC-CI adopts the state-specific (SS) solvation scheme, where solvent effects are self-consistently considered for the ground state and every excited state. For efficient computations of many excited states, we develop a perturbative approximation to the PCM SAC-CI method, called the corrected linear response (cLR) scheme. Our test calculations show that the cLR PCM SAC-CI is a very good approximation of the SS PCM SAC-CI method for polar and nonpolar solvents.
Reflector antennas with low sidelobes, low cross polarization, and high aperture efficiency
NASA Technical Reports Server (NTRS)
Faigen, I. M.; Reichert, C. F.; Sletten, C. J.; Shore, R. A.
1984-01-01
Techniques are presented for computing the horn near field patterns on the subreflectors and for correcting the phase center errors of the horn pattern by shaping the subreflector surface. The diffraction pattern computations for scanned beams are described. The effects of dish aperture diffraction on pattern bandwidth are investigated. A model antenna consisting of a reflector, shaped subreflector, and corrugated feed horn is described.
Principles and Foundations for Fractionated Networked Cyber-Physical Systems
2012-07-13
spectrum between autonomy and cooperation. Our distributed computing model is based on distributed knowledge sharing, and makes very few assumptions but...over the computation without the need for explicit migration. Randomization techniques will make sure that enough diversity is maintained to allow...small UAV testbed consisting of 10 inexpensive quadcopters at SRI. Hardware-wise, we added heat sinks to mitigate the impact of additional heat that
Numerical methods in Markov chain modeling
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef; Stewart, William J.
1989-01-01
Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
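For reference, the stationary-distribution problem can be illustrated with plain power iteration on a small row-stochastic matrix; this is a minimal sketch of the underlying eigenvector problem, not one of the Krylov subspace methods evaluated in the paper.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100_000):
    """Power iteration for pi satisfying pi P = pi (row-stochastic P).
    A minimal illustration of the eigenvector formulation."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = pi @ P
        if np.linalg.norm(new - pi, 1) < tol:
            pi = new
            break
        pi = new
    return pi / pi.sum()

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
print(stationary_distribution(P))
```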
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The realtime version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
Ito, Yasushi; Cheng, Gary C.; Shih, Alan M.; Koomullil, Roy P.; Soni, Bharat K.; Sittitavornwong, Somsak; Waite, Peter D.
2011-01-01
The objective of this paper is the reconstruction of upper airway geometric models as hybrid meshes from clinically used Computed Tomography (CT) data sets in order to understand the dynamics and behaviors of the pre- and postoperative upper airway systems of Obstructive Sleep Apnea Syndrome (OSAS) patients by viscous Computational Fluid Dynamics (CFD) simulations. The selection criteria for OSAS cases studied are discussed because two reasonable pre- and postoperative upper airway models for CFD simulations may not be created for every case without a special protocol for CT scanning. The geometry extraction and manipulation methods are presented with technical barriers that must be overcome so that they can be used along with computational simulation software as a daily clinical evaluation tool. Eight cases are presented in this paper, and each case consists of pre- and postoperative configurations. The results of computational simulations of two cases are included in this paper as demonstration. PMID:21625395
On the computation of molecular surface correlations for protein docking using fourier techniques.
Sakk, Eric
2007-08-01
The computation of surface correlations using a variety of molecular models has been applied to the unbound protein docking problem. Because of the computational complexity involved in examining all possible molecular orientations, the fast Fourier transform (FFT) (a fast numerical implementation of the discrete Fourier transform (DFT)) is generally applied to minimize the number of calculations. This approach is rooted in the convolution theorem which allows one to inverse transform the product of two DFTs in order to perform the correlation calculation. However, such a DFT calculation results in a cyclic or "circular" correlation which, in general, does not lead to the same result as the linear correlation desired for the docking problem. In this work, we provide computational bounds for constructing molecular models used in the molecular surface correlation problem. The derived bounds are then shown to be consistent with various intuitive guidelines previously reported in the protein docking literature. Finally, these bounds are applied to different molecular models in order to investigate their effect on the correlation calculation.
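The distinction between the cyclic correlation returned by a plain DFT product and the linear correlation needed for docking can be seen in a toy one-dimensional example: zero-padding both signals to at least the sum of their lengths minus one removes the wrap-around. The sketch below (NumPy, real-valued signals) is only an illustration, not a docking code.

```python
import numpy as np

def circular_correlation(a, b):
    """Cyclic correlation obtained directly from the DFT product."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def linear_correlation(a, b):
    """Zero-pad both signals to >= len(a) + len(b) - 1 so that the cyclic
    wrap-around cannot contaminate the result (negative lags end up at the
    tail of the returned array)."""
    n = len(a) + len(b) - 1
    A = np.fft.fft(a, n)
    B = np.fft.fft(b, n)
    return np.real(np.fft.ifft(np.conj(A) * B))

a = np.array([1.0, 2.0, 3.0, 0.0])
b = np.array([0.0, 1.0, 0.5, 0.0])
print(circular_correlation(a, b))  # wrap-around contaminates the short array
print(linear_correlation(a, b))    # padded result, free of cyclic overlap
```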
Links, Jonathan M; Schwartz, Brian S; Lin, Sen; Kanarek, Norma; Mitrani-Reiser, Judith; Sell, Tara Kirk; Watson, Crystal R; Ward, Doug; Slemp, Cathy; Burhans, Robert; Gill, Kimberly; Igusa, Tak; Zhao, Xilei; Aguirre, Benigno; Trainor, Joseph; Nigg, Joanne; Inglesby, Thomas; Carbone, Eric; Kendra, James M
2018-02-01
Policy-makers and practitioners have a need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster. We developed a system dynamics computational model that predicts community functioning after a disaster. The computational model outputted the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties. The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature. The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127-137).
K-TIF: a two-fluid computer program for downcomer flow dynamics. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amsden, A.A.; Harlow, F.H.
1977-10-01
The K-TIF computer program has been developed for numerical solution of the time-varying dynamics of steam and water in a pressurized water reactor downcomer. The current status of physical and mathematical modeling is presented in detail. The report also contains a complete description of the numerical solution technique, a full description and listing of the computer program, instructions for its use, with a sample printout for a specific test problem. A series of calculations, performed with no change in the modeling parameters, shows consistent agreement with the experimental trends over a wide range of conditions, which gives confidence to the calculations as a basis for investigating the complicated physics of steam-water flows in the downcomer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek
A six-lobed membrane spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence through generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small angle scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In conclusion, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains.
Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek; ...
2016-06-27
A six-lobed membrane spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence through generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small angle scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In conclusion, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains.
Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek; Du, Juan; Davis, Jonathan K.; Lee, Jung-Goo; Slabaugh, Erin; Vandavasi, Venu Gopal; O’Neill, Hugh; Roberts, Eric M.; Roberts, Alison W.; Yingling, Yaroslava G.; Haigler, Candace H.
2016-01-01
A six-lobed membrane spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence through generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small angle scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In summary, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains. PMID:27345599
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
Modeling macro- and microstructures of Gas-Metal-Arc Welded HSLA-100 steel
NASA Astrophysics Data System (ADS)
Yang, Z.; Debroy, T.
1999-06-01
Fluid flow and heat transfer during gas-metal-arc welding (GMAW) of HSLA-100 steel were studied using a transient, three-dimensional, turbulent heat transfer and fluid flow model. The temperature and velocity fields, cooling rates, and shape and size of the fusion and heat-affected zones (HAZs) were calculated. A continuous-cooling-transformation (CCT) diagram was computed to aid in the understanding of the observed weld metal microstructure. The computed results demonstrate that the dissipation of heat and momentum in the weld pool is significantly aided by turbulence, thus suggesting that previous modeling results based on laminar flow need to be re-examined. A comparison of the calculated fusion and HAZ geometries with their corresponding measured values showed good agreement. Furthermore, “finger” penetration, a unique geometric characteristic of gas-metal-arc weld pools, could be satisfactorily predicted from the model. The ability to predict these geometric variables and the agreement between the calculated and the measured cooling rates indicate the appropriateness of using a turbulence model for accurate calculations. The microstructure of the weld metal consisted mainly of acicular ferrite with small amounts of bainite. At high heat inputs, small amounts of allotriomorphic and Widmanstätten ferrite were also observed. The observed microstructures are consistent with those expected from the computed CCT diagram and the cooling rates. The results presented here demonstrate significant promise for understanding both macro- and microstructures of steel welds from the combination of the fundamental principles from both transport phenomena and phase transformation theory.
Application of a computational decision model to examine acute drug effects on human risk taking.
Lane, Scott D; Yechiam, Eldad; Busemeyer, Jerome R
2006-05-01
In 3 previous experiments, high doses of alcohol, marijuana, and alprazolam acutely increased risky decision making by adult humans in a 2-choice (risky vs. nonrisky) laboratory task. In this study, a computational modeling analysis known as the expectancy valence model (J. R. Busemeyer & J. C. Stout, 2002) was applied to individual-participant data from these studies, for the highest administered dose of all 3 drugs and corresponding placebo doses, to determine changes in decision-making processes that may be uniquely engendered by each drug. The model includes 3 parameters: responsiveness to rewards and losses (valence or motivation); the rate of updating expectancies about the value of risky alternatives (learning/memory); and the consistency with which trial-by-trial choices match expected outcomes (sensitivity). Parameter estimates revealed 3 key outcomes: Alcohol increased responsiveness to risky rewards and decreased responsiveness to risky losses (motivation) but did not alter expectancy updating (learning/memory); both marijuana and alprazolam produced increases in risk taking that were related to learning/memory but not motivation; and alcohol and marijuana (but not alprazolam) produced more random response patterns that were less consistently related to expected outcomes on the 2 choices. No significant main effects of gender or dose by gender interactions were obtained, but 2 dose by gender interactions approached significance. These outcomes underscore the utility of using a computational modeling approach to deconstruct decision-making processes and thus better understand drug effects on risky decision making in humans.
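A minimal sketch of one common parameterization of the expectancy-valence model follows, with the three parameters named in the abstract: an attention weight on losses versus gains (valence/motivation), an expectancy-updating rate (learning/memory), and a consistency exponent (sensitivity). The payoff sequence and parameter values are illustrative, not the study's data.

```python
import numpy as np

def ev_model_choice_probs(gains, losses, choices, w=0.5, a=0.2, c=1.0):
    """One common parameterization of the expectancy-valence model:
    w - attention weight on losses vs. gains (valence/motivation)
    a - expectancy updating rate (learning/memory)
    c - response consistency exponent (sensitivity)."""
    n_options = int(max(choices)) + 1
    E = np.zeros(n_options)              # expectancies per option
    probs = []
    for t, (g, l, ch) in enumerate(zip(gains, losses, choices), start=1):
        theta = (t / 10.0) ** c          # trial-dependent sensitivity
        p = np.exp(theta * E) / np.exp(theta * E).sum()
        probs.append(p[ch])
        v = (1 - w) * g - w * l          # valence of the experienced outcome
        E[ch] += a * (v - E[ch])         # delta-rule update for the chosen option
    return np.array(probs)

# Tiny illustrative sequence (two options; the payoffs are made up):
print(ev_model_choice_probs(gains=[100, 50, 100], losses=[0, 25, 250], choices=[0, 1, 0]))
```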
Validating Computational Human Behavior Models: Consistency and Accuracy Issues
2004-06-01
includes a discussion of SME demographics, content, and organization of the datasets. This research generalizes data from two pilot studies and two base...meet requirements for validating the varied and complex behavioral models. Through a series of empirical studies, this research identifies subject...
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the face, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is consistent with the physical structural features of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is robust and fast, with broad application prospects in human-computer interaction, video telephony, etc.
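A hedged sketch of the highest level (skin-like region segmentation from hue, saturation, and normalized red/green) is given below; the threshold values are illustrative assumptions, not those used in the paper.

```python
import numpy as np

def skin_like_mask(rgb):
    """Rough skin-like region mask from HSV hue/saturation plus normalized
    red/green. The threshold values below are illustrative, not the paper's."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9
    rn, gn = r / total, g / total                  # normalized color space
    maxc, minc = rgb.max(-1), rgb.min(-1)
    d = maxc - minc + 1e-9
    sat = (maxc - minc) / (maxc + 1e-9)            # HSV saturation
    h = np.zeros_like(maxc)                        # HSV hue in degrees
    h = np.where(maxc == r, (60.0 * (g - b) / d) % 360.0, h)
    h = np.where(maxc == g, 60.0 * (b - r) / d + 120.0, h)
    h = np.where(maxc == b, 60.0 * (r - g) / d + 240.0, h)
    return ((h < 50.0) | (h > 340.0)) & (sat > 0.1) & (sat < 0.7) \
           & (rn > 0.35) & (gn > 0.25) & (gn < 0.37)

img = np.random.rand(4, 4, 3) * 255.0              # stand-in for an RGB image
print(skin_like_mask(img))
```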
Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency
NASA Astrophysics Data System (ADS)
Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu
2018-03-01
Various computational approaches from rule-based to model-based methods exist to place Sub-Resolution Assist Features (SRAF) in order to increase process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between time of development, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by such related grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural-network-generated SRAF guidance map is then used to place SRAFs on the full chip. This is different from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another relevant weighting factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that, under time constraints, more expensive models can be sampled far fewer times than faster models (in inverse proportion to their runtimes). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to incorporate this imbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
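The core ingredient is a sampling-based BME estimate together with a cheap bootstrap estimate of its statistical error, with slower models affording fewer prior samples under the same budget. A minimal sketch under these assumptions follows; the likelihood values are synthetic, not from a real model pair.

```python
import numpy as np

rng = np.random.default_rng(0)

def bme_with_bootstrap(likelihoods, n_boot=1000):
    """Simple Monte Carlo estimate of Bayesian model evidence (mean likelihood
    over prior samples) plus a bootstrap standard error of that estimate."""
    bme = likelihoods.mean()
    boot = rng.choice(likelihoods, size=(n_boot, len(likelihoods)), replace=True).mean(axis=1)
    return bme, boot.std(ddof=1)

# A fast model can afford many prior samples under the budget, a slow one only a few:
fast_lik = rng.lognormal(mean=-2.0, sigma=1.0, size=10_000)
slow_lik = rng.lognormal(mean=-1.8, sigma=1.0, size=50)
print(bme_with_bootstrap(fast_lik))   # small bootstrap error
print(bme_with_bootstrap(slow_lik))   # larger error -> weaker statistical significance
```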
Steady-State Groundwater Flow Model for Great Neck, Long Island, New York
NASA Astrophysics Data System (ADS)
Chowdhury, S. H.; Klinger, D.; Sallemi, B. M.
2001-12-01
This paper describes a comprehensive groundwater flow model for the Great Neck section of Long Island, New York. The hydrogeology of this section of Long Island is dominated by a buried erosional valley consisting of sediments comparable to the North Shore Confining Unit. This formation cross-cuts, and is thus in direct hydraulic connection with, the Upper Glacial, North Shore Confining Unit, Raritan Clay, and Lloyd aquifers. The Magothy aquifer is present only in remote southern sections of the model area. In addition, various lenses of coarser material from the overlying Upper Glacial aquifer are dispersed throughout the area. Data collection consisted of gathering various parameter values from existing USGS reports. Hydraulic conductivity, porosity, estimated recharge values, evapotranspiration, well locations, and water level data have all been gathered from the USGS Office located in Coram, New York. Appropriate modeling protocol was followed throughout the modeling process. The computer code used to solve this numerical model is Visual MODFLOW, developed by Waterloo Hydrogeologic. Calibration and a complete sensitivity analysis were conducted. Modeled results indicate that the groundwater flow direction is consistent with field observations. In addition, the calibrated model returns parameter values consistent with historical data.
Scalable gastroscopic video summarization via similar-inhibition dictionary selection.
Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin
2016-01-01
This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from the traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces the similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of the poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using content consistency, index consistency and content-index consistency against the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
Digital morphogenesis via Schelling segregation
NASA Astrophysics Data System (ADS)
Barmpalias, George; Elwes, Richard; Lewis-Pye, Andrew
2018-04-01
Schelling’s model of segregation looks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent-based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been studied extensively, it has largely resisted rigorous analysis, with prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. Brandt et al (2012 Proc. 44th Annual ACM Symp. on Theory of Computing) provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model’s behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an increased level of intolerance for neighbouring agents of opposite type leads almost certainly to decreased segregation.
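A simplified, one-dimensional variant of Schelling dynamics can be sketched as follows; the swap rule, neighbourhood size and tolerance threshold are illustrative choices and do not reproduce the exact unperturbed model analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def schelling_step(grid, tau=0.5):
    """One sweep of a simplified Schelling dynamics on a ring: an agent is
    unhappy if the fraction of same-type neighbours in a local window falls
    below tau, and unhappy agents of opposite type swap positions in pairs."""
    n = len(grid)
    w = 2                                            # neighbourhood half-width
    def unhappy(i):
        idx = [(i + d) % n for d in range(-w, w + 1) if d != 0]
        return np.mean(grid[idx] == grid[i]) < tau
    unhappy_idx = [i for i in range(n) if unhappy(i)]
    rng.shuffle(unhappy_idx)
    for i, j in zip(unhappy_idx[::2], unhappy_idx[1::2]):
        if grid[i] != grid[j]:
            grid[i], grid[j] = grid[j], grid[i]
    return grid

grid = rng.integers(0, 2, size=60)
for _ in range(200):
    grid = schelling_step(grid)
print("".join(map(str, grid)))   # inspect for long same-type runs
```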
New 3D model for dynamics modeling
NASA Astrophysics Data System (ADS)
Perez, Alain
1994-05-01
The wrist articulation represents one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along their surfaces and along the faces of the five metacarpals of the hand and the two bones of the arm. The wrist dynamics are, however, fundamental to hand movement, yet the system is so complex that it remains incompletely explored. This work is part of a new concept of computer-assisted surgery, which consists in developing computer models to improve surgical acts by predicting their consequences. The modeling of the wrist dynamics is based first on a static 3D model of its bones. This 3D model must optimise the collision detection procedure, which is the necessary step for estimating the physical contact constraints. Because other available computer vision models do not fit this problem with enough precision, a new 3D model has been developed based on the medial axis of the digital distance map of the reconstructed bone volumes. The collision detection procedure is then simplified, since contacts are detected between spheres. Experiments with this original 3D dynamic model produce realistic computer animation images of solids in contact. It is now necessary to detect ligaments on digital medical images and to model them in order to complete the wrist model.
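Representing each bone by spheres along its medial axis reduces contact detection to pairwise sphere-sphere overlap tests. A minimal sketch of that test follows; the sphere centres and radii are hypothetical, not taken from the wrist data.

```python
import numpy as np

def spheres_collide(centers_a, radii_a, centers_b, radii_b):
    """Pairwise sphere-sphere overlap test: contact occurs when the distance
    between centres is no greater than the sum of radii. Approximating each
    bone by medial-axis spheres reduces contact detection to this test."""
    d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=-1)
    return d <= (radii_a[:, None] + radii_b[None, :])

# Two hypothetical bones, each approximated by a few medial-axis spheres:
bone1_c = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
bone1_r = np.array([0.6, 0.5])
bone2_c = np.array([[1.8, 0.0, 0.0], [3.0, 0.0, 0.0]])
bone2_r = np.array([0.4, 0.4])
print(spheres_collide(bone1_c, bone1_r, bone2_c, bone2_r))
```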
Method of locating underground mines fires
Laage, Linneas; Pomroy, William
1992-01-01
An improved method of locating an underground mine fire by comparing the pattern of measured combustion product arrival times at detector locations with a real-time, computer-generated array of simulated patterns. A number of electronic fire detection devices are linked through telemetry to a control station on the surface. The mine's ventilation is modeled on a digital computer using network analysis software. The time required to locate a fire consists of the time required to model the mine's ventilation, generate the arrival time array, scan the array, and match measured arrival time patterns to the simulated patterns.
NASA Technical Reports Server (NTRS)
Gallenstein, J.; Huston, R. L.
1973-01-01
This paper presents an analysis of swimming motion with specific attention given to the flutter kick, the breast-stroke kick, and the breast stroke. The analysis is completely theoretical. It employs a mathematical model of the human body consisting of frustums of elliptical cones. Dynamical equations are written for this model including both viscous and inertia forces. These equations are then applied with approximated swimming strokes and solved numerically using a digital computer. The procedure is to specify the input of the swimming motion. The computer solution then provides the output displacement, velocity, and rotation or body roll of the swimmer.
Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems
NASA Technical Reports Server (NTRS)
Bujorianu, Marius C.; Bujorianu, Manuela L.
2009-01-01
In this paper, we sketch a framework for interdisciplinary modeling of space systems, by proposing a holistic view. We consider different system dimensions and their interaction. Specifically, we study the interactions between computation, physics, communication, uncertainty and autonomy. The most comprehensive computational paradigm that supports a holistic perspective on autonomous space systems is given by cyber-physical systems. For these, the state of the art consists of collaborative multi-engineering efforts that call for an adequate formal foundation. To achieve this, we propose leveraging the traditional content of formal modeling through a co-engineering process.
Standard model anatomy of WIMP dark matter direct detection. I. Weak-scale matching
NASA Astrophysics Data System (ADS)
Hill, Richard J.; Solon, Mikhail P.
2015-02-01
We present formalism necessary to determine weak-scale matching coefficients in the computation of scattering cross sections for putative dark matter candidates interacting with the Standard Model. We pay particular attention to the heavy-particle limit. A consistent renormalization scheme in the presence of nontrivial residual masses is implemented. Two-loop diagrams appearing in the matching to gluon operators are evaluated. Details are given for the computation of matching coefficients in the universal limit of WIMP-nucleon scattering for pure states of arbitrary quantum numbers, and for singlet-doublet and doublet-triplet mixed states.
Wang, Dongwen; Peleg, Mor; Tu, Samson W; Boxwala, Aziz A; Greenes, Robert A; Patel, Vimla L; Shortliffe, Edward H
2002-12-18
Representation of clinical practice guidelines in a computer-interpretable format is a critical issue for guideline development, implementation, and evaluation. We studied 11 types of guideline representation models that can be used to encode guidelines in computer-interpretable formats. We have consistently found in all reviewed models that primitives for representation of actions and decisions are necessary components of a guideline representation model. Patient states and execution states are important concepts that closely relate to each other. Scheduling constraints on representation primitives can be modeled as sequences, concurrences, alternatives, and loops in a guideline's application process. Nesting of guidelines provides multiple views to a guideline with different granularities. Integration of guidelines with electronic medical records can be facilitated by the introduction of a formal model for patient data. Data collection, decision, patient state, and intervention constitute four basic types of primitives in a guideline's logic flow. Decisions clarify our understanding on a patient's clinical state, while interventions lead to the change from one patient state to another.
Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid
2017-12-01
Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25 s/excitation source. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Sharing Research Models: Using Software Engineering Practices for Facilitation
Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.
2011-01-01
Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems’ behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices— the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780
CSI computer system/remote interface unit acceptance test results
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.
1992-01-01
The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) is discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC) programmed, space flight qualified computer and a flight data acquisition and filtering computer, developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open loop excitation, closed loop control, safing, RIU digital filtering, and RIU stand alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground based systems in performing real-time control-structure experiments.
Borresen, Jon; Lynch, Stephen
2012-01-01
In the 1940s, the first generation of modern computers used vacuum tube oscillators as their principal components; however, with the development of the transistor, such oscillator-based computers quickly became obsolete. As the demand for faster and lower power computers continues, transistors are themselves approaching their theoretical limit and emerging technologies must eventually supersede them. With the development of optical oscillators and Josephson junction technology, we are again presented with the possibility of using oscillators as the basic components of computers, and it is possible that the next generation of computers will be composed almost entirely of oscillatory devices. Here, we demonstrate how coupled threshold oscillators may be used to perform binary logic in a manner entirely consistent with modern computer architectures. We describe a variety of computational circuitry and demonstrate working oscillator models of both computation and memory. PMID:23173034
NASA Astrophysics Data System (ADS)
Koehl, Patrice; Orland, Henri; Delarue, Marc
2011-08-01
We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein that accounts for vdW and electrostatics interactions and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
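The iterative refinement of copy weights can be sketched generically: each copy's weight is the Boltzmann factor of its mean-field energy, averaged over the current weights of all other residues' copies. The sketch below uses made-up energies and omits the Poisson-Boltzmann solvation term, so it illustrates only the self-consistency loop, not the published method.

```python
import numpy as np

def scmf_weights(E_self, E_pair, beta=1.0, n_cycles=50, damping=0.5):
    """Self-consistent mean-field refinement of rotamer-copy weights.
    E_self[i][a]       : internal energy of copy a of residue i
    E_pair[i][j][a][b] : interaction energy of copy a of i with copy b of j
    Weights are Boltzmann factors of the mean-field energy, iterated to
    self-consistency (a generic sketch without the PB solvation term)."""
    n = len(E_self)
    w = [np.full(len(e), 1.0 / len(e)) for e in E_self]
    for _ in range(n_cycles):
        for i in range(n):
            mf = E_self[i].copy()
            for j in range(n):
                if j != i:
                    mf += E_pair[i][j] @ w[j]      # average over copies of j
            new = np.exp(-beta * (mf - mf.min()))
            new /= new.sum()
            w[i] = damping * w[i] + (1 - damping) * new
    return w

# Toy system: two residues with two copies each (all energies are made up).
E_self = [np.array([0.0, 1.0]), np.array([0.5, 0.0])]
E_pair = [[None, np.array([[0.0, 2.0], [2.0, 0.0]])],
          [np.array([[0.0, 2.0], [2.0, 0.0]]), None]]
print(scmf_weights(E_self, E_pair))
```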
Incremental improvements to the trout S9 biotransformation assay
In vitro substrate depletion methods have been used in conjunction with computational models to predict biotransformation impacts on chemical accumulation by fish. There is a consistent trend, however, toward overestimation of measured chemical residues resulting from controlled...
HUMAN--A Comprehensive Physiological Model.
ERIC Educational Resources Information Center
Coleman, Thomas G.; Randall, James E.
1983-01-01
Describes computer program (HUMAN) used to simulate physiological experiments on patient pathology. Program (available from authors, including versions for microcomputers) consists of dynamic interactions of over 150 physiological variables and integrating approximations of cardiovascular, renal, lung, temperature regulation, and some hormone…
Model selection for the North American Breeding Bird Survey: A comparison of methods
Link, William; Sauer, John; Niven, Daniel
2017-01-01
The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
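For reference, WAIC is computed from a matrix of pointwise log-likelihoods evaluated over posterior draws; a generic sketch (synthetic log-likelihoods, not the BBS models) is shown below.

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (n_samples, n_observations) matrix of pointwise
    log-likelihoods drawn from the posterior (generic sketch, not the
    exact BBS analysis)."""
    # log pointwise predictive density
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    # effective number of parameters: variance of log-likelihood over draws
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

rng = np.random.default_rng(2)
fake_log_lik = rng.normal(loc=-1.2, scale=0.3, size=(1000, 50))
print(waic(fake_log_lik))
```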
A computational study of pyrolysis reactions of lignin model compounds
Thomas Elder
2010-01-01
Enthalpies of reaction for the initial steps in the pyrolysis of lignin have been evaluated at the CBS-4m level of theory using fully substituted β-O-4 dilignols. Values for competing unimolecular decomposition reactions are consistent with results previously published for phenethyl phenyl ether models, but with lowered selectivity. Chain propagating reactions of free...
The Role of Human Intelligence in Computer-Based Intelligent Tutoring Systems.
ERIC Educational Resources Information Center
Epstein, Kenneth; Hillegeist, Eleanor
An Intelligent Tutoring System (ITS) consists of an expert problem-solving program in a subject domain, a tutoring model capable of remediation or primary instruction, and an assessment model that monitors student understanding. The Geometry Proof Tutor (GPT) is an ITS which was developed at Carnegie Mellon University and field tested in the…
GPU-based real-time soft tissue deformation with cutting and haptic feedback.
Courtecuisse, Hadrien; Jung, Hoeryong; Allard, Jérémie; Duriez, Christian; Lee, Doo Yong; Cotin, Stéphane
2010-12-01
This article describes a series of contributions in the field of real-time simulation of soft tissue biomechanics. These contributions address various requirements for interactive simulation of complex surgical procedures. In particular, this article presents results in the areas of soft tissue deformation, contact modelling, simulation of cutting, and haptic rendering, which are all relevant to a variety of medical interventions. The contributions described in this article share a common underlying model of deformation and rely on GPU implementations to significantly improve computation times. This consistency in the modelling technique and computational approach ensures coherent results as well as efficient, robust and flexible solutions. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kassem, M.; Soize, C.; Gagliardini, L.
2009-06-01
In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.
LASSIE: simulating large-scale models of biochemical systems on GPUs.
Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo
2017-05-10
Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods, to achieve a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute on the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, therefore reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE is able to perform fast simulations of even larger models, whereby the tested CPU implementation of LSODA failed to reach termination. LASSIE is therefore expected to make an important breakthrough in Systems Biology applications, for the execution of faster and in-depth computational analyses of large-scale models of complex biological systems.
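The core idea of generating mass-action ODEs from a reaction list and integrating them with a stiffness-switching solver can be sketched on a CPU with SciPy's LSODA method, which switches between non-stiff and stiff integrators much as the RKF/BDF switching described above. The tiny reaction system below is illustrative, not a LASSIE input.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Tiny reaction-based model (illustrative):
#   S + E -> ES (k1),   ES -> S + E (k2),   ES -> P + E (k3)
stoich = np.array([  # rows: species S, E, ES, P; columns: reactions
    [-1,  1,  0],
    [-1,  1,  1],
    [ 1, -1, -1],
    [ 0,  0,  1]])
k = np.array([1.0e-2, 1.0e-1, 5.0e-2])
reactants = [[0, 1], [2], [2]]            # species indices consumed by each reaction

def mass_action_rhs(t, y):
    """ODE right-hand side generated from the reaction list under mass-action
    kinetics, analogous to what LASSIE constructs automatically."""
    rates = np.array([k[r] * np.prod(y[reactants[r]]) for r in range(len(k))])
    return stoich @ rates

y0 = np.array([100.0, 10.0, 0.0, 0.0])
# LSODA switches between non-stiff and stiff integrators automatically.
sol = solve_ivp(mass_action_rhs, (0.0, 200.0), y0, method="LSODA")
print(sol.y[:, -1])
```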
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournier, J.; El-Genk, M.S.; Huang, L.
1999-01-01
The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.
Design Of Computer Based Test Using The Unified Modeling Language
NASA Astrophysics Data System (ADS)
Tedyyana, Agus; Danuri; Lidyawati
2017-12-01
Admission to Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN), and the independent test (UM-Polbeng) was conducted using paper-based tests (PBT). The paper-based test model has several weaknesses: it wastes paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams, and sequence diagrams. During the design of the application, attention was paid to protecting the test questions before they are displayed, through an encryption and decryption process; the RSA cryptography algorithm was used for this purpose. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application was a client-server model over a Local Area Network (LAN). The result of the design was a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
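The question-bank randomization step relies on the Fisher-Yates shuffle, which can be sketched as follows; the question identifiers are hypothetical.

```python
import random

def fisher_yates_shuffle(items, rng=None):
    """In-place Fisher-Yates shuffle, used to randomize the order of questions
    drawn from the question bank for each candidate."""
    rng = rng or random.Random()
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)            # 0 <= j <= i, inclusive
        items[i], items[j] = items[j], items[i]
    return items

question_ids = list(range(1, 11))        # hypothetical question-bank IDs
print(fisher_yates_shuffle(question_ids))
```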
Improved atmospheric 3D BSDF model in earthlike exoplanet using ray-tracing based method
NASA Astrophysics Data System (ADS)
Ryu, Dongok; Kim, Sug-Whan; Seong, Sehyun
2012-10-01
Studies of planetary radiative transfer computation have become important elements of the disk-averaged spectral characterization of potential exoplanets. In this paper, we report an improved ray-tracing based atmospheric simulation model as a part of a 3-D earth-like planet model with 3 principal sub-components, i.e. land, sea and atmosphere. Any changes in ray paths and their characteristics such as radiative power and direction are computed as they experience reflection, refraction, transmission, absorption and scattering. The improved atmospheric BSDF algorithm uses Q. Liu's combined Rayleigh and aerosol Henyey-Greenstein scattering phase function. The input cloud-free atmosphere model consists of 48 layers with vertical absorption profiles and a scattering layer, with input characteristics taken from the GIOVANNI database. Total Solar Irradiance data are obtained from the Solar Radiation and Climate Experiment (SORCE) mission. Using the aerosol scattering computation, we first tested the atmospheric scattering effects with an imaging simulation of HRIV, EPOXI. Then we examined the computational validity of the atmospheric model against measurements of global, direct and diffuse radiation taken from NREL (National Renewable Energy Laboratory) pyranometers and pyrheliometers at a ground station, for cases of a single incident angle and of simultaneous multiple incident angles of the solar beam.
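The aerosol part of the combined scattering phase function follows Henyey-Greenstein. A minimal sketch of a Rayleigh plus Henyey-Greenstein mixture is shown below; the asymmetry parameter and mixing weight are illustrative assumptions rather than values from Q. Liu's formulation.

```python
import numpy as np

def henyey_greenstein(cos_theta, g=0.7):
    """Henyey-Greenstein aerosol phase function; g is the asymmetry parameter."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

def rayleigh(cos_theta):
    """Rayleigh phase function for molecular scattering."""
    return 3.0 / (16.0 * np.pi) * (1.0 + cos_theta**2)

def combined_phase(cos_theta, g=0.7, aerosol_fraction=0.4):
    """Weighted Rayleigh + Henyey-Greenstein mixture (the mixing weight here
    is illustrative, not the paper's combined formulation)."""
    return (aerosol_fraction * henyey_greenstein(cos_theta, g)
            + (1.0 - aerosol_fraction) * rayleigh(cos_theta))

angles = np.linspace(0.0, np.pi, 5)
print(combined_phase(np.cos(angles)))
```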
An autonomous molecular computer for logical control of gene expression.
Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud
2004-05-27
Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems. Recently, simple molecular-scale autonomous programmable computers were demonstrated allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for 'logical' control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug.
A computational study of liposome logic: towards cellular computing from the bottom up
Smaldon, James; Romero-Campero, Francisco J.; Fernández Trillo, Francisco; Gheorghe, Marian; Alexander, Cameron
2010-01-01
In this paper we propose a new bottom-up approach to cellular computing, in which computational chemical processes are encapsulated within liposomes. This “liposome logic” approach (also called vesicle computing) makes use of supra-molecular chemistry constructs, e.g. protocells, chells, etc., as minimal cellular platforms to which logical functionality can be added. Modeling and simulations feature prominently in “top-down” synthetic biology, particularly in the specification, design and implementation of logic circuits through bacterial genome reengineering. The second contribution of this paper is the demonstration of a novel set of tools for the specification, modelling and analysis of “bottom-up” liposome logic. In particular, simulation and modelling techniques are used to analyse some example liposome logic designs, ranging from relatively simple NOT and NAND gates to SR latches and D flip-flops, all the way to 3-bit ripple counters. The approach we propose consists of specifying, by means of P systems, gene regulatory network-like systems operating inside proto-membranes. This P systems specification can be automatically translated and executed through a multiscale pipeline composed of a dissipative particle dynamics (DPD) simulator and Gillespie’s stochastic simulation algorithm (SSA). Finally, model selection and analysis can be performed through a model checking phase. This is the first paper we are aware of that brings to bear formal specifications, DPD, SSA and model checking on the problem of modeling target computational functionality in protocells. Potential chemical routes for the laboratory implementation of these simulations are also discussed, thus for the first time suggesting a potentially realistic physicochemical implementation for membrane computing from the bottom up. PMID:21886681
NASA Astrophysics Data System (ADS)
Woo, Mino; Wörner, Martin; Tischer, Steffen; Deutschmann, Olaf
2018-03-01
The multicomponent model and the effective diffusivity model are well-established diffusion models for the numerical simulation of single-phase flows consisting of several components, but have so far seldom been used for two-phase flows. In this paper, a specific numerical model for interfacial mass transfer by means of a continuous single-field concentration formulation is combined with the multicomponent model and the effective diffusivity model and is validated for multicomponent mass transfer. For this purpose, several test cases for one-dimensional physical or reactive mass transfer of ternary mixtures are considered. The numerical results are compared with analytical or numerical solutions of the Maxwell-Stefan equations and/or experimental data. The composition-dependent elements of the diffusivity matrix of the multicomponent model and the effective diffusivity model are found to differ substantially for non-dilute conditions. The species mole fraction or concentration profiles computed with both diffusion models are, however, very similar for all test cases and in good agreement with the analytical/numerical solutions or measurements. For practical computations, the effective diffusivity model is recommended due to its simplicity and lower computational costs.
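A common form of the effective diffusivity model mentioned above (the Wilke approximation) reduces the matrix of Maxwell-Stefan binary diffusivities to one composition-dependent coefficient per species. The sketch below is a generic illustration, not the specific closure used in the paper; the binary diffusivity values are placeholders.

```python
import numpy as np

def effective_diffusivity(x, D_binary):
    """Wilke-type effective diffusivities for a multicomponent mixture.

    x        : mole fractions of the n species (sums to 1)
    D_binary : n x n matrix of Maxwell-Stefan binary diffusivities D_ij
    returns  : n effective diffusivities D_i,eff = (1 - x_i) / sum_{j!=i} x_j / D_ij
    """
    n = len(x)
    D_eff = np.empty(n)
    for i in range(n):
        denom = sum(x[j] / D_binary[i, j] for j in range(n) if j != i)
        D_eff[i] = (1.0 - x[i]) / denom
    return D_eff

# Placeholder ternary mixture (composition and binary diffusivities are illustrative).
x = np.array([0.2, 0.3, 0.5])
D = np.array([[0.0,    1.0e-9, 2.0e-9],
              [1.0e-9, 0.0,    1.5e-9],
              [2.0e-9, 1.5e-9, 0.0]])
print(effective_diffusivity(x, D))
```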
Saliency detection using mutual consistency-guided spatial cues combination
NASA Astrophysics Data System (ADS)
Wang, Xin; Ning, Chen; Xu, Lizhong
2015-09-01
Saliency detection has received extensive interest due to its remarkable contribution to a wide range of computer vision and pattern recognition applications. However, most existing computational models are designed for detecting saliency in visible images or videos. When applied to infrared images, they may suffer from limitations in saliency detection accuracy and robustness. In this paper, we propose a novel algorithm to detect visual saliency in infrared images by mutual consistency-guided spatial cues combination. First, based on the luminance contrast and contour characteristics of infrared images, two effective saliency maps, i.e., a luminance contrast saliency map and a contour saliency map, are constructed. Afterwards, an adaptive combination scheme guided by mutual consistency is exploited to integrate these two maps to generate the spatial saliency map. This idea is motivated by the observation that the different maps are actually related to each other and that the fusion scheme should present a logically consistent view of them. Finally, an enhancement technique is adopted to incorporate spatial saliency maps at various scales into a unified multi-scale framework to improve the reliability of the final saliency map. Comprehensive evaluations on real-life infrared images and comparisons with many state-of-the-art saliency models demonstrate the effectiveness and superiority of the proposed method for saliency detection in infrared images.
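The exact mutual-consistency weighting used by the authors is not given in the abstract. The sketch below shows one plausible adaptive fusion of two saliency maps in which locations where the two cues agree are boosted; every detail here is an assumption for illustration only.

```python
import numpy as np

def fuse_saliency(map_a, map_b, eps=1e-6):
    """Adaptive fusion of two saliency maps guided by their mutual agreement.

    Each map is first normalized to [0, 1]; a per-pixel consistency term
    (1 minus the absolute difference) then boosts locations where the two
    cues agree.  This is an illustrative scheme, not the published one.
    """
    a = (map_a - map_a.min()) / (map_a.max() - map_a.min() + eps)
    b = (map_b - map_b.min()) / (map_b.max() - map_b.min() + eps)
    consistency = 1.0 - np.abs(a - b)          # high where the cues agree
    fused = consistency * (a + b) / 2.0        # agreement-weighted average
    return (fused - fused.min()) / (fused.max() - fused.min() + eps)

# Toy 4x4 example with synthetic luminance-contrast and contour cues.
rng = np.random.default_rng(0)
print(fuse_saliency(rng.random((4, 4)), rng.random((4, 4))))
```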
Galilean satellite geomorphology
NASA Technical Reports Server (NTRS)
Malin, M. C.
1983-01-01
Research on this task consisted of the development and initial application of photometric and photoclinometric models using interactive computer image processing and graphics. New programs were developed to compute viewing and illumination angles for every picture element in a Voyager image using C-matrices and final Voyager ephemerides. These values were then used to transform each pixel to an illumination-oriented coordinate system. An iterative integration routine permits slope displacements to be computed from brightness variations, and correlated in the cross-sun direction, resulting in two-dimensional topographic data. Figure 1 shows a 'wire-mesh' view of an impact crater on Ganymede, shown with a 10-fold vertical exaggeration. The crater, about 20 km in diameter, has a central mound and raised interior floor suggestive of viscous relaxation and rebound of the crater's topography. In addition to photoclinometry, the computer models that have been developed permit an examination of non-topographically derived variations in surface brightness.
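As a rough illustration of the photoclinometric step described above, the one-dimensional sketch below recovers relative topography along a scan line by converting brightness into local slope under an assumed Lambertian photometric function and then integrating the slopes. The photometric model, flat-surface incidence angle, and brightness values are assumptions; the Voyager-specific geometry handling is not reproduced.

```python
import numpy as np

def photoclinometry_profile(brightness, flat_brightness, sun_incidence_deg, dx):
    """1-D photoclinometry: brightness -> slope -> relative height.

    Assumes a Lambertian surface: I = I_flat * cos(i) / cos(i0), where i0 is
    the incidence angle on flat ground and i = i0 - slope along the
    sun-facing direction.  Slopes are then integrated along the scan line.
    """
    i0 = np.radians(sun_incidence_deg)
    cos_i = np.clip(np.cos(i0) * brightness / flat_brightness, -1.0, 1.0)
    slope = i0 - np.arccos(cos_i)              # local tilt toward the sun (radians)
    heights = np.concatenate(([0.0], np.cumsum(np.tan(slope) * dx)))
    return heights

# Toy scan line: brighter-than-flat pixels tilt toward the sun, darker ones away.
I = np.array([1.0, 1.1, 1.2, 1.0, 0.9, 0.8, 1.0])
print(photoclinometry_profile(I, flat_brightness=1.0, sun_incidence_deg=60.0, dx=1.0))
```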
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
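The error-masking step is an exact-match majority vote over the results of identical computations on different processors. A minimal sketch of such a voter is shown below; the three-way redundancy and the result type are illustrative assumptions, not details of the SIFT executive.

```python
from collections import Counter

def majority_vote(results):
    """Mask a single faulty result by majority voting over redundant copies.

    results : values produced by identical computations on different
              processors.  Returns the majority value, or raises if no
              value reaches a strict majority (an unmaskable fault).
    """
    value, count = Counter(results).most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError("no majority: fault cannot be masked")
    return value

# Three-way redundant computation in which one processor returns a bad value.
print(majority_vote([42, 42, 17]))   # -> 42; the faulty result is outvoted
```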
Plank, G; Prassl, AJ; Augustin, C
2014-01-01
Despite the evident multiphysics nature of the heart – it is an electrically controlled mechanical pump – most modeling studies have considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled, anatomically accurate and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustment to achieve integration into a consistent organ-scale model; dealing with technical difficulties such as the exchange of data between the electrophysiological and mechanical models, particularly when different spatio-temporal grids are used for discretization; and, finally, the implementation of advanced numerical techniques to deal with the substantial computational burden. In this study we report on progress made in developing a novel modeling framework suited to tackle these challenges. PMID:24043050
Tuo, Rui; Jeff Wu, C. F.
2016-07-19
Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is presented to estimate them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
Virtual gonio-spectrophotometer for validation of BRDF designs
NASA Astrophysics Data System (ADS)
Mihálik, Andrej; Ďurikovič, Roman
2011-10-01
Measurement of the appearance of an object consists of a group of measurements to characterize the color and surface finish of the object. This group of measurements involves the spectral energy distribution of propagated light, measured in terms of reflectance and transmittance, and the spatial energy distribution of that light, measured in terms of the bidirectional reflectance distribution function (BRDF). In this article we present the virtual gonio-spectrophotometer, a device that measures flux (power) as a function of the illumination and observation directions. Virtual gonio-spectrophotometer measurements allow the determination of the scattering profile of specimens, which can be used to verify the physical characteristics of the computer model used to simulate the scattering profile. Among the characteristics that we verify is the energy conservation of the computer model. A virtual gonio-spectrophotometer is utilized to find the correspondence between industrial measurements obtained from gloss meters and the parameters of a computer reflectance model.
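Energy conservation of a BRDF model can be checked numerically by verifying that the directional-hemispherical reflectance never exceeds one for any incidence angle. The sketch below integrates a test BRDF over the outgoing hemisphere with a simple midpoint quadrature; the Lambertian test function is a placeholder for whatever reflectance model is being validated.

```python
import numpy as np

def directional_hemispherical_reflectance(brdf, theta_i, n_theta=256, n_phi=512):
    """Integrate brdf(theta_i, theta_o, phi) * cos(theta_o) over the hemisphere.

    Energy conservation requires the result to be <= 1 for every incidence
    angle theta_i.  A simple midpoint rule in (theta_o, phi) is used.
    """
    theta_o = (np.arange(n_theta) + 0.5) * (np.pi / 2) / n_theta
    phi = (np.arange(n_phi) + 0.5) * (2 * np.pi) / n_phi
    d_theta, d_phi = (np.pi / 2) / n_theta, (2 * np.pi) / n_phi
    T, P = np.meshgrid(theta_o, phi, indexing="ij")
    integrand = brdf(theta_i, T, P) * np.cos(T) * np.sin(T)
    return np.sum(integrand) * d_theta * d_phi

# Placeholder reflectance model: an ideal Lambertian surface with albedo 0.8.
lambertian = lambda ti, to, phi: 0.8 / np.pi
print(directional_hemispherical_reflectance(lambertian, theta_i=np.radians(30)))  # ~0.8
```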
A method for interactive satellite failure diagnosis: Towards a connectionist solution
NASA Technical Reports Server (NTRS)
Bourret, P.; Reggia, James A.
1989-01-01
Various kinds of processes which allow one to make a diagnosis are analyzed. The analysis then focuses on one of these processes used for satellite failure diagnosis. This process consists of sending the satellite instructions about system status alterations: to mask the effects of one possible component failure or to look for additional abnormal measures. A formal model of this process is given. This model is an extension of a previously defined connectionist model which allows computation of ratios between the likelihoods of observed manifestations according to various diagnostic hypotheses. The expected mean value of these likelihood measures for each possible status of the satellite can be computed in a similar way. Therefore, it is possible to select the most appropriate status according to three different purposes: to confirm a hypothesis, to eliminate a hypothesis, or to choose between two hypotheses. Finally, a first connectionist schema for computing these expected mean values is given.
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced apart at distances corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure, their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs, the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
Petri-net-based 2D design of DNA walker circuits.
Gilbert, David; Heiner, Monika; Rohr, Christian
2018-01-01
We consider localised DNA computation, where a DNA strand walks along a binary decision graph to compute a binary function. One of the challenges for the design of reliable walker circuits consists in leakage transitions, which occur when a walker jumps into another branch of the decision graph. We automatically identify leakage transitions, which allows for a detailed qualitative and quantitative assessment of circuit designs, design comparison, and design optimisation. The ability to identify leakage transitions is an important step in the process of optimising DNA circuit layouts where the aim is to minimise the computational error inherent in a circuit while minimising the area of the circuit. Our 2D modelling approach of DNA walker circuits relies on coloured stochastic Petri nets which enable functionality, topology and dimensionality all to be integrated in one two-dimensional model. Our modelling and analysis approach can be easily extended to 3-dimensional walker systems.
Study of Magnetospheric Currents and Resultant Surface Magnetic Variations.
1980-04-17
compressed as indicated by solar-wind data, automatically injected a ring current with a strength consistent with the observed Dst. The computed inner...Figure 2; bottom panel of Figure 5. Agreement is very acceptable. The model overestimated the maximum depression of RX, but by a factor that is well...Contributed Papers Presented at the Solar-Terrestrial Physics Symposium, Innsbruck, 1978. Harel, M., R. A. Wolf, P. H. Reiff and M. Smiddy, Computer
Expansion of Enterprise Requirements and Acquisition Model
2012-06-04
upgrades in technology that made it more lethal with a smaller force. Computer technology, GPS, and stealth are just a few examples that allowed...The facility consists of banks of networked computers, large displays all built around a centralized workspace. It can be seen in Figure 3. The...first was to meet a gap in UHF satellite communications for the Navy. This was satisfied as a Tier-1 program by purchasing additional bandwidth
Wide baseline stereo matching based on double topological relationship consistency
NASA Astrophysics Data System (ADS)
Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang
2009-07-01
Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented, called double topological relationship consistency (DCTR), which combines the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods that depend on powerful invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been located in very different orientations. Epipolar geometry can also be recovered using RANSAC, possibly the most widely adopted method. With this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of this method are demonstrated in wide-baseline experiments on the image pairs.
Dempster, G.R.; Lutz, Gale A.
1968-01-01
Water-discharge, velocity, and slope variations for a 3.7-mile-long tidal reach of the Willamette River at Portland, Oreg., were defined from discharge measurements and river stage data collected between July 1962 and January 1965. Observed water discharge during tide-affected flows, during floods, and during backwater from the Columbia River and recorded stages at each end of the river reach were used to determine water discharge from two mathematical models. These models use a finite-difference method to solve the equations of moderately unsteady open-channel streamflow, and discharges are computed by an electronic digital computer. Discharges computed by using the mathematical models compare satisfactorily with observed discharges, except during the period of backwater from the annual flood of the Columbia River. The flow resistance coefficients used in the models vary with discharge; for one model, the coefficients for discharges above 30,000 cfs (cubic feet per second) are 12 and 24 percent less than the coefficient used for discharges below 30,000 cfs. Daily mean discharges were determined by use of one mathematical model for approximately two-thirds of the water year, October 1963 through September 1964. Agreement of computed with routed daily mean discharges is fair; above 30,000 cfs, average differences between the two discharges are about 10 percent, and below 30,000 cfs, computed daily discharges are consistently greater (by as much as 25 percent) than routed discharges. The other model was used to compute discharges for the unusually high flood flows of December 1964.
Consistent three-equation model for thin films
NASA Astrophysics Data System (ADS)
Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul
2017-11-01
Numerical simulations of thin films of Newtonian fluids flowing down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging the physical equations of fluid mechanics over the fluid depth with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either on the momentum balance equation or on the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent and that a third variable and a three-equation model are required to resolve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, the speed of kinematic waves and of solitary waves, and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.
Petri net modeling of high-order genetic systems using grammatical evolution.
Moore, Jason H; Hahn, Lance W
2003-11-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two DNA sequence variations. In the present study, we evaluate whether the Petri net approach is capable of identifying biochemical networks that are consistent with disease susceptibility due to higher order nonlinear interactions between three DNA sequence variations. The results indicate that our model-building approach is capable of routinely identifying good, but not perfect, Petri net models. Ideas for improving the algorithm for this high-dimensional problem are presented.
Computer use in primary care practices in Canada.
Anisimowicz, Yvonne; Bowes, Andrea E; Thompson, Ashley E; Miedema, Baukje; Hogg, William E; Wong, Sabrina T; Katz, Alan; Burge, Fred; Aubrey-Bassler, Kris; Yelland, Gregory S; Wodchis, Walter P
2017-05-01
To examine the use of computers in primary care practices. The international Quality and Cost of Primary Care study was conducted in Canada in 2013 and 2014 using a descriptive cross-sectional survey method to collect data from practices across Canada. Participating practices filled out several surveys, one of them being the Family Physician Survey, from which this study collected its data. All 10 Canadian provinces. A total of 788 family physicians. A computer use scale measured the extent to which family physicians integrated computers into their practices, with higher scores indicating a greater integration of computer use in practice. Analyses included t tests and χ2 tests comparing new and traditional models of primary care on measures of computer use and electronic health record (EHR) use, as well as descriptive statistics. Nearly all (97.5%) physicians reported using a computer in their practices, with moderately high computer use scale scores (mean [SD] score of 5.97 [2.96] out of 9), and many (65.7%) reported using EHRs. Physicians with practices operating under new models of primary care reported incorporating computers into their practices to a greater extent (mean [SD] score of 6.55 [2.64]) than physicians operating under traditional models did (mean [SD] score of 5.33 [3.15]; t726.60 = 5.84; P < .001; Cohen d = 0.42, 95% CI 0.808 to 1.627) and were more likely to report using EHRs (73.8% vs 56.7%; [Formula: see text]; P < .001; odds ratio = 2.15). Overall, there was statistically significant variability in computer use across provinces. Most family physicians in Canada have incorporated computers into their practices for administrative and scholarly activities; however, EHRs have not been adopted consistently across the country. Physicians with practices operating under the new, more collaborative models of primary care use computers more comprehensively and are more likely to use EHRs than those in practices operating under traditional models of primary care. Copyright © the College of Family Physicians of Canada.
Rudd, Michael E.
2014-01-01
Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4. PMID:25202253
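The edge-integration computation described here amounts to summing weighted, directed steps in log luminance along a path from the common background to the target region. The sketch below shows that computation in its simplest one-dimensional form; the luminance values and edge weights are illustrative, not taken from the cited experiments.

```python
import numpy as np

def edge_integrated_lightness(luminances, weights=None):
    """Integrate directed log-luminance steps along a path to a target region.

    luminances : luminance values of successive regions along the path,
                 starting at the common background and ending at the target.
    weights    : one gain per edge (border between adjacent regions); equal
                 weights reproduce a simple ratio-based lightness estimate.
    Returns the summed, weighted log-luminance steps (relative lightness).
    """
    steps = np.diff(np.log(np.asarray(luminances, dtype=float)))
    if weights is None:
        weights = np.ones_like(steps)
    return float(np.sum(weights * steps))

# Illustrative path: background (100 cd/m^2) -> surround (40) -> target (60).
print(edge_integrated_lightness([100.0, 40.0, 60.0], weights=[1.0, 0.7]))
```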
ERIC Educational Resources Information Center
Marcovitz, Alan B., Ed.
A particularly difficult area for many engineering students is the approximate nature of the relation between models and physical systems. This is notably true when the models consist of differential equations. An approach applied to this problem has been to use analog computers to assist in portraying the output of a model as it is progressively…
Jason Forthofer; Bret Butler
2007-01-01
A computational fluid dynamics (CFD) model and a mass-consistent model were used to simulate winds over a simple, low hill and their effect on simulated fire spread. The results suggest that the CFD wind field could significantly change simulated fire spread compared to traditional uniform winds. The CFD fire spread case may match reality better because the winds used in the fire...
Modeling compressible multiphase flows with dispersed particles in both dense and dilute regimes
NASA Astrophysics Data System (ADS)
McGrath, T.; St. Clair, J.; Balachandar, S.
2018-05-01
Many important explosives and energetics applications involve multiphase formulations employing dispersed particles. While considerable progress has been made toward developing mathematical models and computational methodologies for these flows, significant challenges remain. In this work, we apply a mathematical model for compressible multiphase flows with dispersed particles to existing shock and explosive dispersal problems from the literature. The model is cast in an Eulerian framework, treats all phases as compressible, is hyperbolic, and satisfies the second law of thermodynamics. It directly applies the continuous-phase pressure gradient as a forcing function for particle acceleration and thereby retains relaxed characteristics for the dispersed particle phase that remove the constituent material sound velocity from the eigenvalues. This is consistent with the expected characteristics of dispersed particle phases and can significantly improve the stable time-step size for explicit methods. The model is applied to test cases involving the shock and explosive dispersal of solid particles and compared to data from the literature. Computed results compare well with experimental measurements, providing confidence in the model and computational methods applied.
Toward a computational model of hemostasis
NASA Astrophysics Data System (ADS)
Leiderman, Karin; Danes, Nicholas; Schoeman, Rogier; Neeves, Keith
2017-11-01
Hemostasis is the process by which a blood clot forms to prevent bleeding at a site of injury. The formation time, size and structure of a clot depends on the local hemodynamics and the nature of the injury. Our group has previously developed computational models to study intravascular clot formation, a process confined to the interior of a single vessel. Here we present the first stage of an experimentally-validated, computational model of extravascular clot formation (hemostasis) in which blood through a single vessel initially escapes through a hole in the vessel wall and out a separate injury channel. This stage of the model consists of a system of partial differential equations that describe platelet aggregation and hemodynamics, solved via the finite element method. We also present results from the analogous, in vitro, microfluidic model. In both models, formation of a blood clot occludes the injury channel and stops flow from escaping while blood in the main vessel retains its fluidity. We discuss the different biochemical and hemodynamic effects on clot formation using distinct geometries representing intra- and extravascular injuries.
Progress towards an effective model for FeSe from high-accuracy first-principles quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Busemeyer, Brian; Wagner, Lucas K.
While the origin of superconductivity in the iron-based materials is still controversial, the proximity of the superconductivity to magnetic order is suggestive that magnetism may be important. Our previous work has suggested that first-principles Diffusion Monte Carlo (FN-DMC) can capture magnetic properties of iron-based superconductors that density functional theory (DFT) misses, but which are consistent with experiment. We report on the progress of efforts to find simple effective models consistent with the FN-DMC description of the low-lying Hilbert space of the iron-based superconductor, FeSe. We utilize a procedure outlined by Changlani et al.[1], which both produces parameter values and indications of whether the model is a good description of the first-principles Hamiltonian. Using this procedure, we evaluate several models of the magnetic part of the Hilbert space found in the literature, as well as the Hubbard model, and a spin-fermion model. We discuss which interaction parameters are important for this material, and how the material-specific properties give rise to these interactions. U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award No. FG02-12ER46875, as well as the NSF Graduate Research Fellowship Program.
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
NASA Astrophysics Data System (ADS)
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
Huayamave, Victor; Rose, Christopher; Serra, Sheila; Jones, Brendan; Divo, Eduardo; Moslehy, Faissal; Kassab, Alain J; Price, Charles T
2015-07-16
A physics-based computational model of neonatal Developmental Dysplasia of the Hip (DDH) following treatment with the Pavlik Harness (PV) was developed to obtain muscle force contributions in order to elucidate the biomechanical factors influencing the reduction of dislocated hips. Clinical observation suggests that reduction occurs in deep sleep involving passive muscle action. Consequently, a set of five (5) adductor muscles was identified as mediators of reduction using the PV. A Fung/Hill-type model was used to characterize muscle response. Four grades (1-4) of dislocation were considered, with one (1) being a low subluxation and four (4) a severe dislocation. A three-dimensional model of the pelvis-femur lower limb of a representative 10 week-old female was generated based on CT scans with the aid of anthropomorphic scaling of anatomical landmarks. The model was calibrated to achieve equilibrium at 90° flexion and 80° abduction. The hip was computationally dislocated according to the grade under investigation, the femur was restrained to move in an envelope consistent with PV restraints, and the dynamic response under passive muscle action and the effect of gravity was resolved. Model results with an anteversion angle of 50° show successful reduction for Grades 1-3, while Grade 4 failed to reduce with the PV. These results are consistent with a previous study based on a simplified anatomically-consistent synthetic model and with clinical reports of very low success of the PV for Grade 4. However, our model indicated that it is possible to achieve reduction of Grade 4 dislocation by hyperflexion and the resultant external rotation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Computation of Sound Generated by Viscous Flow Over a Circular Cylinder
NASA Technical Reports Server (NTRS)
Cox, Jared S.; Rumsey, Christopher L.; Brentner, Kenneth S.; Younis, Bassam A.
1997-01-01
The Lighthill acoustic analogy approach combined with Reynolds-averaged Navier Stokes is used to predict the sound generated by unsteady viscous flow past a circular cylinder assuming a correlation length of 10 cylinder diameters. The two-dimensional unsteady flow field is computed using two Navier-Stokes codes at a low Mach number over a range of Reynolds numbers from 100 to 5 million. Both laminar flow as well as turbulent flow with a variety of eddy viscosity turbulence models are employed. Mean drag and Strouhal number are examined, and trends similar to experiments are observed. Computing the noise within the Reynolds number regime where transition to turbulence occurs near the separation point is problematic: laminar flow exhibits chaotic behavior and turbulent flow exhibits strong dependence on the turbulence model employed. Comparisons of far-field noise with experiment at a Reynolds number of 90,000, therefore, vary significantly, depending on the turbulence model. At a high Reynolds number outside this regime, three different turbulence models yield self-consistent results.
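The Strouhal number reported in such simulations is obtained from the dominant vortex-shedding frequency, typically identified from the lift-coefficient history. A small sketch of that post-processing step is shown below; the synthetic lift signal stands in for actual CFD output.

```python
import numpy as np

def strouhal_number(lift_history, dt, diameter, velocity):
    """Estimate St = f D / U from the dominant frequency of a lift signal."""
    lift = np.asarray(lift_history) - np.mean(lift_history)
    spectrum = np.abs(np.fft.rfft(lift))
    freqs = np.fft.rfftfreq(len(lift), d=dt)
    f_shed = freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin
    return f_shed * diameter / velocity

# Synthetic lift coefficient oscillating at 20 Hz; D = 0.1 m, U = 10 m/s -> St = 0.2.
t = np.arange(0, 2.0, 1e-3)
cl = 0.5 * np.sin(2 * np.pi * 20.0 * t)
print(strouhal_number(cl, dt=1e-3, diameter=0.1, velocity=10.0))
```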
Computation of Vortex Shedding and Radiated Sound for a Circular Cylinder
NASA Technical Reports Server (NTRS)
Cox, Jared S.; Brentner, Kenneth S.; Rumsey, Christopher L.; Younis, Bassam A.
1997-01-01
The Lighthill acoustic analogy approach combined with Reynolds-averaged Navier Stokes is used to predict the sound generated by unsteady viscous flow past a circular cylinder assuming a correlation length of ten cylinder diameters. The two-dimensional unsteady flow field is computed using two Navier-Stokes codes at a low Mach number over a range of Reynolds numbers from 100 to 5 million. Both laminar flow as well as turbulent flow with a variety of eddy viscosity turbulence models are employed. Mean drag and Strouhal number are examined, and trends similar to experiments are observed. Computing the noise within the Reynolds number regime where transition to turbulence occurs near the separation point is problematic: laminar flow exhibits chaotic behavior and turbulent flow exhibits strong dependence on the turbulence model employed. Comparisons of far-field noise with experiment at a Reynolds number of 90,000, therefore, vary significantly, depending on the turbulence model. At a high Reynolds number outside this regime, three different turbulence models yield self-consistent results.
Design of Bioprosthetic Aortic Valves using biaxial test data.
Dabiri, Y; Paulson, K; Tyberg, J; Ronsky, J; Ali, I; Di Martino, E; Narine, K
2015-01-01
Bioprosthetic Aortic Valves (BAVs) do not have the serious limitations of mechanical aortic valves in terms of thrombosis. However, the lifetime of BAVs is too short, often requiring repeated surgeries. The lifetime of BAVs might be improved by using computer simulations of the structural behavior of the leaflets. The goal of this study was to develop a numerical model applicable to the optimization of the durability of BAVs. The constitutive equations were derived using biaxial tensile tests. Using a Fung model, stress and strain data were computed from the biaxial test data. SolidWorks was used to develop the geometry of the leaflets, and the ABAQUS finite element software package was used for finite element calculations. Results showed the model is consistent with experimental observations. Reaction forces computed by the model corresponded with experimental measurements when the biaxial test was simulated. Likewise, the locations of maximum stress corresponded to the locations of frequent tearing of BAV leaflets. Results suggest that BAV design can be optimized with respect to durability.
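A commonly used form of the Fung-type constitutive model for planar biaxial data is the exponential strain-energy function sketched below, from which second Piola-Kirchhoff stresses follow by differentiation with respect to the Green-Lagrange strains. The parameter values are placeholders, not the fitted values from this study.

```python
import numpy as np

def fung_biaxial_stress(E11, E22, c, a1, a2, a3):
    """Second Piola-Kirchhoff stresses for a planar Fung-type model.

    Strain energy: W = (c / 2) * (exp(Q) - 1),
    with Q = a1*E11^2 + a2*E22^2 + 2*a3*E11*E22 (Green-Lagrange strains).
    S_ij = dW/dE_ij gives the in-plane stresses below.
    """
    Q = a1 * E11**2 + a2 * E22**2 + 2.0 * a3 * E11 * E22
    S11 = c * np.exp(Q) * (a1 * E11 + a3 * E22)
    S22 = c * np.exp(Q) * (a2 * E22 + a3 * E11)
    return S11, S22

# Placeholder parameters (c in kPa, dimensionless exponents) at an equibiaxial strain.
print(fung_biaxial_stress(E11=0.1, E22=0.1, c=10.0, a1=15.0, a2=10.0, a3=5.0))
```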
NASA Technical Reports Server (NTRS)
Louis, P.; Gokhale, A. M.
1996-01-01
Computer simulation is a powerful tool for analyzing the geometry of three-dimensional microstructure. A computer simulation model is developed to represent the three-dimensional microstructure of a two-phase particulate composite where particles may be in contact with one another but do not overlap significantly. The model is used to quantify the "connectedness" of the particulate phase of a polymer matrix composite containing hollow carbon particles in a dielectric polymer resin matrix. The simulations are utilized to estimate the morphological percolation volume fraction for electrical conduction, and the effective volume fraction of the particles that actually take part in the electrical conduction. The calculated values of the effective volume fraction are used as an input for a self-consistent physical model for electrical conductivity. The predicted values of electrical conductivity are in very good agreement with the corresponding experimental data on a series of specimens having different particulate volume fraction.
Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions
NASA Astrophysics Data System (ADS)
Oprisan, Sorinel Adrian; Oprisan, Ana
2005-03-01
Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well-defined, smooth benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second sub-system consists of cytotoxic active (effector) cells — EC, with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutical, and radiotherapeutical treatments.
State estimation improves prospects for ocean research
NASA Astrophysics Data System (ADS)
Stammer, Detlef; Wunsch, C.; Fukumori, I.; Marshall, J.
Rigorous global ocean state estimation methods can now be used to produce dynamically consistent time-varying model/data syntheses, the results of which are being used to study a variety of important scientific problems. Figure 1 shows a schematic of a complete ocean observing and synthesis system that includes global observations and state-of-the-art ocean general circulation models (OGCM) run on modern computer platforms. A global observing system is described in detail in Smith and Koblinsky [2001], and the present status of ocean modeling and anticipated improvements are addressed by Griffies et al. [2001]. Here, the focus is on the third component of state estimation: the synthesis of the observations and a model into a unified, dynamically consistent estimate.
Nava, Michele M; Raimondi, Manuela T; Pietrabissa, Riccardo
2013-11-01
The main challenge in engineered cartilage consists in understanding and controlling the growth process towards a functional tissue. Mathematical and computational modelling can help in the optimal design of the bioreactor configuration and in a quantitative understanding of important culture parameters. In this work, we present a multiphysics computational model for the prediction of cartilage tissue growth in an interstitial perfusion bioreactor. The model consists of two coupled sub-models, one two-dimensional (2D) and one three-dimensional (3D). These sub-models account for the hydrodynamic microenvironment imposed by the bioreactor, using the Navier-Stokes and mass transport equations, and for the biomass growth. The biomass, assumed to be a phase comprising cells and the synthesised extracellular matrix, has been modelled using a moving boundary approach. In particular, the boundary at the fluid-biomass interface moves with a velocity that depends on the local oxygen concentration and viscous stress. In this work, we show that all parameters predicted by the 2D sub-model, such as oxygen concentration and wall shear stress, are systematically overestimated with respect to those predicted by the 3D sub-model, and so is the tissue growth, which directly depends on these parameters. This implies that further predictive models for tissue growth should take into account the three-dimensionality of the problem for any scaffold microarchitecture.
One-dimensional hybrid model of plasma-solid interaction in argon plasma at higher pressures
NASA Astrophysics Data System (ADS)
Jelínek, P.; Hrach, R.
2007-04-01
One of the problems important in present plasma science is the surface treatment of materials at higher pressures, including atmospheric-pressure plasma. The theoretical analysis of processes in such plasmas is difficult, because the theories derived for collisionless or slightly collisional plasma lose their validity at medium and high pressures; therefore the methods of computational physics are widely used. There are two basic ways to model the physical processes taking place during the interaction of plasma with immersed solids. The first technique is the particle approach; the second is fluid modelling. Both approaches have their limitations: the low efficiency of particle modelling and the limited accuracy of fluid models. Computer modelling endeavours to combine the advantages of the two approaches; this combination is called hybrid modelling. In our work a one-dimensional hybrid model of plasma-solid interaction has been developed for an electropositive plasma at higher pressures. We have used the hybrid model for this problem only as a test for our next applications, e.g. pulsed discharges, RF discharges, etc. The hybrid model consists of a combined molecular dynamics-Monte Carlo model for fast electrons and a fluid model for slow electrons and positive argon ions. The latter model also contains Poisson's equation, to obtain a self-consistent electric field distribution. The derived results include the spatial distributions of electric potential, concentrations and fluxes of individual charged species near the substrate for various pressures and for various probe voltage biases.
Activation of the marine ecosystem model 3D CEMBS for the Baltic Sea in operational mode
NASA Astrophysics Data System (ADS)
Dzierzbicka-Glowacka, Lidia; Jakacki, Jaromir; Janecki, Maciej; Nowicki, Artur
2013-04-01
The paper presents a new marine ecosystem model, 3D CEMBS, designed for the Baltic Sea. The ecosystem model is incorporated into the 3D POPCICE ocean-ice model. The current Baltic Sea model is based on the Community Earth System Model (CESM from the National Center for Atmospheric Research), which was adapted for the Baltic Sea as a coupled sea-ice model. It consists of the Community Ice Code (CICE model, version 4.0) and the Parallel Ocean Program (version 2.1). The ecosystem model is a biological submodel of the 3D CEMBS. It consists of eleven mass conservation equations: eleven second-order partial differential equations of the diffusion type with an advective term for phytoplankton, zooplankton, nutrients, dissolved oxygen, and dissolved and particulate organic matter. This model is an effective tool for solving the problem of ecosystem bioproductivity. The model is forced by 48-hour atmospheric forecasts provided by the UM model from the Interdisciplinary Centre for Mathematical and Computational Modelling of Warsaw University (ICM). The study was financially supported by the Polish State Committee of Scientific Research (grants: No N N305 111636, N N306 353239). Partial support for this study was also provided by the project Satellite Monitoring of the Baltic Sea Environment - SatBaltyk, funded by the European Union through the European Regional Development Fund, contract no. POIG 01.01.02-22-011/09. Calculations were carried out at the Academy Computer Centre in Gdańsk.
The Collaborative Seismic Earth Model Project
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.
2017-12-01
We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the man power of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge on 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as initial model, these refinements are contributed by external collaborators, and used to advance the CSEM to the next state. This mode of operation allows the CSEM to [1] harness the distributed man and computing power of the community, [2] to make consistent use of prior knowledge, and [3] to combine different tomographic techniques, needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements, computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.
Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model
NASA Astrophysics Data System (ADS)
Hazan, Aurélien
2017-05-01
We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several exact and approximate methods to compute the volume, inspired by operations research methods and the analysis of metabolic networks, are compared. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.
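One simple approximate method for such a volume computation is rejection sampling inside a bounding box of the polytope {x : Ax <= b}: the fraction of accepted samples times the box volume estimates the polytope volume. The sketch below is a generic illustration with a toy constraint set, not the model's actual financial-flow constraints.

```python
import numpy as np

def polytope_volume_mc(A, b, lower, upper, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the volume of {x : A @ x <= b}.

    lower, upper : per-coordinate bounds of an axis-aligned box known to
                   contain the polytope.  The estimate is the acceptance
                   rate times the box volume; accuracy improves as the
                   polytope fills more of the box and n_samples grows.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    inside = np.all(x @ A.T <= b, axis=1)
    box_volume = np.prod(upper - lower)
    return inside.mean() * box_volume

# Toy 2D constraint set: x >= 0, y >= 0, x + y <= 1 (true volume 0.5).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
print(polytope_volume_mc(A, b, lower=[0.0, 0.0], upper=[1.0, 1.0]))  # ~0.5
```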
NASA Astrophysics Data System (ADS)
Panda, Rudrashish; Sahu, Sivabrata; Rout, G. C.
2017-05-01
We communicate here a tight-binding theoretical model study of the band-filling effect on the charge gap in graphene-on-substrate. The Hamiltonian consists of nearest-neighbour electron hopping and a substrate-induced gap. Besides this, the Coulomb interaction is considered within the mean-field approximation in the paramagnetic limit. The electron occupancies at the two sublattices are calculated by the Green's function technique and are solved self-consistently. Finally the charge gap, i.e. Δ̄ = U[⟨n_a⟩ − ⟨n_b⟩], is calculated and computed numerically. The results are reported.
Quantum computers: Definition and implementations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Delgado, Carlos A.; Kok, Pieter
The DiVincenzo criteria for implementing a quantum computer have been seminal in focusing both experimental and theoretical research in quantum-information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. Therefore, the question is what are the general criteria for implementing quantum computers. To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following criteria: any quantum computer must consist of a quantum memory, with an additional structure that (1) facilitates a controlled quantum evolution of the quantum memory; (2) includes a method for information-theoretic cooling of the memory; and (3) provides a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we present a decision tree for selecting an avenue toward building a quantum computer. This is intended to help experimentalists determine the most natural paradigm given a particular physical implementation.
Computational modeling of mediator oxidation by oxygen in an amperometric glucose biosensor.
Simelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-02-07
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior.
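The governing equations in such a model are non-stationary reaction-diffusion equations solved by finite differences. The sketch below illustrates that numerical scheme for a single substrate in one layer with Michaelis-Menten kinetics; the layer thickness, diffusivity and kinetic constants are placeholders, and the full four-layer, multi-species model of the paper is not reproduced.

```python
import numpy as np

def simulate_enzyme_layer(n=101, L=1e-4, D=3e-10, Vmax=1e-4, Km=1e-3,
                          s_bulk=5e-3, t_end=1.0):
    """Explicit finite-difference solution of dS/dt = D*S_xx - Vmax*S/(Km + S).

    One substrate in a single enzyme layer of thickness L: zero-flux condition
    at the electrode (x = 0) and a fixed bulk concentration at x = L.
    """
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / D                      # keeps the explicit scheme stable
    S = np.zeros(n)
    S[-1] = s_bulk
    for _ in range(int(t_end / dt)):
        lap = (S[2:] - 2 * S[1:-1] + S[:-2]) / dx**2
        S[1:-1] += dt * (D * lap - Vmax * S[1:-1] / (Km + S[1:-1]))
        S[0] = S[1]                           # zero-flux boundary at the electrode
        S[-1] = s_bulk                        # bulk concentration held fixed
    return S

print(simulate_enzyme_layer()[:5])            # substrate profile near the electrode
```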
Dynamics of pulsatile flow in fractal models of vascular branching networks.
Bui, Anh; Sutalo, Ilija D; Manasseh, Richard; Liffman, Kurt
2009-07-01
Efficient regulation of blood flow is critically important to the normal function of many organs, especially the brain. To investigate the circulation of blood in complex, multi-branching vascular networks, a computer model consisting of a virtual fractal model of the vasculature and a mathematical model describing the transport of blood has been developed. Although limited by some constraints, in particular the use of a simplistic, uniformly distributed model of the cerebral vasculature and the omission of anastomoses, the proposed computer model was found to provide insights into blood circulation in the cerebral vascular branching network as well as the physiological and pathological factors which may affect its functionality. The numerical study conducted on a model of the middle cerebral artery region signified the important effects of vessel compliance, blood viscosity variation as a function of the blood hematocrit, and flow velocity profile on the distributions of flow and pressure in the vascular network.
Operations research investigations of satellite power stations
NASA Technical Reports Server (NTRS)
Cole, J. W.; Ballard, J. L.
1976-01-01
A systems model reflecting the design concepts of Satellite Power Stations (SPS) was developed. The model is of sufficient scope to include the interrelationships of the following major design parameters: the transportation to and between orbits; assembly of the SPS; and maintenance of the SPS. The systems model is composed of a set of equations that are nonlinear with respect to the system parameters and decision variables. The model determines a figure of merit from which alternative concepts concerning transportation, assembly, and maintenance of satellite power stations are studied. A hybrid optimization model was developed to optimize the system's decision variables. The optimization model consists of a random search procedure and the optimal-steepest descent method. A FORTRAN computer program was developed to enable the user to optimize nonlinear functions using the model. Specifically, the computer program was used to optimize Satellite Power Station system components.
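As a rough illustration of the hybrid optimization idea described above (a random search to locate a promising region followed by steepest descent), the sketch below applies both phases to a toy two-variable objective. The objective, bounds, and step-size rule are assumptions for illustration; the actual SPS model was a FORTRAN program with its own figure of merit.

```python
import numpy as np

def objective(v):
    # Toy nonlinear cost standing in for the SPS figure of merit (assumed).
    return (v[0] - 3.0) ** 2 + 10.0 * np.sin(v[1]) ** 2 + v[1] ** 2

def grad(f, v, h=1e-6):
    # Central-difference gradient.
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = h
        g[i] = (f(v + e) - f(v - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)

# Phase 1: random search over a box to find a promising start point.
candidates = rng.uniform(-5.0, 5.0, size=(500, 2))
best = min(candidates, key=objective)

# Phase 2: steepest descent with a simple backtracking step size.
v = best.copy()
for _ in range(200):
    g = grad(objective, v)
    step = 1.0
    while objective(v - step * g) > objective(v) - 1e-4 * step * (g @ g):
        step *= 0.5
        if step < 1e-10:
            break
    v = v - step * g

print("optimum (approx.):", v, "cost:", objective(v))
```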
CFD Evaluation of a 3rd Generation LDI Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Mongia, Hukam; Lee, Phil
2017-01-01
An effort was undertaken to perform CFD analysis of fluid flow in Lean-Direct Injection (LDI) combustors with axial swirl-venturi elements for next-generation LDI-3 combustor design. The National Combustion Code (NCC) was used to perform non-reacting and two-phase reacting flow computations for a nineteen-element injector array arranged in a three-module, 7-5-7 element configuration. All computations were performed with a consistent approach of mesh-optimization, spray-modeling, ignition and kinetics-modeling with the NCC. Computational predictions of the aerodynamics of the injector were used to arrive at an optimal injector design that meets effective area and fuel-air mixing criteria. LDI-3 emissions (EINOx, EICO and UHC) were compared with the previous generation LDI-2 combustor experimental data at representative engine cycle conditions.
NASA Astrophysics Data System (ADS)
Pang, ChunLi; Cao, TianGuang; Li, JunWei; Jia, MengWen; Zhang, SuHua; Ren, ShuXi; An, HaiLong; Zhan, Yong
2013-08-01
The family of calcium-binding proteins (CaBPs) consists of dozens of members and contributes to all aspects of the cell's function, from homeostasis to learning and memory. However, the Ca2+-binding mechanism is still unclear for most CaBPs. To identify the Ca2+-binding sites of CaBPs, this study presented a computational approach which combined fragment homology modeling with molecular dynamics simulation. For validation, we followed a two-step strategy: first, the approach was used to identify the Ca2+-binding sites of CaBPs that have EF-hand Ca2+-binding sites and a well-characterized binding mechanism. To accomplish this, eighteen crystal structures of CaBPs with 49 Ca2+-binding sites, including calmodulin, were selected for analysis. The computational method identified 43 of the 49 Ca2+-binding sites. Second, we applied the approach to large-conductance Ca2+-activated K+ (BK) channels, which do not have a clearly established Ca2+-binding mechanism. The simulated results are consistent with the experimental data. The computational approach may shed some light on the identification of Ca2+-binding sites in CaBPs.
Zero side force volute development
NASA Technical Reports Server (NTRS)
Anderson, P. G.; Franz, R. J.; Farmer, R. C.; Chen, Y. S.
1995-01-01
Collector scrolls on high performance centrifugal pumps are currently designed with methods which are based on very approximate flowfield models. Such design practices result in some volute configurations causing excessive side loads even at design flowrates. The purpose of this study was to develop and verify computational design tools which may be used to optimize volute configurations with respect to avoiding excessive loads on the bearings. The new design methodology consisted of a volute grid generation module and a computational fluid dynamics (CFD) module to describe the volute geometry and predict the radial forces for a given flow condition, respectively. Initially, the CFD module was used to predict the impeller and the volute flowfields simultaneously; however, the required computation time was found to be excessive for parametric design studies. A second computational procedure was developed which utilized an analytical impeller flowfield model and an ordinary differential equation to describe the impeller/volute coupling obtained from the literature, Adkins & Brennen (1988). The second procedure resulted in a 20- to 30-fold increase in computational speed for an analysis. The volute design analysis was validated by postulating a volute geometry, constructing a volute to this configuration, and measuring the steady radial forces over a range of flow coefficients. Excellent agreement between model predictions and observed pump operation proves the computational impeller/volute pump model to be a valuable design tool. Further applications are recommended to fully establish the benefits of this new methodology.
Hayashi, Yuichiro; Ishii, Shin; Urakubo, Hidetoshi
2014-01-01
Human observers perceive illusory rotations after the disappearance of circularly repeating patches containing dark-to-light luminance. This afterimage rotation is a very powerful phenomenon, but little is known about the mechanisms underlying it. Here, we use a computational model to show that the afterimage rotation can be explained by a combination of fast light adaptation and the physiological architecture of the early visual system, consisting of ON- and OFF-type visual pathways. In this retinal ON/OFF model, the afterimage rotation appeared as a rotation of focus lines of retinal ON/OFF responses. Focus lines rotated clockwise on a light background, but counterclockwise on a dark background. These findings were consistent with the results of psychophysical experiments, which were also performed by us. Additionally, the velocity of the afterimage rotation was comparable with that observed in our psychophysical experiments. These results suggest that the early visual system (including the retina) is responsible for the generation of the afterimage rotation, and that this illusory rotation may be systematically misinterpreted by our high-level visual system. PMID:25517906
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roper, V.P.; Kobayashi, R.
1988-02-01
Infinite-dilution fugacity coefficients were obtained for the system fluorene/phenanthrene at thirteen temperatures by fitting total pressure across the entire mole fraction range by a computer routine. A thermodynamically consistent routine, which allowed for both positive and negative pressure deviations from the ideal values, was used to correlate data over the full mole fraction range from 0 to 1. The four-suffix Margules activity coefficient model without modification essentially served this purpose since total pressures and total pressure derivatives with respect to mole fraction were negligible compared to pressure measurement precision. The water/ethanol system and binary systems comprised of aniline, chlorobenzene, acetonitrile, and other polar compounds were fit for total pressure across the entire mole fraction range for binary Vapor-Liquid Equilibria (VLE) using the rigorous, thermodynamically consistent Gibbs-Duhem Relation derived by Ibl and Dodge. Data correlation was performed using a computer least squares procedure. Infinite-dilution fugacity coefficients were obtained using a modified Margules activity coefficient model.
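For readers unfamiliar with Margules-type correlation, the sketch below shows the simpler two-parameter Margules form (not the four-suffix form used in the study) and how infinite-dilution activity coefficients follow directly from its limits; total pressure is then assembled with modified Raoult's law. All parameter and vapor-pressure values are illustrative, not the fluorene/phenanthrene results.

```python
import numpy as np

def margules_gammas(x1, A12, A21):
    """Two-parameter Margules activity coefficients (simpler than the
    four-suffix form used in the paper, but the fitting idea is the same)."""
    x2 = 1.0 - x1
    ln_g1 = x2**2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1**2 * (A21 + 2.0 * (A12 - A21) * x2)
    return np.exp(ln_g1), np.exp(ln_g2)

def total_pressure(x1, A12, A21, p1_sat, p2_sat):
    # Modified Raoult's law, ideal vapour phase assumed.
    g1, g2 = margules_gammas(x1, A12, A21)
    return x1 * g1 * p1_sat + (1.0 - x1) * g2 * p2_sat

# Illustrative parameters only (not the fluorene/phenanthrene values).
A12, A21 = 0.35, 0.20
p1_sat, p2_sat = 12.0, 8.0      # kPa, assumed

x1 = np.linspace(0.0, 1.0, 11)
print(total_pressure(x1, A12, A21, p1_sat, p2_sat))

# Infinite-dilution activity coefficients follow directly from the limits.
print("gamma1_inf =", np.exp(A12), "gamma2_inf =", np.exp(A21))
```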
Deep learning for computational chemistry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav
The rise and fall of artificial neural networks is well documented in the scientific literature of both the fields of computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.
A computer program for anisotropic shallow-shell finite elements using symbolic integration
NASA Technical Reports Server (NTRS)
Andersen, C. M.; Bowen, J. T.
1976-01-01
A FORTRAN computer program for anisotropic shallow-shell finite elements with variable curvature is described. A listing of the program is presented together with printed output for a sample case. Computation times and central memory requirements are given for several different elements. The program is based on a stiffness (displacement) finite-element model in which the fundamental unknowns consist of both the displacement and the rotation components of the reference surface of the shell. Two triangular and four quadrilateral elements are implemented in the program. The triangular elements have 6 or 10 nodes, and the quadrilateral elements have 4 or 8 nodes. Two of the quadrilateral elements have internal degrees of freedom associated with displacement modes which vanish along the edges of the elements (bubble modes). The triangular elements and the remaining two quadrilateral elements do not have bubble modes. The output from the program consists of arrays corresponding to the stiffness, the geometric stiffness, the consistent mass, and the consistent load matrices for individual elements. The integrals required for the generation of these arrays are evaluated by using symbolic (or analytic) integration in conjunction with certain group-theoretic techniques. The analytic expressions for the integrals are exact and were developed using the symbolic and algebraic manipulation language.
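The paper evaluates its shell-element integrals analytically with a symbolic manipulation language; as a much smaller stand-in for that idea, the sketch below uses SymPy to integrate the shape-function products of a 1-D linear bar element exactly. The element and symbols are assumptions chosen only to illustrate symbolic integration of a stiffness matrix.

```python
import sympy as sp

# Illustrative only: exact symbolic integration of a 1-D linear bar element
# stiffness matrix, standing in for the (much larger) shell-element integrals
# the paper evaluates analytically.
x, L, E, A = sp.symbols('x L E A', positive=True)

N = sp.Matrix([1 - x / L, x / L])              # linear shape functions (2x1)
B = N.diff(x)                                  # shape-function derivatives
K = (E * A * B * B.T).integrate((x, 0, L))     # exact element stiffness matrix

sp.pprint(sp.simplify(K))                      # -> (E*A/L) * [[1, -1], [-1, 1]]
```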
Stepwise construction of a metabolic network in Event-B: The heat shock response.
Sanwal, Usman; Petre, Luigia; Petre, Ion
2017-12-01
There is a high interest in constructing large, detailed computational models for biological processes. This is often done by putting together existing submodels and adding to them extra details/knowledge. The result of such approaches is usually a model that can only answer questions on a very specific level of detail, and thus, ultimately, is of limited use. We focus instead on an approach to systematically add details to a model, with formal verification of its consistency at each step. In this way, one obtains a set of reusable models, at different levels of abstraction, to be used for different purposes depending on the question to address. We demonstrate this approach using Event-B, a computational framework introduced to develop formal specifications of distributed software systems. We first describe how to model generic metabolic networks in Event-B. Then, we apply this method for modeling the biological heat shock response in eukaryotic cells, using Event-B refinement techniques. The advantage of using Event-B consists in having refinement as an intrinsic feature; this provides as a final result not only a correct model, but a chain of models automatically linked by refinement, each of which is provably correct and reusable. This is a proof-of-concept that refinement in Event-B is suitable for biomodeling, serving for mastering biological complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Efendioglu, Akin
2012-01-01
The main purpose of this study is to design a "Courseware Development Model" (CDM) and investigate its effects on pre-service teachers' academic achievements in the field of geography and attitudes toward computer-based education (ATCBE). The CDM consisted of three components: content (C), learning theory, namely, meaningful learning (ML), and…
ERIC Educational Resources Information Center
Gül, Seray Olçay
2016-01-01
There are many studies in the literature in which individuals with intellectual disabilities exhibit social skills deficits and which show the need for teaching these skills systematically. This study aims to investigate the effects of an intervention package consisting of computer-presented video modeling and Social Stories on individuals with…
Cosmic background radiation anisotropy in an open inflation, cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Kamionkowski, Marc; Ratra, Bharat; Spergel, David N.; Sugiyama, Naoshi
1994-01-01
We compute the cosmic background radiation anisotropy, produced by energy-density fluctuations generated during an early epoch of inflation, in an open cosmological model based on the cold dark matter scenario. For Omega_0 approximately 0.3-0.4, the Cosmic Background Explorer (COBE) normalized open model appears to be consistent with most observations.
Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren
2017-11-01
Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of fifteen 1-min pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
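A minimal sketch of a perceptual-control-style tracking loop is given below: an output variable is adjusted so that a perceived quantity is driven toward an internally set reference, and the resulting cursor trace can be compared with a target record. The loop structure, gain, slowing factor, and target signal are assumptions for illustration and do not reproduce the paper's model or fitting algorithm.

```python
import numpy as np

def simulate_pct(target, gain, reference, slowing, dt=0.01):
    """Minimal one-dimensional perceptual-control loop (illustrative, not the
    paper's exact model): the output moves a cursor so that the perceived
    cursor-target distance is driven toward an internal reference value."""
    cursor = np.zeros_like(target)
    output = 0.0
    for t in range(1, len(target)):
        perception = cursor[t - 1] - target[t - 1]          # perceived quantity
        error = reference - perception                      # compare with reference
        output += dt * slowing * (gain * error - output)    # leaky integration
        cursor[t] = cursor[t - 1] + dt * output
    return cursor

rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.01)
target = np.cumsum(rng.normal(0, 0.02, len(t)))             # pseudorandom 1-D target

cursor = simulate_pct(target, gain=80.0, reference=0.0, slowing=5.0)
print("tracking RMSE:", np.sqrt(np.mean((cursor - target) ** 2)))
```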
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
NASA Astrophysics Data System (ADS)
Pallant, Amy; Lee, Hee-Sun
2015-04-01
Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation tasks with three increasingly complex dynamic climate models. Each scientific argumentation task consisted of four parts: multiple-choice claim, open-ended explanation, five-point Likert scale uncertainty rating, and open-ended uncertainty rationale. We coded 1,294 scientific arguments in terms of a claim's consistency with current scientific consensus, whether explanations were model based or knowledge based, and categorized the sources of uncertainty (personal vs. scientific). We used chi-square and ANOVA tests to identify significant patterns. Results indicate that (1) a majority of students incorporated models as evidence to support their claims, (2) most students used model output results shown on graphs to confirm their claim rather than to explain simulated molecular processes, (3) students' dependence on model results and their uncertainty rating diminished as the dynamic climate models became more and more complex, (4) some students' misconceptions interfered with observing and interpreting model results or simulated processes, and (5) students' uncertainty sources reflected more frequently on their assessment of personal knowledge or abilities related to the tasks than on their critical examination of scientific evidence resulting from models. These findings have implications for teaching and research related to the integration of scientific argumentation and modeling practices to address complex Earth systems.
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
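As a numerical illustration of the "locally Gaussian" idea (a Gaussian process whose local variance itself fluctuates), the sketch below draws samples from a variance mixture of Gaussians and compares their empirical exceedance probabilities with those of a single Gaussian of the same overall variance. The lognormal variance distribution and the threshold levels are assumptions for illustration, not quantities from the paper.

```python
import numpy as np
from math import erf, sqrt

# A velocity whose local standard deviation fluctuates has a marginal
# distribution that is a variance mixture of Gaussians, with heavier tails
# than a single Gaussian of the same overall variance.
rng = np.random.default_rng(2)
n = 200_000
local_sigma = rng.lognormal(mean=0.0, sigma=0.4, size=n)   # fluctuating local std (assumed)
u = rng.normal(0.0, 1.0, size=n) * local_sigma             # locally Gaussian sample

levels = np.array([1.0, 2.0, 3.0, 4.0]) * u.std()
empirical = np.array([(u > lv).mean() for lv in levels])
single_gaussian = np.array([0.5 * (1.0 - erf(lv / (u.std() * sqrt(2.0))))
                            for lv in levels])

print("exceedance (locally Gaussian):", empirical)
print("exceedance (single Gaussian): ", single_gaussian)
```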
MODIS Solar Diffuser: Modelled and Actual Performance
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Xiong, Xiao-Xiong; Esposito, Joe; Wang, Xin-Dong; Krebs, Carolyn (Technical Monitor)
2001-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument's solar diffuser is used in its radiometric calibration for the reflective solar bands (VIS, NIR, and SWIR) ranging from 0.41 to 2.1 micron. The sun illuminates the solar diffuser either directly or through an attenuation screen. The attenuation screen consists of a regular array of pin holes. The attenuated illumination pattern on the solar diffuser is not uniform, but consists of a multitude of pin-hole images of the sun. This non-uniform illumination produces small, but noticeable radiometric effects. A description of the computer model used to simulate the effects of the attenuation screen is given and the predictions of the model are compared with actual, on-orbit, calibration measurements.
RANS computations of tip vortex cavitation
NASA Astrophysics Data System (ADS)
Decaix, Jean; Balarac, Guillaume; Dreyer, Matthieu; Farhat, Mohamed; Münch, Cécile
2015-12-01
The present study is related to the development of the tip vortex cavitation in Kaplan turbines. The investigation is carried out on a simplified test case consisting of a NACA0009 blade with a gap between the blade tip and the side wall. Computations with and without cavitation are performed using RANS modelling and a transport equation for the liquid volume fraction. Compared with experimental data, the RANS computations turn out to be able to capture accurately the development of the tip vortex. The simulations have also highlighted the influence of cavitation on the tip vortex trajectory.
NASA Astrophysics Data System (ADS)
Wobus, C.; Tucker, G.; Anderson, R.; Kean, J.; Small, E.; Hancock, G.
2007-12-01
The cross-sectional form of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate changes in channel cross-sectional geometry through time. We have developed a 2D numerical model that computes the formation of a channel in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Boundary shear stress is calculated using a simple approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local boundary surface. The resulting model predictions for the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with the predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. The efficiency of our model makes it suitable for calculations of long-term morphologic change both in single cross-sections and in series of cross-sections arrayed downstream. For a uniform substrate, the model predicts a strong tendency toward a fixed width-to-depth ratio, regardless of gradient or discharge. The model predicts power-law relationships between width and discharge with an exponent near 2/5, and between width and gradient with an exponent near -1/5. Recent enhancements to the model include the addition of sediment, which increases the width-to-depth ratio at steady state by favoring erosion of the channel walls relative to the channel bed (the "cover effect"). Inclusion of a probability density function of discharges with a simple parameterization of weathering along channel banks leads to the formation of model strath terraces. Downstream changes in substrate erodibility or tectonic uplift rate lead to step-function changes in channel width, consistent with empirical observations. Finally, explicit inclusion of bedload transport allows channel width, gradient, and the pattern of sediment flux to evolve dynamically, allowing us to explore the response of bedrock channels to both spatial patterns of rock uplift, and temporal variations in sediment input.
Khurelbaatar, Tsolmonbaatar; Kim, Kyungsoo; Hyuk Kim, Yoon
2015-11-01
Because direct measurement of joint loads is difficult, computational musculoskeletal models have been developed to predict mechanical joint loads on the human spine, such as the forces and moments applied to vertebral and facet joints and the forces that act on ligaments and muscles. However, many whole-spine models lack certain elements. For example, the detailed facet joints in the cervical region or the whole spine region may not be implemented. In this study, a detailed cervico-thoraco-lumbar multibody musculoskeletal model with all major ligaments, separated structures of facet contact and intervertebral disk joints, and the rib cage was developed. The model was validated by comparing the intersegmental rotations, ligament tensile forces, facet joint contact forces, compressive and shear forces on disks, and muscle forces to those reported in previous experimental and computational studies, both by region (cervical, thoracic, or lumbar) and for the whole model. The comparisons demonstrated that our whole spine model is consistent with in vitro and in vivo experimental studies and with computational studies. The model developed in this study can be used in further studies to better understand spine structures and injury mechanisms of spinal disorders.
NASA Astrophysics Data System (ADS)
Fecher, T.; Pail, R.; Gruber, T.
2017-05-01
GOCO05c is a gravity field model computed as a combined solution of a satellite-only model and a global data set of gravity anomalies. It is resolved up to degree and order 720. It is the first model applying regionally varying weighting. Since this causes strong correlations among all gravity field parameters, the resulting full normal equation system with a size of 2 TB had to be solved rigorously by applying high-performance computing. GOCO05c is the first combined gravity field model independent of EGM2008 that contains GOCE data of the whole mission period. The performance of GOCO05c is externally validated by GNSS-levelling comparisons, orbit tests, and computation of the mean dynamic topography, achieving at least the quality of existing high-resolution models. Results show that the additional GOCE information is highly beneficial in insufficiently observed areas, and that due to the weighting scheme of individual data the spectral and spatial consistency of the model is significantly improved. Due to usage of fill-in data in specific regions, the model cannot be used for physical interpretations in these regions.
MCAID--A Generalized Text Driver.
ERIC Educational Resources Information Center
Ahmed, K.; Dickinson, C. J.
MCAID is a relatively machine-independent technique for writing computer-aided instructional material consisting of descriptive text, multiple choice questions, and the ability to call compiled subroutines to perform extensive calculations. It was specially developed to incorporate test-authoring around complex mathematical models to explore a…
Quantitative Adverse Outcome Pathways and Their Application to Predictive Toxicology
A quantitative adverse outcome pathway (qAOP) consists of one or more biologically based, computational models describing key event relationships linking a molecular initiating event (MIE) to an adverse outcome. A qAOP provides quantitative, dose–response, and time-course p...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Cheng-Hsien; Department of Water Resources and Environmental Engineering, Tamkang University, New Taipei City 25137, Taiwan; Low, Ying Min, E-mail: ceelowym@nus.edu.sg
2016-05-15
Sediment transport is fundamentally a two-phase phenomenon involving fluid and sediments; however, many existing numerical models are one-phase approaches, which are unable to capture the complex fluid-particle and inter-particle interactions. In the last decade, two-phase models have gained traction; however, there are still many limitations in these models. For example, several existing two-phase models are confined to one-dimensional problems; in addition, the existing two-dimensional models simulate only the region outside the sand bed. This paper develops a new three-dimensional two-phase model for simulating sediment transport in the sheet flow condition, incorporating recently published rheological characteristics of sediments. The enduring-contact, inertial, and fluid viscosity effects are considered in determining sediment pressure and stresses, enabling the model to be applicable to a wide range of particle Reynolds number. A k − ε turbulence model is adopted to compute the Reynolds stresses. In addition, a novel numerical scheme is proposed, thus avoiding numerical instability caused by high sediment concentration and allowing the sediment dynamics to be computed both within and outside the sand bed. The present model is applied to two classical problems, namely, sheet flow and scour under a pipeline with favorable results. For sheet flow, the computed velocity is consistent with measured data reported in the literature. For pipeline scour, the computed scour rate beneath the pipeline agrees with previous experimental observations. However, the present model is unable to capture vortex shedding; consequently, the sediment deposition behind the pipeline is overestimated. Sensitivity analyses reveal that model parameters associated with turbulence have strong influence on the computed results.
NASA Astrophysics Data System (ADS)
Erdt, Marius; Sakas, Georgios
2010-03-01
This work presents a novel approach for model based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer aided segmentation system is expected to support computer aided diagnosis and operation planning. We have developed a deformable model based approach based on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. Those local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our work flow consists of two steps: 1.) a user guided positioning and 2.) an automatic model adaptation using affine and free form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies, the system also offers real time mesh editing tools for a quick refinement of the segmentation result. Evaluation results based on 30 clinical cases using CT data sets show an average dice correlation coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are lower than 6 seconds which makes the proposed system suitable for an application in clinical practice.
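The abstract reports segmentation accuracy as an average Dice coefficient of 93% against ground truth; the short sketch below shows how that overlap measure is computed for two binary masks. The toy masks are assumptions for illustration; in practice the comparison would be between a 3-D automatic segmentation and a manual delineation.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2-D example (real use would compare a 3-D CT segmentation to ground truth).
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True
print(round(dice_coefficient(auto, manual), 3))   # ~0.694
```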
Modeling macro- and microstructures of gas-metal-arc welded HSLA-100 steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Z.; Debroy, T.
1999-06-01
Fluid flow and heat transfer during gas-metal-arc welding (GMAW) of HSLA-100 steel were studied using a transient, three-dimensional, turbulent heat transfer and fluid flow model. The temperature and velocity fields, cooling rates, and shape and size of the fusion and heat-affected zones (HAZs) were calculated. A continuous-cooling-transformation (CCT) diagram was computed to aid in the understanding of the observed weld metal microstructure. The computed results demonstrate that the dissipation of heat and momentum in the weld pool is significantly aided by turbulence, thus suggesting that previous modeling results based on laminar flow need to be re-examined. A comparison of the calculated fusion and HAZ geometries with their corresponding measured values showed good agreement. Furthermore, finger penetration, a unique geometric characteristic of gas-metal-arc weld pools, could be satisfactorily predicted from the model. The ability to predict these geometric variables and the agreement between the calculated and the measured cooling rates indicate the appropriateness of using a turbulence model for accurate calculations. The microstructure of the weld metal consisted mainly of acicular ferrite with small amounts of bainite. At high heat inputs, small amounts of allotriomorphic and Widmanstaetten ferrite were also observed. The observed microstructures are consistent with those expected from the computed CCT diagram and the cooling rates. The results presented here demonstrate significant promise for understanding both macro- and microstructures of steel welds from the combination of the fundamental principles from both transport phenomena and phase transformation theory.
NASA Astrophysics Data System (ADS)
Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald
2017-04-01
With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core Architecture, the Intel Xeon Phi Knights Landing.
Cosmic Strings Stabilized by Quantum Fluctuations
NASA Astrophysics Data System (ADS)
Weigel, H.
2017-03-01
Fermion quantum corrections to the energy of cosmic strings are computed. A number of rather technical tools are needed to formulate this correction, and isospin and gauge invariance are employed to verify consistency of these tools. These corrections must also be included when computing the energy of strings that are charged by populating fermion bound states in their backgrounds. It is found that charged strings are dynamically stabilized in theories similar to the standard model of particle physics.
Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing
2014-05-01
...previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments...against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while
The Air Force Geophysics Laboratory Standalone Data Acquisition System: A Functional Description.
1980-10-09
the board are a buffer for the RUN/HALT front panel switch and a retriggerable oneshot multivibrator. This latter circuit senses the SRUN pulse train...recording on the data tapes, and providing the master timing source for data acquisition. An Electronic Research Company (ERC) model 2446 digital...the computer is fed to a retriggerable oneshot multivibrator on the board. (SRUN consists of a pulse train that is present when the computer is running
Moore, Jason H; Boczko, Erik M; Summar, Marshall L
2005-02-01
Understanding how DNA sequence variations impact human health through a hierarchy of biochemical and physiological systems is expected to improve the diagnosis, prevention, and treatment of common, complex human diseases. We have previously developed a hierarchical dynamic systems approach based on Petri nets for generating biochemical network models that are consistent with genetic models of disease susceptibility. This modeling approach uses an evolutionary computation approach called grammatical evolution as a search strategy for optimal Petri net models. We have previously demonstrated that this approach routinely identifies biochemical network models that are consistent with a variety of genetic models in which disease susceptibility is determined by nonlinear interactions between two or more DNA sequence variations. We review here this approach and then discuss how it can be used to model biochemical and metabolic data in the context of genetic studies of human disease susceptibility.
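To make the Petri-net ingredient concrete, the sketch below implements only the basic token-firing semantics of a place/transition net for a toy enzymatic pathway; the grammatical-evolution search over candidate networks described in the abstract is not shown. The places, transitions, and token counts are assumptions for illustration.

```python
# Minimal place/transition Petri net, illustrating only the firing semantics
# such models build on (the paper's grammatical-evolution search is not shown).
# The toy pathway and token counts are assumptions for illustration.
marking = {"substrate": 5, "enzyme": 1, "complex": 0, "product": 0}

transitions = {
    # name: (input places and weights, output places and weights)
    "bind":    ({"substrate": 1, "enzyme": 1}, {"complex": 1}),
    "convert": ({"complex": 1},                {"product": 1, "enzyme": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

# Fire any enabled transition until the net deadlocks.
while True:
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break
    fire(ready[0])

print(marking)   # all substrate tokens end up as product tokens
```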
The binding domain of the HMGB1 inhibitor carbenoxolone: Theory and experiment
NASA Astrophysics Data System (ADS)
Mollica, Luca; Curioni, Alessandro; Andreoni, Wanda; Bianchi, Marco E.; Musco, Giovanna
2008-05-01
We present a combined computational and experimental study of the interaction of the Box A of the HMGB1 protein and carbenoxolone, an inhibitor of its pro-inflammatory activity. The computational approach consists of classical molecular dynamics (MD) simulations based on the GROMOS force field with quantum-refined (QRFF) atomic charges for the ligand. Experimental data consist of fluorescence intensities, chemical shift displacements, saturation transfer differences and intermolecular Nuclear Overhauser Enhancement signals. Good agreement is found between observations and the conformation of the ligand-protein complex resulting from QRFF-MD. In contrast, simple docking procedures and MD based on the unrefined force field provide models inconsistent with experiment. The ligand-protein binding is dominated by non-directional interactions.
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Danehy, Paul M.; Johansen, Craig T.; Ashcraft, Scott W.; Novak, Luke A.
2013-01-01
Numerical predictions of the Mars Science Laboratory reaction control system jets interacting with a Mach 10 hypersonic flow are compared to experimental nitric oxide planar laser-induced fluorescence data. The steady Reynolds Averaged Navier Stokes equations using the Baldwin-Barth one-equation turbulence model were solved using the OVERFLOW code. The experimental fluorescence data used for comparison consists of qualitative two-dimensional visualization images, qualitative reconstructed three-dimensional flow structures, and quantitative two-dimensional distributions of streamwise velocity. Through modeling of the fluorescence signal equation, computational flow images were produced and directly compared to the qualitative fluorescence data.
Takashima, S
2001-04-05
The large dipole moment of globular proteins has long been well known from detailed studies using dielectric relaxation and electro-optical methods. The search for the origin of these dipole moments, however, must be based on detailed knowledge of protein structure at atomic resolution. At present, we have two sources of information on the structure of protein molecules: (1) x-ray databases obtained in crystalline state; (2) NMR databases obtained in solution state. While x-ray databases consist of only one model, NMR databases, because of the fluctuation of the protein folding in solution, consist of a number of models, thus enabling the computation of the dipole moment to be repeated for all these models. The aim of this work, using these databases, is a detailed investigation of the interdependence between the structure and dipole moment of protein molecules. The dipole moment of protein molecules has roughly two components: one dipole moment is due to surface charges and the other, core dipole moment, is due to polar groups such as N--H and C==O bonds. The computation of the surface charge dipole moment consists of two steps: (A) calculation of the pK shifts of charged groups due to electrostatic interactions and (B) calculation of the dipole moment using the pK corrected for electrostatic shifts. The dipole moments of several proteins were computed using both NMR and x-ray databases. The dipole moments from these two sets of calculations are, with a few exceptions, in good agreement with one another and also with measured dipole moments.
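A minimal sketch of the core calculation, computing a dipole moment from partial charges and atomic coordinates, is given below; in practice this would be repeated for every model in an NMR ensemble and would use pK-corrected charges as described above. The charge set and coordinates here are toy values, and the choice of reference point is an assumption for illustration.

```python
import numpy as np

E_CHARGE = 1.602176634e-19      # elementary charge [C]
DEBYE = 3.33564e-30             # 1 debye in C·m

def dipole_moment_debye(charges_e, coords_angstrom):
    """Dipole moment magnitude (debye) of point partial charges,
    p = sum_i q_i (r_i - r0), with r0 taken as the centre of absolute charge
    so that a set with nonzero net charge still gives a well-defined value."""
    q = np.asarray(charges_e, dtype=float)
    r = np.asarray(coords_angstrom, dtype=float) * 1e-10        # Å -> m
    r0 = (np.abs(q)[:, None] * r).sum(axis=0) / np.abs(q).sum()
    p = (q[:, None] * (r - r0)).sum(axis=0) * E_CHARGE
    return float(np.linalg.norm(p) / DEBYE)

# Toy example: unit charges +e and -e separated by 1 Å -> about 4.8 D.
# For a protein, this would be repeated for every model in the NMR ensemble.
print(round(dipole_moment_debye([1.0, -1.0], [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]), 2))
```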
Overcoming complexities for consistent, continental-scale flood mapping
NASA Astrophysics Data System (ADS)
Smith, Helen; Zaidman, Maxine; Davison, Charlotte
2013-04-01
The EU Floods Directive requires all member states to produce flood hazard maps by 2013. Although flood mapping practices are well developed in Europe, there are huge variations in the scale and resolution of the maps between individual countries. Since extreme flood events are rarely confined to a single country, this is problematic, particularly for the re/insurance industry whose exposures often extend beyond country boundaries. Here, we discuss the challenges of large-scale hydrological and hydraulic modelling, using our experience of developing a 12-country model and set of maps, to illustrate how consistent, high-resolution river flood maps across Europe can be produced. The main challenges addressed include: data acquisition; manipulating the vast quantities of high-resolution data; and computational resources. Our starting point was to develop robust flood-frequency models that are suitable for estimating peak flows for a range of design flood return periods. We used the index flood approach, based on a statistical analysis of historic river flow data pooled on the basis of catchment characteristics. Historical flow data were therefore sourced for each country and collated into a large pan-European database. After a lengthy validation these data were collated into 21 separate analysis zones or regions, grouping smaller river basins according to their physical and climatic characteristics. The very large continental scale basins were each modelled separately on account of their size (e.g. Danube, Elbe, Drava and Rhine). Our methodology allows the design flood hydrograph to be predicted at any point on the river network for a range of return periods. Using JFlow+, JBA's proprietary 2D hydraulic hydrodynamic model, the calculated out-of-bank flows for all watercourses with an upstream drainage area exceeding 50km2 were routed across two different Digital Terrain Models in order to map the extent and depth of floodplain inundation. This generated modelling for a total river length of approximately 250,000km. Such a large-scale, high-resolution modelling exercise is extremely demanding on computational resources and would have been unfeasible without the use of Graphics Processing Units on a network of standard specification gaming computers. Our GPU grid is the world's largest flood-dedicated computer grid. The European river basins were split out into approximately 100 separate hydraulic models and managed individually, although care was taken to ensure flow continuity was maintained between models. The flood hazard maps from the modelling were pieced together using GIS techniques, to provide flood depth and extent information across Europe to a consistent scale and standard. After discussing the methodological challenges, we shall present our flood hazard maps and, from extensive validation work, compare these against historical flow records and observed flood extents.
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.; Leung, L. R.
2013-11-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate changes. Integration of these processes into Earth system models (e.g., community land models (CLMs)), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) computational cost is prohibitively expensive to simulate biogeochemical processes in land models due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module, Next Generation BioGeoChemical Module (NGBGC), version 1.0, with a generic algorithm and reaction database so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter, and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into CLM. To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems. The method presented here could in theory be applied to simulate biogeochemical cycles in other Earth system models.
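As a sketch of the generic-algorithm-plus-reaction-database idea (not the NGBGC code itself), the example below assembles an ODE right-hand side from a short list of rate laws and stoichiometries, so that adding a process means adding an entry rather than rewriting the solver. Pool names, rate constants, and the forward-Euler integration are assumptions for illustration.

```python
import numpy as np

# Generic idea only: build an ODE right-hand side from a small "reaction
# database" instead of hand-coding each equation. Pools, rate laws and
# parameter values are illustrative, not the NGBGC reaction set.
pools = ["litter_C", "soil_C", "CO2"]
index = {p: i for i, p in enumerate(pools)}

reactions = [
    # (rate function of state vector y, stoichiometry {pool: coefficient})
    (lambda y: 0.02 * y[index["litter_C"]], {"litter_C": -1.0, "soil_C": 0.6, "CO2": 0.4}),
    (lambda y: 0.005 * y[index["soil_C"]],  {"soil_C": -1.0, "CO2": 1.0}),
]

def rhs(y):
    dydt = np.zeros(len(pools))
    for rate, stoich in reactions:
        r = rate(y)
        for pool, coeff in stoich.items():
            dydt[index[pool]] += coeff * r
    return dydt

# Simple forward-Euler integration of the assembled system.
y = np.array([100.0, 500.0, 0.0])   # initial carbon stocks (arbitrary units)
dt = 1.0
for _ in range(365):
    y = y + dt * rhs(y)
print(dict(zip(pools, np.round(y, 2))))
```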
Surface Modeling, Grid Generation, and Related Issues in Computational Fluid Dynamic (CFD) Solutions
NASA Technical Reports Server (NTRS)
Choo, Yung K. (Compiler)
1995-01-01
The NASA Steering Committee for Surface Modeling and Grid Generation (SMAGG) sponsored a workshop on surface modeling, grid generation, and related issues in Computational Fluid Dynamics (CFD) solutions at Lewis Research Center, Cleveland, Ohio, May 9-11, 1995. The workshop provided a forum to identify industry needs, strengths, and weaknesses of the five grid technologies (patched structured, overset structured, Cartesian, unstructured, and hybrid), and to exchange thoughts about where each technology will be in 2 to 5 years. The workshop also provided opportunities for engineers and scientists to present new methods, approaches, and applications in SMAGG for CFD. This Conference Publication (CP) consists of papers on industry overview, NASA overview, five grid technologies, new methods/ approaches/applications, and software systems.
Duan, Xian-Chun; Wang, Yong-Zhong; Zhang, Jun-Ru; Luo, Huan; Zhang, Heng; Xia, Lun-Zhu
2011-08-01
To establish a dynamics model for extracting the lipophilic components of Panax notoginseng with supercritical carbon dioxide (CO2). Based on the theory of counter-flow mass transfer and the molecular mass transfer between the material and the supercritical CO2 fluid, expressed through a differential mass-conservation equation, a dynamics model was established and computed, and its predictions were compared with the experimental process. A dynamics model was established for supercritical CO2 extraction of the lipophilic components of Panax notoginseng; the computed results of this model were basically consistent with the experimental process. The supercritical fluid extraction dynamics model established in this research explains the dissolution and mass-transfer mechanism of the lipophilic components of Panax notoginseng during extraction and agrees with the actual extraction process. This provides guidance for the industrial scale-up of supercritical CO2 fluid extraction.
Johnson, Christie
2016-01-01
This poster presentation presents a content modeling strategy using the SNOMED CT Observable Model to represent large amounts of detailed clinical data in a consistent and computable manner that can support multiple use cases. Lightweight Expression of Granular Objects (LEGOs) represent question/answer pairs on clinical data collection forms, where a question is modeled by a (usually) post-coordinated SNOMED CT expression. LEGOs transform electronic patient data into a normalized consumable, which means that the expressions can be treated as extensions of the SNOMED CT hierarchies for the purpose of performing subsumption queries and other analytics. Utilizing the LEGO approach for modeling clinical data obtained from a nursing admission assessment provides a foundation for data exchange across disparate information systems and software applications. Clinical data exchange of computable LEGO patient information enables the development of more refined data analytics, data storage and clinical decision support.
Garzón-Alvarado, Diego A
2013-01-21
This article develops a model of the appearance and location of the primary centers of ossification in the calvaria. The model uses a system of reaction-diffusion equations of two molecules (BMP and Noggin) whose behavior is of the activator-substrate type; its solution produces Turing patterns, which represent the primary ossification centers. Additionally, the model includes the level of cell maturation as a function of the location of mesenchymal cells. Thus the mature cells can become osteoblasts due to the action of BMP2. Therefore, with this model, we can have two frontal primary centers, two parietal, and one, two or more occipital centers. The location of these centers in the simplified computational model is highly consistent with those centers found at an embryonic level. Copyright © 2012 Elsevier Ltd. All rights reserved.
Atomistic simulations of carbon diffusion and segregation in liquid silicon
NASA Astrophysics Data System (ADS)
Luo, Jinping; Alateeqi, Abdullah; Liu, Lijun; Sinno, Talid
2017-12-01
The diffusivity of carbon atoms in liquid silicon and their equilibrium distribution between the silicon melt and crystal phases are key, but unfortunately not precisely known parameters for the global models of silicon solidification processes. In this study, we apply a suite of molecular simulation tools, driven by multiple empirical potential models, to compute diffusion and segregation coefficients of carbon at the silicon melting temperature. We generally find good consistency across the potential model predictions, although some exceptions are identified and discussed. We also find good agreement with the range of available experimental measurements of segregation coefficients. However, the carbon diffusion coefficients we compute are significantly lower than the values typically assumed in continuum models of impurity distribution. Overall, we show that currently available empirical potential models may be useful, at least semi-quantitatively, for studying carbon (and possibly other impurity) transport in silicon solidification, especially if a multi-model approach is taken.
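Diffusion coefficients from molecular dynamics are commonly estimated with the Einstein relation, D = MSD(t)/(6t) at long times; the sketch below applies that estimate to a synthetic random-walk trajectory standing in for an MD trajectory of a carbon atom. The time step, step size, and fitting window are assumptions for illustration, and a single trajectory gives only a noisy estimate.

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Einstein-relation estimate D = MSD(t) / (6 t) from one 3-D trajectory
    of shape (n_steps, 3); the slope is fitted over the second half of the
    record. A single trajectory gives only a noisy estimate."""
    disp = positions - positions[0]
    msd = (disp ** 2).sum(axis=1)
    t = np.arange(len(positions)) * dt
    half = len(t) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0

# Synthetic stand-in for an MD trajectory of a carbon atom in the melt:
# an uncorrelated random walk whose true diffusivity is known analytically.
rng = np.random.default_rng(3)
dt = 1.0e-3                                      # ps per step (assumed)
steps = rng.normal(0.0, 0.05, size=(20000, 3))   # Å per step (assumed)
traj = np.cumsum(steps, axis=0)

print("estimated D:", diffusion_coefficient(traj, dt), "Å^2/ps")
print("analytic D :", 0.05 ** 2 / (2.0 * dt), "Å^2/ps")
```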
A Roadmap for the Development of Applied Computational Psychiatry.
Paulus, Martin P; Huys, Quentin J M; Maia, Tiago V
2016-09-01
Computational psychiatry is a burgeoning field that utilizes mathematical approaches to investigate psychiatric disorders, derive quantitative predictions, and integrate data across multiple levels of description. Computational psychiatry has already led to many new insights into the neurobehavioral mechanisms that underlie several psychiatric disorders, but its usefulness from a clinical standpoint is only now starting to be considered. Examples of computational psychiatry are highlighted, and a phase-based pipeline for the development of clinical computational-psychiatry applications is proposed, similar to the phase-based pipeline used in drug development. It is proposed that each phase has unique endpoints and deliverables, which will be important milestones to move tasks, procedures, computational models, and algorithms from the laboratory to clinical practice. Application of computational approaches should be tested on healthy volunteers in Phase I, transitioned to target populations in Phase IB and Phase IIA, and thoroughly evaluated using randomized clinical trials in Phase IIB and Phase III. Successful completion of these phases should be the basis of determining whether computational models are useful tools for prognosis, diagnosis, or treatment of psychiatric patients. A new type of infrastructure will be necessary to implement the proposed pipeline. This infrastructure should consist of groups of investigators with diverse backgrounds collaborating to make computational psychiatry relevant for the clinic.
Computational Modeling of Multi-Scale Material Features in Cement Paste - An Overview
2015-05-25
and concrete ; though commonly used are one of the most complex in terms of material morphology and structure than most materials, for example...across the multiple scales are required. In this paper, recent work from our research group on the nano to continuum level modeling of cementitious...of our research work consisting of, • Molecular Dynamics (MD) modeling for the nano scale features of the cementitious material chemistry. • Micro
Dynamic computer model for the metallogenesis and tectonics of the Circum-North Pacific
Scotese, Christopher R.; Nokleberg, Warren J.; Monger, James W.H.; Norton, Ian O.; Parfenov, Leonid M.; Khanchuk, Alexander I.; Bundtzen, Thomas K.; Dawson, Kenneth M.; Eremin, Roman A.; Frolov, Yuri F.; Fujita, Kazuya; Goryachev, Nikolai A.; Pozdeev, Anany I.; Ratkin, Vladimir V.; Rodinov, Sergey M.; Rozenblum, Ilya S.; Scholl, David W.; Shpikerman, Vladimir I.; Sidorov, Anatoly A.; Stone, David B.
2001-01-01
The digital files on this report consist of a dynamic computer model of the metallogenesis and tectonics of the Circum-North Pacific, and background articles, figures, and maps. The tectonic part of the dynamic computer model is derived from a major analysis of the tectonic evolution of the Circum-North Pacific which is also contained in directory tectevol. The dynamic computer model and associated materials on this CD-ROM are part of a project on the major mineral deposits, metallogenesis, and tectonics of the Russian Far East, Alaska, and the Canadian Cordillera. The project provides critical information on bedrock geology and geophysics, tectonics, major metalliferous mineral resources, metallogenic patterns, and crustal origin and evolution of mineralizing systems for this region. The major scientific goals and benefits of the project are to: (1) provide a comprehensive international data base on the mineral resources of the region that is the first, extensive knowledge available in English; (2) provide major new interpretations of the origin and crustal evolution of mineralizing systems and their host rocks, thereby enabling enhanced, broad-scale tectonic reconstructions and interpretations; and (3) promote trade and scientific and technical exchanges between North America and Eastern Asia.
Nivala, Michael; de Lange, Enno; Rovetti, Robert; Qu, Zhilin
2012-01-01
Intracellular calcium (Ca) cycling dynamics in cardiac myocytes is regulated by a complex network of spatially distributed organelles, such as sarcoplasmic reticulum (SR), mitochondria, and myofibrils. In this study, we present a mathematical model of intracellular Ca cycling and numerical and computational methods for computer simulations. The model consists of a coupled Ca release unit (CRU) network, which includes a SR domain and a myoplasm domain. Each CRU contains 10 L-type Ca channels and 100 ryanodine receptor channels, with individual channels simulated stochastically using a variant of Gillespie’s method, modified here to handle time-dependent transition rates. Both the SR domain and the myoplasm domain in each CRU are modeled by 5 × 5 × 5 voxels to maintain proper Ca diffusion. Advanced numerical algorithms implemented on graphical processing units were used for fast computational simulations. For a myocyte containing 100 × 20 × 10 CRUs, a 1-s heart time simulation takes about 10 min of machine time on a single NVIDIA Tesla C2050. Examples of simulated Ca cycling dynamics, such as Ca sparks, Ca waves, and Ca alternans, are shown. PMID:22586402
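For readers unfamiliar with Gillespie-type simulation, the sketch below shows the standard (constant-rate) stochastic simulation algorithm for a population of two-state channels; the time-dependent-rate variant used in the paper requires a further modification that is not shown. Channel counts and rate constants are assumptions for illustration.

```python
import numpy as np

def gillespie_channel(k_open, k_close, t_end, n_channels=100, seed=0):
    """Standard Gillespie SSA for a population of two-state channels
    (closed <-> open) with constant rates. The paper uses a variant that
    also handles time-dependent rates; that extension is not shown here."""
    rng = np.random.default_rng(seed)
    t, n_open = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a_open = k_open * (n_channels - n_open)   # propensity: a closed channel opens
        a_close = k_close * n_open                # propensity: an open channel closes
        a_total = a_open + a_close
        t += rng.exponential(1.0 / a_total)       # waiting time to the next event
        if rng.random() < a_open / a_total:
            n_open += 1
        else:
            n_open -= 1
        times.append(t)
        counts.append(n_open)
    return np.array(times), np.array(counts)

# Illustrative rates only (ms^-1); the steady state should hover near
# n_channels * k_open / (k_open + k_close) = 20 open channels.
times, counts = gillespie_channel(k_open=0.5, k_close=2.0, t_end=50.0)
print("mean open channels:", counts[len(counts) // 2:].mean())
```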
NASA Technical Reports Server (NTRS)
Koeksal, Adnan; Trew, Robert J.; Kauffman, J. Frank
1992-01-01
A Moment Method Model for the radiation pattern characterization of single Linearly Tapered Slot Antennas (LTSA) in air or on a dielectric substrate is developed. This characterization consists of: (1) finding the radiated far-fields of the antenna; (2) determining the E-Plane and H-Plane beamwidths and sidelobe levels; and (3) determining the D-Plane beamwidth and cross polarization levels, as antenna parameters length, height, taper angle, substrate thickness, and the relative substrate permittivity vary. The LTSA geometry does not lend itself to analytical solution with the given parameter ranges. Therefore, a computer modeling scheme and a code are necessary to analyze the problem. This necessity imposes some further objectives or requirements on the solution method (modeling) and tool (computer code). These may be listed as follows: (1) a good approximation to the real antenna geometry; and (2) feasible computer storage and time requirements. According to these requirements, the work is concentrated on the development of efficient modeling schemes for these type of problems and on reducing the central processing unit (CPU) time required from the computer code. A Method of Moments (MoM) code is developed for the analysis of LTSA's within the parameter ranges given.
Calculus domains modelled using an original bool algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra that uses solid and hollow polygons. The general calculus relations for the geometrical characteristics widely used in mechanical engineering are tested on several shapes of the calculus domain in order to draw conclusions about the most effective ways to discretize the domain. The paper also compares the results of several commercial CAD software applications that can compute these geometrical characteristics, from which interesting conclusions are drawn. The tests also targeted the accuracy of the results versus the number of nodes on the curved boundary of the cross section. The study required the development of original software consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the very large numbers produced by the spline approximation, which in that case required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
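The solid/hollow polygon idea can be sketched with the standard Green's-theorem (shoelace) formulas, where hollow polygons simply subtract their contributions. The rectangle-with-a-hole example and all function names below are hypothetical illustrations, not part of the paper's 1700-line code.

```python
import numpy as np

def polygon_properties(pts):
    """Area, centroid and second moments (about the x and y axes) of a
    simple polygon with counter-clockwise vertices, via the shoelace /
    Green's theorem formulas."""
    x, y = np.asarray(pts, float).T
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = 0.5 * cross.sum()
    cx = (x + xn) @ cross / (6.0 * area)
    cy = (y + yn) @ cross / (6.0 * area)
    ixx = ((y**2 + y*yn + yn**2) @ cross) / 12.0
    iyy = ((x**2 + x*xn + xn**2) @ cross) / 12.0
    return area, (cx, cy), ixx, iyy

def composite(solids, hollows):
    """Combine solid (added) and hollow (subtracted) polygons."""
    a_tot = sx = sy = ixx = iyy = 0.0
    for sign, group in ((+1, solids), (-1, hollows)):
        for p in group:
            a, (cx, cy), jx, jy = polygon_properties(p)
            a_tot += sign * a
            sx += sign * a * cx
            sy += sign * a * cy
            ixx += sign * jx
            iyy += sign * jy
    return a_tot, (sx / a_tot, sy / a_tot), ixx, iyy

if __name__ == "__main__":
    outer = [(0, 0), (4, 0), (4, 2), (0, 2)]          # solid rectangle
    hole = [(1, 0.5), (2, 0.5), (2, 1.5), (1, 1.5)]   # hollow square
    area, centroid, ixx, iyy = composite([outer], [hole])
    print(f"area={area:.3f}  centroid=({centroid[0]:.3f}, {centroid[1]:.3f})  Ixx={ixx:.3f}  Iyy={iyy:.3f}")
```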
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
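A flavor of why a quasi-analytical update is attractive can be shown on a generic single CSTR with first-order decay: the exponential step below is exact when inflow and rate are held constant over the step and remains stable and positive at any step size, while explicit Euler diverges for the same daily-scale step. This is only a schematic analogue of the idea, not the paper's scheme, and the reactor parameters are invented.

```python
import numpy as np

def euler_step(c, dt, q, v, c_in, k):
    """Explicit Euler step for dC/dt = (Q/V)*(Cin - C) - k*C."""
    return c + dt * ((q / v) * (c_in - c) - k * c)

def exact_step(c, dt, q, v, c_in, k):
    """Exponential (quasi-analytical) step: exact if Q, Cin and k are held
    constant over the step; unconditionally stable and positive."""
    b = q / v + k
    c_ss = (q / v) * c_in / b                 # steady-state concentration
    return c_ss + (c - c_ss) * np.exp(-b * dt)

if __name__ == "__main__":
    q, v, c_in, k = 2.0, 1.0, 5.0, 3.0        # hypothetical reactor data (per-day rates)
    dt, c0 = 0.5, 20.0                        # deliberately coarse half-day step
    c_ss = (q / v) * c_in / (q / v + k)
    c_e = c_x = c0
    for _ in range(10):
        c_e = euler_step(c_e, dt, q, v, c_in, k)
        c_x = exact_step(c_x, dt, q, v, c_in, k)
    print(f"Euler: {c_e:.1f}   exponential: {c_x:.3f}   true steady state: {c_ss:.3f}")
```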
NASA Astrophysics Data System (ADS)
Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho
2013-09-01
Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, large gaps remain between the current state of research and predictive modeling because of the inherent complexity of plasma processes. In an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behavior, a polymer-layer-based surface kinetic model was proposed that accounts for simultaneous polymer deposition and oxide etching. The plasma-surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple-level-set-based moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were accelerated drastically by GPU-based numerical computation, enabling real-time computation. Finally, we demonstrated that the surface kinetic model could be coupled successfully to 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
Orientation-selective aVLSI spiking neurons.
Liu, S C; Kramer, J; Indiveri, G; Delbrück, T; Burg, T; Douglas, R
2001-01-01
We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
CFD Based Design of a Filming Injector for N+3 Combustors
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Mongia, Hukam; Lee, Phil
2016-01-01
An effort was undertaken to perform CFD analysis of fluid flow in Lean-Direct Injection (LDI) combustors with axial swirl-venturi elements for next-generation LDI-3 combustor design. The National Combustion Code (NCC) was used to perform non-reacting and two-phase reacting flow computations for a newly designed pre-filming LDI-3 fuel injector, in single-injector and five-injector array configurations. All computations were performed with a consistent approach to mesh optimization, spray modeling, ignition and kinetics modeling. Computational predictions of the aerodynamics of the single injector were used to arrive at an optimized main-injector design that meets effective area and fuel-air mixing criteria. Emissions (EINOx) characteristics were predicted for a medium-power engine cycle condition, and will be compared with data when experimental measurements become available. The use of a PDF-like turbulence-chemistry interaction model with NCC's Time-Filtered Navier-Stokes (TFNS) solver is shown to have a significant impact on the CFD results when compared with a laminar-chemistry TFNS approach for the five-injector computations.
Automated analysis of biological oscillator models using mode decomposition.
Konopka, Tomasz
2011-04-01
Oscillating signals produced by biological systems have shapes, described by their Fourier spectra, that can potentially reveal the mechanisms that generate them. Extracting this information from measured signals is interesting for the validation of theoretical models, discovery and classification of interaction types, and for optimal experiment design. An automated workflow is described for the analysis of oscillating signals. A software package is developed to match signal shapes to hundreds of a priori viable model structures defined by a class of first-order differential equations. The package computes parameter values for each model by exploiting the mode decomposition of oscillating signals and formulating the matching problem in terms of systems of simultaneous polynomial equations. On the basis of the computed parameter values, the software returns a list of models consistent with the data. In validation tests with synthetic datasets, it not only shortlists those model structures used to generate the data but also shows that excellent fits can sometimes be achieved with alternative equations. The listing of all consistent equations is indicative of how further invalidation might be achieved with additional information. When applied to data from a microarray experiment on mice, the procedure finds several candidate model structures to describe interactions related to the circadian rhythm. This shows that experimental data on oscillators is indeed rich in information about gene regulation mechanisms. The software package is available at http://babylone.ulb.ac.be/autoosc/.
Crovelli, R.A.
1988-01-01
The geologic appraisal model that is selected for a petroleum resource assessment depends upon the purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study area or for cross-checking resource estimates. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. © 1988 International Association for Mathematical Geology.
Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.
1983-05-01
As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
Neural dynamics in reconfigurable silicon.
Basu, A; Ramakrishnan, S; Petre, C; Koziol, S; Brink, S; Hasler, P E
2010-10-01
A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations and integrate-and-fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. The chip provides a platform not only for simulating detailed neuron dynamics but also for interfacing the same circuits with actual cells in applications such as a dynamic clamp. There are 28 computational analog blocks (CABs), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. There are four other CABs which have programmable bias generators. The programmability is achieved using floating-gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50 000 possible 9-b accurate synapses in 9 mm².
Versino, Daniele; Bronkhorst, Curt Allan
2018-01-31
The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pores nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. Here, the results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.
NASA Technical Reports Server (NTRS)
Hawley, Suzanne L.; Fisher, George H.
1993-01-01
Solar flare model atmospheres computed under the assumption of energetic equilibrium in the chromosphere are presented. The models use a static, one-dimensional plane parallel geometry and are designed within a physically self-consistent coronal loop. Assumed flare heating mechanisms include collisions from a flux of non-thermal electrons and x-ray heating of the chromosphere by the corona. The heating by energetic electrons accounts explicitly for variations of the ionized fraction with depth in the atmosphere. X-ray heating of the chromosphere by the corona incorporates a flare loop geometry by approximating distant portions of the loop with a series of point sources, while treating the loop leg closest to the chromospheric footpoint in the plane-parallel approximation. Coronal flare heating leads to increased heat conduction, chromospheric evaporation and subsequent changes in coronal pressure; these effects are included self-consistently in the models. Cooling in the chromosphere is computed in detail for the important optically thick HI, CaII and MgII transitions using the non-LTE prescription in the program MULTI. Hydrogen ionization rates from x-ray photo-ionization and collisional ionization by non-thermal electrons are included explicitly in the rate equations. The models are computed in the 'impulsive' and 'equilibrium' limits, and in a set of intermediate 'evolving' states. The impulsive atmospheres have the density distribution frozen in pre-flare configuration, while the equilibrium models assume the entire atmosphere is in hydrostatic and energetic equilibrium. The evolving atmospheres represent intermediate stages where hydrostatic equilibrium has been established in the chromosphere and corona, but the corona is not yet in energetic equilibrium with the flare heating source. Thus, for example, chromospheric evaporation is still in the process of occurring.
NASA Astrophysics Data System (ADS)
Wolfs, Vincent; Willems, Patrick
2013-10-01
Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
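The overtopping treated as weir flow can be illustrated with the standard free-overflow rectangular weir relation Q = (2/3)·Cd·B·√(2g)·H^(3/2). The crest level, weir length, and discharge coefficient below are hypothetical, and the sketch ignores submergence effects, which a full river-floodplain model would have to handle.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def weir_overflow(h_river, h_crest, width, cd=0.6):
    """Free overflow over a rectangular weir (embankment crest):
    Q = (2/3)*Cd*B*sqrt(2g)*H^1.5, with H the head above the crest.
    Returns 0 when the river level is below the crest."""
    head = max(h_river - h_crest, 0.0)
    return (2.0 / 3.0) * cd * width * np.sqrt(2.0 * G) * head ** 1.5

if __name__ == "__main__":
    # hypothetical embankment: crest at 6.2 m, 35 m long overflow section
    for level in (6.0, 6.3, 6.8):
        q = weir_overflow(level, h_crest=6.2, width=35.0)
        print(f"river level {level:.1f} m -> overflow {q:.2f} m^3/s")
```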
Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian
2009-01-01
Background: Several authors have suggested that gravitational forces are centrally represented in the brain for the planning, control and sensorimotor prediction of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggested that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings: This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions without calculating an explicit inverse computation. Using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonistic McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movement to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance: This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420
A novel medical image data-based multi-physics simulation platform for computational life sciences.
Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels
2013-04-06
Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.
BioLab: Using Yeast Fermentation as a Model for the Scientific Method.
ERIC Educational Resources Information Center
Pigage, Helen K.; Neilson, Milton C.; Greeder, Michele M.
This document presents a science experiment demonstrating the scientific method. The experiment consists of testing the fermentation capabilities of yeasts under different circumstances. The experiment is supported with computer software called BioLab which demonstrates yeast's response to different environments. (YDS)
Numerical Modeling of Saturated Boiling in a Heated Tube
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Hartwig, Jason
2017-01-01
This paper describes a mathematical formulation and numerical solution of boiling in a heated tube. The mathematical formulation involves a discretization of the tube into a flow network consisting of fluid nodes and branches and a thermal network consisting of solid nodes and conductors. In the fluid network, the mass, momentum and energy conservation equations are solved and in the thermal network, the energy conservation equation of solids is solved. A pressure-based, finite-volume formulation has been used to solve the equations in the fluid network. The system of equations is solved by a hybrid numerical scheme which solves the mass and momentum conservation equations by a simultaneous Newton-Raphson method and the energy conservation equation by a successive substitution method. The fluid network and thermal network are coupled through heat transfer between the solid and fluid nodes which is computed by Chen's correlation of saturated boiling heat transfer. The computer model is developed using the Generalized Fluid System Simulation Program and the numerical predictions are compared with test data.
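The hybrid solution strategy can be sketched for a single branch: Newton-Raphson on the nonlinear momentum balance for the mass flow, followed by successive substitution on the energy balance with a temperature-dependent heat-transfer coefficient. The resistance value and the placeholder h(T) correlation below are assumptions for illustration; this is not GFSSP code and does not implement Chen's boiling correlation.

```python
import numpy as np

def solve_mass_flow(p_in, p_out, k_res, m0=1.0, tol=1e-10):
    """Newton-Raphson on the momentum equation  p_in - p_out = K*m*|m|."""
    m = m0
    for _ in range(50):
        f = p_in - p_out - k_res * m * abs(m)
        df = -2.0 * k_res * abs(m)
        m_new = m - f / df
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

def solve_temperature(m, t_in, t_wall, area, cp=4186.0, tol=1e-8):
    """Successive substitution on the energy balance
    m*cp*(T - T_in) = h(T)*A*(T_wall - T), with a hypothetical
    temperature-dependent heat-transfer coefficient h(T)."""
    h = lambda t: 500.0 + 5.0 * (t - 300.0)     # placeholder correlation, W/m^2/K
    t = t_in
    for _ in range(200):
        ha = h(t) * area
        t_new = (m * cp * t_in + ha * t_wall) / (m * cp + ha)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

if __name__ == "__main__":
    m = solve_mass_flow(p_in=2.0e5, p_out=1.0e5, k_res=4.0e4)
    t = solve_temperature(m, t_in=300.0, t_wall=400.0, area=0.05)
    print(f"mass flow = {m:.4f} kg/s, fluid node temperature = {t:.2f} K")
```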
A symbiotic approach to fluid equations and non-linear flux-driven simulations of plasma dynamics
NASA Astrophysics Data System (ADS)
Halpern, Federico
2017-10-01
The fluid framework is ubiquitous in studies of plasma transport and stability. Typical forms of the fluid equations are motivated by analytical work dating from several decades ago, before computer simulations were indispensable, and can therefore be suboptimal for numerical computation. We demonstrate a new first-principles approach to obtaining manifestly consistent, skew-symmetric fluid models, ensuring internal consistency and conservation properties even in discrete form. Mass, kinetic, and internal energy become quadratic (and always positive) invariants of the system. The model lends itself to a robust, straightforward discretization scheme with inherent non-linear stability. A simpler, drift-ordered form of the equations is obtained, and first results of their numerical implementation as a binary framework for bulk-fluid global plasma simulations are demonstrated. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, Theory Program, under Award No. DE-FG02-95ER54309.
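The payoff of a skew-symmetric spatial operator can be demonstrated on 1-D periodic advection: a centered difference operator is exactly skew-symmetric, and an implicit-midpoint (Cayley) step then preserves the discrete quadratic invariant to round-off. This toy problem and its parameters are our own illustration of the general principle, not the plasma model of the abstract.

```python
import numpy as np

def skew_advection_matrix(n, c, dx):
    """Centered-difference advection operator A u = -c * du/dx on a
    periodic grid; the matrix is exactly skew-symmetric (A^T = -A)."""
    a = np.zeros((n, n))
    for i in range(n):
        a[i, (i + 1) % n] = -c / (2 * dx)
        a[i, (i - 1) % n] = +c / (2 * dx)
    return a

if __name__ == "__main__":
    n, c = 64, 1.0
    dx = 1.0 / n
    x = np.arange(n) * dx
    u = np.exp(-100 * (x - 0.5) ** 2)           # initial pulse
    a = skew_advection_matrix(n, c, dx)
    dt = 0.5 * dx / c
    # implicit midpoint (Cayley) step: norm-preserving for skew-symmetric A
    step = np.linalg.solve(np.eye(n) - 0.5 * dt * a, np.eye(n) + 0.5 * dt * a)
    e0 = np.sum(u ** 2)
    for _ in range(2000):
        u = step @ u
    print(f"relative change in the quadratic invariant: {abs(np.sum(u**2) - e0) / e0:.2e}")
```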
Terada, Takatoshi; Tagami, Manabu; Ohtsubo, Toshiro; Iwao, Yasunori; Noguchi, Shuji; Itai, Shigeru
2016-07-25
In this report, a new solventless microencapsulation method, in which polyurethane (PU) is synthesized from polyol and isocyanate during an agglomeration process in a high-speed mixing apparatus, was developed. Clothianidin (CTD), which is a neonicotinoid insecticide and highly effective against a wide variety of insect pests, was used as the model compound. The microencapsulated samples covered with PU (CTD microspheres) had a median diameter of <75 μm and sustained-release properties. The CTD microspheres were analyzed by synchrotron X-ray computed tomography measurements. Multiple cores of CTD and other solid excipient were dispersed in PU. Although voids appeared in the CTD microspheres after CTD release, the spherical shape of the microspheres remained stable and no change in their framework was observed. The experimental release data were highly consistent with the Baker-Lonsdale model derived from drug release of spherical monolithic dispersions and consistent with the computed tomography measurements. Copyright © 2016 Elsevier B.V. All rights reserved.
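For reference, the Baker-Lonsdale model for release from spherical monolithic dispersions relates the release fraction F = Mt/M∞ to time through (3/2)[1 − (1 − F)^(2/3)] − F = kt. The sketch below fits k to release data by zero-intercept least squares; the time points and fractions are hypothetical, not the CTD data of the paper.

```python
import numpy as np

def baker_lonsdale_lhs(f):
    """Left-hand side of the Baker-Lonsdale model for spherical
    monolithic dispersions: (3/2)*[1 - (1 - F)^(2/3)] - F = k*t."""
    return 1.5 * (1.0 - (1.0 - f) ** (2.0 / 3.0)) - f

def fit_release_constant(t, f):
    """Least-squares estimate of k from times t and release fractions F,
    using the zero-intercept linear model y = k*t with y = LHS(F)."""
    y = baker_lonsdale_lhs(np.asarray(f, float))
    t = np.asarray(t, float)
    return float(t @ y / (t @ t))

if __name__ == "__main__":
    # hypothetical cumulative release data (days, fraction released)
    t = np.array([1, 3, 7, 14, 21, 28], float)
    f = np.array([0.08, 0.20, 0.38, 0.58, 0.70, 0.78])
    k = fit_release_constant(t, f)
    resid = baker_lonsdale_lhs(f) - k * t
    print(f"k = {k:.4e} per day, RMS residual = {np.sqrt(np.mean(resid**2)):.3e}")
```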
An autonomous molecular computer for logical control of gene expression
Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud
2013-01-01
Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems1–7. Recently, simple molecular-scale autonomous programmable computers were demonstrated8–15 allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for ‘logical’ control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton12–17; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes18–22 associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug. PMID:15116117
Meshless analysis of shear deformable shells: the linear model
NASA Astrophysics Data System (ADS)
Costa, Jorge C.; Tiago, Carlos M.; Pimenta, Paulo M.
2013-10-01
This work develops a kinematically linear shell model departing from a consistent nonlinear theory. The initial geometry is mapped from a flat reference configuration by a stress-free finite deformation, after which, the actual shell motion takes place. The model maintains the features of a complete stress-resultant theory with Reissner-Mindlin kinematics based on an inextensible director. A hybrid displacement variational formulation is presented, where the domain displacements and kinematic boundary reactions are independently approximated. The resort to a flat reference configuration allows the discretization using 2-D Multiple Fixed Least-Squares (MFLS) on the domain. The consistent definition of stress resultants and consequent plane stress assumption led to a neat formulation for the analysis of shells. The consistent linear approximation, combined with MFLS, made possible efficient computations with a desired continuity degree, leading to smooth results for the displacement, strain and stress fields, as shown by several numerical examples.
NASA Astrophysics Data System (ADS)
Lin, S. J.
2015-12-01
The NOAA/Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable resolution capabilities that can be used for severe weather predictions (e.g., tornado outbreak events and cat-5 hurricanes) and ultra-high-resolution (1-km) regional climate simulations within a consistent global modeling framework. The foundation of this flexible regional-global modeling system is the non-hydrostatic extension of the vertically Lagrangian dynamical core (Lin 2004, Monthly Weather Review) known in the community as FV3 (finite-volume on the cubed-sphere). Because of its flexibility and computational efficiency, the FV3 is one of the final candidates of NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched (single) grid capability, a two-way (regional-global) multiple nested grid capability, and the combination of the stretched and two-way nests, so as to make convection-resolving regional climate simulation within a consistent global modeling system feasible using today's High Performance Computing System. One of our main scientific goals is to enable simulations of high impact weather phenomena (such as tornadoes, thunderstorms, category-5 hurricanes) within an IPCC-class climate modeling system previously regarded as impossible. In this presentation I will demonstrate that it is computationally feasible to simulate not only super-cell thunderstorms, but also the subsequent genesis of tornadoes using a global model that was originally designed for century-long climate simulations. As a unified weather-climate modeling system, we evaluated the performance of the model with horizontal resolution ranging from 1 km to as low as 200 km. In particular, for downscaling studies, we have developed various tests to ensure that the large-scale circulation within the global variable resolution system is well simulated while at the same time the small-scale can be accurately captured within the targeted high resolution region.
A New Biogeochemical Computational Framework Integrated within the Community Land Model
NASA Astrophysics Data System (ADS)
Fang, Y.; Li, H.; Liu, C.; Huang, M.; Leung, L.
2012-12-01
Terrestrial biogeochemical processes, particularly carbon cycle dynamics, have been shown to significantly influence regional and global climate changes. Modeling terrestrial biogeochemical processes within the land component of Earth System Models such as the Community Land model (CLM), however, faces three major challenges: 1) extensive efforts in modifying modeling structures and rewriting computer programs to incorporate biogeochemical processes with increasing complexity, 2) expensive computational cost to solve the governing equations due to numerical stiffness inherited from large variations in the rates of biogeochemical processes, and 3) lack of an efficient framework to systematically evaluate various mathematical representations of biogeochemical processes. To address these challenges, we introduce a new computational framework to incorporate biogeochemical processes into CLM, which consists of a new biogeochemical module with a generic algorithm and reaction database. New and updated biogeochemical processes can be incorporated into CLM without significant code modification. To address the stiffness issue, algorithms and criteria will be developed to identify fast processes, which will be replaced with algebraic equations and decoupled from slow processes. This framework can serve as a generic and user-friendly platform to test out different mechanistic process representations and datasets and gain new insight on the behavior of the terrestrial ecosystems in response to climate change in a systematic way.
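The strategy of decoupling fast processes by replacing them with algebraic equations can be illustrated on a toy two-pool system with fast sorption exchange and slow decomposition: once the fast exchange is expressed as an algebraic partition, a daily time step suffices. The rate constants and pool names below are hypothetical, not CLM quantities.

```python
import numpy as np

def full_model(cd, cs, dt, k_f, k_b, k_slow):
    """Explicit Euler on the full stiff system (requires a tiny step)."""
    dcd = -k_f * cd + k_b * cs - k_slow * cd
    dcs = +k_f * cd - k_b * cs
    return cd + dt * dcd, cs + dt * dcs

def reduced_model(total, dt, k_f, k_b, k_slow):
    """Fast exchange replaced by the algebraic partition cs = (kf/kb)*cd;
    only the slow decomposition is integrated in time."""
    cd = total / (1.0 + k_f / k_b)
    return total - dt * k_slow * cd

if __name__ == "__main__":
    k_f, k_b, k_slow = 100.0, 50.0, 0.01      # hypothetical rates (1/day)
    t_end = 50.0
    # stiff reference run: step limited by the fast exchange (~1/(kf+kb) days)
    cd, cs, dt = 1.0, 0.0, 1e-3
    for _ in range(int(t_end / dt)):
        cd, cs = full_model(cd, cs, dt, k_f, k_b, k_slow)
    # reduced run with a daily step
    total, dt_big = 1.0, 1.0
    for _ in range(int(t_end / dt_big)):
        total = reduced_model(total, dt_big, k_f, k_b, k_slow)
    print(f"full model total = {cd + cs:.4f}, reduced model total = {total:.4f}")
```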
NASA Technical Reports Server (NTRS)
Neitzel, G. P.
1993-01-01
This project was concerned with the determination of conditions of guaranteed stability and instability for thermocapillary convection in a model of the float-zone crystal-growth process. This model, referred to as the half-zone, was studied extensively, both experimentally and theoretically. Our own earlier research determined, using energy-stability theory, sufficient conditions for stability to axisymmetric disturbances. Nearly all results computed were for the case of a liquid with Prandtl Number Pr = 1. Attempts to compute cases for higher Prandtl numbers to allow comparison with the experimental results of other researchers were unsuccessful, but indicated that the condition guaranteeing stability against axisymmetric disturbances would be a value of the Marangoni number (Ma), significantly higher than that at which oscillatory convection was observed experimentally. Thus, additional results were needed to round out the stability picture for this model problem. The research performed under this grant consisted of the following: (1) computation of energy-stability limits for non-axisymmetric disturbances; (2) computation of linear-stability limits for axisymmetric and non-axisymmetric disturbances; (3) numerical simulation of the basic state for half- and full-zones with a deformable free surface; and (4) incorporation of radiation heat transfer into a model energy-stability problem. Each of these is summarized briefly below.
Perspective: Stochastic magnetic devices for cognitive computing
NASA Astrophysics Data System (ADS)
Roy, Kaushik; Sengupta, Abhronil; Shim, Yong
2018-06-01
Stochastic switching of nanomagnets can potentially enable probabilistic cognitive hardware consisting of noisy neural and synaptic components. Furthermore, computational paradigms inspired by the Ising computing model require stochasticity for achieving near-optimality in solutions to various types of combinatorial optimization problems such as the Graph Coloring Problem or the Travelling Salesman Problem. Achieving optimal solutions to such problems is computationally exhaustive and requires natural annealing to arrive at near-optimal solutions. Stochastic switching of devices also finds use in applications involving Deep Belief Networks and Bayesian Inference. In this article, we provide a multi-disciplinary perspective across the stack of devices, circuits, and algorithms to illustrate how the stochastic switching dynamics of spintronic devices in the presence of thermal noise can provide a direct mapping to the computational units of such probabilistic intelligent systems.
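The mapping from stochastic switching to combinatorial optimization can be mimicked in software with Metropolis-style probabilistic spin flips under an annealed temperature, here applied to a small MAX-CUT instance expressed as an Ising energy. The graph, annealing schedule, and seed are arbitrary illustrative choices, not a device-level model of the spintronic hardware.

```python
import numpy as np

def anneal_maxcut(adj, n_steps=20000, t_start=2.0, t_end=0.01, seed=1):
    """Stochastic single-spin flips (Metropolis) on the Ising energy
    E = sum_{i<j} J_ij * s_i * s_j with J = adjacency weights, so that
    minimizing E maximizes the cut between the +1 and -1 partitions."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    s = rng.choice([-1, 1], size=n)
    for step in range(n_steps):
        temp = t_start * (t_end / t_start) ** (step / (n_steps - 1))
        i = rng.integers(n)
        d_e = -2.0 * s[i] * (adj[i] @ s)        # energy change of flipping spin i
        if d_e <= 0 or rng.random() < np.exp(-d_e / temp):
            s[i] = -s[i]
    cut = sum(adj[i, j] for i in range(n) for j in range(i + 1, n) if s[i] != s[j])
    return s, cut

if __name__ == "__main__":
    # hypothetical 6-node graph (symmetric 0/1 adjacency matrix)
    adj = np.array([[0, 1, 1, 0, 0, 1],
                    [1, 0, 1, 1, 0, 0],
                    [1, 1, 0, 1, 1, 0],
                    [0, 1, 1, 0, 1, 1],
                    [0, 0, 1, 1, 0, 1],
                    [1, 0, 0, 1, 1, 0]], float)
    spins, cut = anneal_maxcut(adj)
    print(f"partition = {spins}, cut weight = {cut}")
```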
Computational Methodologies for Real-Space Structural Refinement of Large Macromolecular Complexes
Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus
2017-01-01
The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875
ERIC Educational Resources Information Center
Goclowski, John C.; Baran, H. Anthony
This report gives a managerial overview of the Life Cycle Cost Impact Modeling System (LCCIM), which was designed to provide the Air Force with an in-house capability of assessing the life cycle cost impact of weapon system design alternatives. LCCIM consists of computer programs and the analyses which the user must perform to generate input data.…
Studies on Vapor Adsorption Systems
NASA Technical Reports Server (NTRS)
Shamsundar, N.; Ramotowski, M.
1998-01-01
The project consisted of performing experiments on single and dual bed vapor adsorption systems, thermodynamic cycle optimization, and thermal modeling. The work was described in a technical paper that appeared in conference proceedings and a Master's thesis, which were previously submitted to NASA. The present report describes some additional thermal modeling work done subsequently, and includes listings of computer codes developed during the project. Recommendations for future work are provided.
2013-04-01
Forces can be computed at specific angular positions, and geometrical parameters can be evaluated. Much higher resolution models are required, along...composition engines (C#, C++, Python, Java ) Desert operates on the CyPhy model, converting from a design space alternative structure to a set of design...consists of scripts to execute dymola, post-processing of results to create metrics, and general management of the job sequence. An earlier version created
Using Bayesian Networks for Candidate Generation in Consistency-based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Mengshoel, Ole
2008-01-01
Consistency-based diagnosis relies heavily on the assumption that discrepancies between model predictions and sensor observations can be detected accurately. When sources of uncertainty such as sensor noise and model abstraction exist, robust schemes have to be designed to make a binary decision on whether predictions are consistent with observations. This risks the occurrence of false alarms and missed alarms when an erroneous decision is made. Moreover, when multiple sensors (with differing sensing properties) are available, the degree of match between predictions and observations can be used to guide the search for fault candidates. In this paper we propose a novel approach to handle this problem using Bayesian networks. In the consistency-based diagnosis formulation, automatically generated Bayesian networks are used to encode a probabilistic measure of fit between predictions and observations. A Bayesian network inference algorithm is used to compute the most probable fault candidates.
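A stripped-down version of scoring fault candidates by a probabilistic measure of fit is shown below: each candidate predicts a residual signature, sensors are treated as independent Gaussian measurements, and Bayes' rule ranks the candidates. The signatures, priors, and noise level are hypothetical, and a real implementation would use proper Bayesian network inference rather than this naive-Bayes shortcut.

```python
import numpy as np

def rank_fault_candidates(residuals, signatures, sigma, priors):
    """Posterior probability of each fault candidate given sensor residuals
    (observation minus model prediction). Each candidate predicts a mean
    residual pattern (its 'signature'); sensor noise is Gaussian with
    standard deviation sigma, and sensors are treated as independent."""
    residuals = np.asarray(residuals, float)
    log_post = {}
    for fault, sig in signatures.items():
        ll = -0.5 * np.sum(((residuals - np.asarray(sig)) / sigma) ** 2)
        log_post[fault] = np.log(priors[fault]) + ll
    # normalize in log space for numerical stability
    m = max(log_post.values())
    z = sum(np.exp(v - m) for v in log_post.values())
    return {f: float(np.exp(v - m) / z) for f, v in log_post.items()}

if __name__ == "__main__":
    # hypothetical 3-sensor system with three fault candidates plus "nominal"
    signatures = {"nominal":     [0.0, 0.0, 0.0],
                  "valve_stuck": [1.5, 0.0, -0.5],
                  "sensor_bias": [0.0, 2.0, 0.0],
                  "leak":        [0.8, 0.8, 0.8]}
    priors = {"nominal": 0.9, "valve_stuck": 0.04, "sensor_bias": 0.03, "leak": 0.03}
    posterior = rank_fault_candidates([1.3, 0.1, -0.4], signatures, sigma=0.3, priors=priors)
    for fault, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"{fault:12s} {p:.3f}")
```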
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korneeva, Anna; Shaydurov, Vladimir
In the paper, data analysis is considered for thin-film thermoresistors coated onto a radio-electronic printed circuit board in order to determine possible zones of overheating. The mathematical model consists of an underdetermined system of linear algebraic equations with an infinite set of solutions. To compute a more realistic solution, two additional conditions are used: smoothness of the solution and positivity of the temperature increase during overheating. Computational experiments demonstrate that an overheating zone is determined with a tolerable accuracy in the temperature within it.
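One common way to realize the two extra conditions, smoothness and a non-negative temperature increase, is Tikhonov-regularized non-negative least squares, sketched below with SciPy's lsq_linear. The synthetic sensor matrix, hot-spot profile, and regularization weight are invented for illustration and are not the paper's formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

def smooth_nonneg_solution(a, b, lam=1.0):
    """Solve the underdetermined system A*x ~ b by minimizing
    ||A x - b||^2 + lam*||D x||^2 subject to x >= 0, where D is a
    second-difference operator enforcing smoothness of the solution."""
    n = a.shape[1]
    d = np.diff(np.eye(n), 2, axis=0)            # (n-2) x n second differences
    a_aug = np.vstack([a, np.sqrt(lam) * d])
    b_aug = np.concatenate([b, np.zeros(d.shape[0])])
    return lsq_linear(a_aug, b_aug, bounds=(0.0, np.inf)).x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 40, 10                                # 40 unknown temperatures, 10 sensors
    # synthetic sensor matrix: each thermoresistor averages a local patch
    a = np.zeros((m, n))
    for i in range(m):
        a[i, 4 * i:4 * i + 4] = 0.25
    x_true = np.exp(-0.5 * ((np.arange(n) - 25.0) / 4.0) ** 2)   # one hot spot
    b = a @ x_true + 0.01 * rng.standard_normal(m)
    x = smooth_nonneg_solution(a, b, lam=0.5)
    print(f"reconstructed peak at index {int(np.argmax(x))} (true peak at 25)")
```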
Computer aided design environment for the analysis and design of multi-body flexible structures
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant V.; Singh, Ramen P.
1989-01-01
A computer aided design environment consisting of the programs NASTRAN, TREETOPS and MATLAB is presented in this paper. With links for data transfer between these programs, the integrated design of multi-body flexible structures is significantly enhanced. The CAD environment is used to model the Space Shuttle/Pinhole Occulter Facility. Then a controller is designed and evaluated in the nonlinear time history sense. Recent enhancements and ongoing research to add more capabilities are also described.
Computational model for simulation small testing launcher, technical solution
NASA Astrophysics Data System (ADS)
Chelaru, Teodor-Viorel; Cristian, Barbu; Chelaru, Adrian
2014-12-01
The purpose of this paper is to present some aspects regarding the computational model and technical solutions for a multistage suborbital launcher for testing (SLT) used to test spatial equipment and scientific measurements. The computational model consists of a numerical simulation of the SLT evolution for different start conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The results analysed are the flight parameters and ballistic performance. The discussion focuses on the technical possibility of realizing a small multi-stage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project "Suborbital Launcher for Testing" (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Therefore, while classical suborbital sounding rockets are unguided, use a solid-fuel motor for propulsion, and follow an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing the creation of a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. This is why the project itself can be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as shown in the title, has two major objectives: first, a short-term objective, which consists of obtaining a suborbital launching system that will be able to go into service in a predictable period of time, and a long-term objective that consists of the development and testing of some unconventional sub-systems which will be integrated later in the satellite launcher as a part of the European space program. This is why the technical content of the project must be carried out beyond the range of the existing suborbital vehicle programs, towards the current technological necessities in the space field, especially the European one.
Computational model for simulation small testing launcher, technical solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelaru, Teodor-Viorel, E-mail: teodor.chelaru@upb.ro; Cristian, Barbu, E-mail: barbucr@mta.ro; Chelaru, Adrian, E-mail: achelaru@incas.ro
The purpose of this paper is to present some aspects regarding the computational model and technical solutions for a multistage suborbital launcher for testing (SLT) used to test spatial equipment and scientific measurements. The computational model consists of a numerical simulation of the SLT evolution for different start conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The results analysed are the flight parameters and ballistic performance. The discussion focuses on the technical possibility of realizing a small multi-stage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project 'Suborbital Launcher for Testing' (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Therefore, while classical suborbital sounding rockets are unguided, use a solid-fuel motor for propulsion, and follow an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing the creation of a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. This is why the project itself can be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as shown in the title, has two major objectives: first, a short-term objective, which consists of obtaining a suborbital launching system that will be able to go into service in a predictable period of time, and a long-term objective that consists of the development and testing of some unconventional sub-systems which will be integrated later in the satellite launcher as a part of the European space program. This is why the technical content of the project must be carried out beyond the range of the existing suborbital vehicle programs, towards the current technological necessities in the space field, especially the European one.
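As a much-reduced companion to the 6DOF, variable-mass model described above, the sketch below integrates a planar point-mass vertical ascent with constant thrust, mass depletion, and quadratic drag. The thrust level, masses, burn time, and drag area are hypothetical and do not reflect the SLT design data.

```python
import numpy as np

def simulate_ascent(m0, m_dry, thrust, burn_time, cd_area, dt=0.05):
    """Vertical point-mass ascent with variable mass:
    m(t)*dv/dt = T(t) - D - m(t)*g, flat Earth, constant thrust during
    the burn, exponential atmosphere and quadratic drag."""
    g, rho0, h_scale = 9.81, 1.225, 8500.0
    mdot = (m0 - m_dry) / burn_time
    m, h, v, t, h_max = m0, 0.0, 0.0, 0.0, 0.0
    while v >= 0.0 or t <= burn_time:
        thrust_now = thrust if t < burn_time else 0.0
        rho = rho0 * np.exp(-max(h, 0.0) / h_scale)
        drag = 0.5 * rho * cd_area * v * abs(v)
        a = (thrust_now - drag) / m - g
        v += a * dt
        h += v * dt
        if t < burn_time:
            m = max(m - mdot * dt, m_dry)
        t += dt
        h_max = max(h_max, h)
        if h < 0.0:                      # fell back to the ground
            break
    return h_max, t

if __name__ == "__main__":
    # hypothetical small sounding rocket: 400 kg wet, 150 kg dry, 12 kN for 25 s
    apogee, t_final = simulate_ascent(m0=400.0, m_dry=150.0, thrust=12000.0,
                                      burn_time=25.0, cd_area=0.3 * 0.07)
    print(f"apogee ~ {apogee / 1000:.1f} km after {t_final:.0f} s of flight")
```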
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O
2018-05-01
Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy or diseased musculoskeletal systems. To be able to analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
Oku, Yoshitaka; Hülsmann, Swen
2017-04-07
The topology of the respiratory network in the brainstem has been addressed using different computational models, which help to understand the functional properties of the system. We tested a neural mass model by comparing the result of activation and inhibition of inhibitory neurons in silico with recently published results of optogenetic manipulation of glycinergic neurons [Sherman, et al. (2015) Nat Neurosci 18:408]. The comparison revealed that a five-cell type model consisting of three classes of inhibitory neurons [I-DEC, E-AUG, E-DEC (PI)] and two excitatory populations (pre-I/I) and (I-AUG) neurons can be applied to explain experimental observations made by stimulating or inhibiting inhibitory neurons by light sensitive ion channels. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.
1984-01-01
A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.
MESOSCOPIC MODELING OF STOCHASTIC REACTION-DIFFUSION KINETICS IN THE SUBDIFFUSIVE REGIME
BLANC, EMILIE; ENGBLOM, STEFAN; HELLANDER, ANDREAS; LÖTSTEDT, PER
2017-01-01
Subdiffusion has been proposed as an explanation of various kinetic phenomena inside living cells. In order to facilitate large-scale computational studies of subdiffusive chemical processes, we extend a recently suggested mesoscopic model of subdiffusion into an accurate and consistent reaction-subdiffusion computational framework. Two different possible models of chemical reaction are revealed and some basic dynamic properties are derived. In certain cases those mesoscopic models have a direct interpretation at the macroscopic level as fractional partial differential equations in a bounded time interval. Through analysis and numerical experiments we estimate the macroscopic effects of reactions under subdiffusive mixing. The models display properties observed also in experiments: for a short time interval the behavior of the diffusion and the reaction is ordinary, in an intermediate interval the behavior is anomalous, and at long times the behavior is ordinary again. PMID:29046618
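The subdiffusive regime itself is easy to reproduce numerically with a continuous-time random walk whose waiting times have a heavy (Pareto) tail, giving MSD ~ t^α with α < 1. This microscopic toy is only meant to illustrate anomalous scaling; it is not the mesoscopic reaction-subdiffusion framework of the paper, and all parameters are arbitrary.

```python
import numpy as np

def ctrw_positions(n_walkers, t_grid, alpha, rng):
    """Continuous-time random walk on a line: unit jumps left/right separated
    by Pareto-tailed waiting times psi(t) ~ t^-(1+alpha), 0 < alpha < 1,
    which produces subdiffusive spreading MSD(t) ~ t^alpha."""
    pos = np.zeros((n_walkers, len(t_grid)))
    for w in range(n_walkers):
        t, x, idx = 0.0, 0.0, 0
        while idx < len(t_grid):
            t += rng.pareto(alpha) + 1.0          # waiting time >= 1
            while idx < len(t_grid) and t_grid[idx] < t:
                pos[w, idx] = x                   # position is constant while waiting
                idx += 1
            x += rng.choice((-1.0, 1.0))
    return pos

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    alpha = 0.6
    t_grid = np.logspace(1, 4, 20)
    pos = ctrw_positions(2000, t_grid, alpha, rng)
    msd = np.mean(pos ** 2, axis=0)
    slope = np.polyfit(np.log(t_grid[5:]), np.log(msd[5:]), 1)[0]
    print(f"fitted MSD exponent ~ {slope:.2f} (waiting-time exponent alpha = {alpha})")
```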
NASA Technical Reports Server (NTRS)
Glaese, John R.; Tobbe, Patrick A.
1986-01-01
The Space Station Mechanism Test Bed consists of a hydraulically driven, computer controlled six degree of freedom (DOF) motion system with which docking, berthing, and other mechanisms can be evaluated. Measured contact forces and moments are provided to the simulation host computer to enable representation of orbital contact dynamics. This report describes the development of a generalized math model which represents the relative motion between two rigid orbiting vehicles. The model allows motion in six DOF for each body, with no vehicle size limitation. The rotational and translational equations of motion are derived. The method used to transform the forces and moments from the sensor location to the vehicles' centers of mass is also explained. Two math models of docking mechanisms, a simple translational spring and the Remote Manipulator System end effector, are presented along with simulation results. The translational spring model is used in an attempt to verify the simulation with compensated hardware in the loop results.
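The transformation of measured loads from the sensor location to a vehicle's center of mass follows the usual rigid-body rule (force unchanged, moment shifted by r × F, with r pointing from the center of mass to the sensor). The numerical contact values below are made up for illustration and do not represent the test bed.

```python
import numpy as np

def transfer_wrench(force, moment_at_sensor, r_sensor_from_cm):
    """Transform a measured force/moment pair from the load-cell location to
    the vehicle's center of mass (all vectors in the same body frame):
        F_cm = F,   M_cm = M_sensor + r x F,
    where r points from the center of mass to the sensor location."""
    force = np.asarray(force, float)
    moment_cm = np.asarray(moment_at_sensor, float) + np.cross(r_sensor_from_cm, force)
    return force, moment_cm

if __name__ == "__main__":
    # hypothetical docking contact: load cell mounted 1.8 m ahead of the c.m.
    f, m = transfer_wrench(force=[0.0, 50.0, -200.0],
                           moment_at_sensor=[2.0, 0.0, 0.0],
                           r_sensor_from_cm=[0.0, 0.0, 1.8])
    print("force at c.m.  :", f)
    print("moment at c.m. :", m)
```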
Myers, C E; Gluck, M A
1996-08-01
A previous model of hippocampal region function in classical conditioning is generalized to H. Eichenbaum, A. Fagan, P. Mathews, and N.J. Cohen's (1989) and H. Eichenbaum, A. Fagan, and N.J. Cohen's (1989) simultaneous odor discrimination studies in rats. The model assumes that the hippocampal region forms new stimulus representations that compress redundant information while differentiating predictive information; the piriform (olfactory) cortex meanwhile clusters similar and co-occurring odors. Hippocampal damage interrupts the ability to differentiate odor representations, while leaving piriform-mediated odor clustering unchecked. The result is a net tendency to overcompress in the lesioned model. Behavior in the model is very similar to that of the rats, including lesion deficits, facilitation of successively learned tasks, and transfer performance. The computational mechanisms underlying model performance are consistent with the qualitative interpretations suggested by Eichenbaum et al. to explain their empirical data.
NASA Astrophysics Data System (ADS)
Kar, J. K.; Panda, Saswati; Rout, G. C.
2017-05-01
We propose here a tight-binding model study of the interplay between charge and spin orderings in the CMR manganites, taking into account anisotropic effects due to electron hopping and spin exchange. The Hamiltonian consists of the kinetic energies of the eg and t2g electrons of the manganese ion. It further includes double-exchange and Heisenberg interactions. The charge density wave (CDW) interaction provides an additional mechanism for the insulating character of the system. The CDW gap and spin parameters are calculated using Zubarev's Green's function technique and computed self-consistently. The results are reported in this communication.
An Implementation Model for Integrated Learning Systems.
ERIC Educational Resources Information Center
Mills, Steven C.; Ragan, Tillman R.
This paper describes the development, validation, and research application of the Computer-Delivered Instruction Configuration Matrix (CDICM), an instrument for evaluating the implementation of Integrated Learning Systems (ILS). The CDICM consists of a 15-item checklist, describing the major components of implementation of ILS technology, to be…
Near-road air quality is an issue of emerging concern, with field studies consistently showing elevated air pollutant concentrations adjacent to major roads, usually decreasing to background levels within several hundred meters. Roadside barriers, both vegetative and structural, ...
Model-Based Design of Biochemical Microreactors
Elbinger, Tobias; Gahn, Markus; Neuss-Radu, Maria; Hante, Falk M.; Voll, Lars M.; Leugering, Günter; Knabner, Peter
2016-01-01
Mathematical modeling of biochemical pathways is an important resource in Synthetic Biology, as the predictive power of simulating synthetic pathways represents an important step in the design of synthetic metabolons. In this paper, we are concerned with the mathematical modeling, simulation, and optimization of metabolic processes in biochemical microreactors able to carry out enzymatic reactions and to exchange metabolites with their surrounding medium. The results of the reported modeling approach are incorporated in the design of the first microreactor prototypes that are under construction. These microreactors consist of compartments separated by membranes carrying specific transporters for the input of substrates and export of products. Inside the compartments of the reactor multienzyme complexes assembled on nano-beads by peptide adapters are used to carry out metabolic reactions. The spatially resolved mathematical model describing the ongoing processes consists of a system of diffusion equations together with boundary and initial conditions. The boundary conditions model the exchange of metabolites with the neighboring compartments and the reactions at the surface of the nano-beads carrying the multienzyme complexes. Efficient and accurate approaches for numerical simulation of the mathematical model and for optimal design of the microreactor are developed. As a proof-of-concept scenario, a synthetic pathway for the conversion of sucrose to glucose-6-phosphate (G6P) was chosen. In this context, the mathematical model is employed to compute the spatio-temporal distributions of the metabolite concentrations, as well as application-relevant quantities like the outflow rate of G6P. These computations are performed for different scenarios, where the number of beads as well as their loading capacity are varied. The computed metabolite distributions show spatial patterns, which differ for different experimental arrangements. Furthermore, the total output of G6P increases for scenarios where microcompartmentation of enzymes occurs. These results show that spatially resolved models are needed in the description of the conversion processes. Finally, the enzyme stoichiometry on the nano-beads is determined, which maximizes the production of glucose-6-phosphate. PMID:26913283
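For orientation only, the following sketch illustrates the kind of diffusion computation with a reactive (enzymatic) boundary that such a spatially resolved model contains, reduced to one dimension with Michaelis-Menten uptake at a bead-like surface; all parameter values are hypothetical and this is not the authors' microreactor model.

```python
import numpy as np

# Minimal 1D diffusion-reaction sketch: substrate diffuses from a membrane
# (fixed concentration at x = 0) toward a reactive surface at x = L, where
# enzyme kinetics follow a Michaelis-Menten rate.  All parameter values are
# hypothetical and chosen only for illustration.
D, L, nx = 1e-9, 1e-4, 101            # diffusivity (m^2/s), length (m), grid points
vmax, km = 1e-5, 0.5                  # surface reaction parameters (mol/m^2/s, mol/m^3)
c0 = 1.0                              # substrate concentration at the membrane (mol/m^3)

dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                  # explicit stability limit
c = np.zeros(nx)

for _ in range(20000):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * D * lap[1:-1]
    c[0] = c0                                          # Dirichlet inflow boundary
    flux = vmax * c[-1] / (km + c[-1])                 # Michaelis-Menten uptake
    c[-1] = max(c[-2] - flux * dx / D, 0.0)            # reactive (Robin-type) boundary

print("approximate steady-state uptake rate at the bead surface:",
      vmax * c[-1] / (km + c[-1]), "mol m^-2 s^-1")
```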
McClelland, James L.
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered. PMID:23970868
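The stated identity between softmax units and Bayesian posteriors can be checked numerically; the short sketch below uses an invented three-word, four-letter generative model and only illustrates that identity, not the IA or MIA models themselves.

```python
import numpy as np

# Numerical check of the claim that a softmax unit computes a Bayesian
# posterior when its bias is log P(word) and its input weights are
# log P(letter | word).  The tiny generative model here is invented purely
# for illustration.
rng = np.random.default_rng(1)
priors = np.array([0.6, 0.3, 0.1])                   # P(word)
likelihoods = rng.dirichlet(np.ones(4), size=3)      # P(letter | word), rows sum to 1

features = np.zeros(4)
features[2] = 1.0                                    # one-hot code for the observed letter

# Direct Bayes: posterior proportional to prior * likelihood of the observed letter
bayes = priors * likelihoods[:, 2]
bayes /= bayes.sum()

# "Connectionist" route: net input = bias (log prior) + weights (log likelihoods) . features
net = np.log(priors) + np.log(likelihoods) @ features
softmax = np.exp(net - net.max())
softmax /= softmax.sum()

print(np.allclose(bayes, softmax))    # True: the softmax output equals the posterior
```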
CIELO-A GIS integrated model for climatic and water balance simulation in islands environments
NASA Astrophysics Data System (ADS)
Azevedo, E. B.; Pereira, L. S.
2003-04-01
The model CIELO (acronym for "Clima Insular à Escala Local") is a physically based model that simulates the climatic variables in an island using data from a single synoptic reference meteorological station. The reference station "knows" its position in the orographic and dynamic regime context. The domain of computation is a GIS raster grid parameterised with a digital elevation model (DEM). The grid is oriented following the direction of the air masses circulation through a specific algorithm named rotational terrain model (RTM). The model consists of two main sub-models. One, relative to the advective component simulation, assumes the Foehn effect to reproduce the dynamic and thermodynamic processes occurring when an air mass moves over the island orographic obstacle. This makes it possible to simulate the air temperature, air humidity, cloudiness and precipitation as influenced by the orography along the air displacement. The second concerns the radiative component as affected by the clouds of orographic origin and by the shadow produced by the relief. The initial state parameters are computed starting from the reference meteorological station, across the DEM transept, down to sea level on the windward side. Then, starting from sea level, the model computes the local-scale meteorological parameters according to the direction of the air displacement, which is adjusted with the RTM. The air pressure, temperature and humidity are directly calculated for each cell in the computational grid, while several algorithms are used to compute the cloudiness, net radiation, evapotranspiration, and precipitation. The model presented in this paper has been calibrated and validated using data from some meteorological stations and a larger number of rainfall stations located at various elevations in the Azores Islands.
Part 2 of a Computational Study of a Drop-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
2004-01-01
This second of three reports on a computational study of a mixing layer laden with evaporating liquid drops presents the evaluation of Large Eddy Simulation (LES) models. The LES models were evaluated on an existing database that had been generated using Direct Numerical Simulation (DNS). The DNS method and the database are described in the first report of this series, Part 1 of a Computational Study of a Drop-Laden Mixing Layer (NPO-30719), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 59. The LES equations, which are derived by applying a spatial filter to the DNS set, govern the evolution of the larger scales of the flow and can therefore be solved on a coarser grid. Consistent with the reduction in grid points, the DNS drops would be represented by fewer drops, called computational drops in the LES context. The LES equations contain terms that cannot be directly computed on the coarser grid and that must instead be modeled. Two types of models are necessary: (1) those for the filtered source terms representing the effects of drops on the filtered flow field and (2) those for the sub-grid scale (SGS) fluxes arising from filtering the convective terms in the DNS equations. All of the filtered-source-term models that were developed were found to overestimate the filtered source terms. For modeling the SGS fluxes, constant-coefficient Smagorinsky, gradient, and scale-similarity models were assessed and calibrated on the DNS database. The Smagorinsky model correlated poorly with the SGS fluxes, whereas the gradient and scale-similarity models were well correlated with the SGS quantities that they represented.
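As a rough illustration of one of the SGS closures named above, the sketch below evaluates a constant-coefficient Smagorinsky eddy viscosity on a synthetic two-dimensional filtered velocity field; the field, the Smagorinsky constant, and the filter width are placeholders, and this is not the code used in the study.

```python
import numpy as np

# Generic constant-coefficient Smagorinsky evaluation on a synthetic 2D
# filtered velocity field: nu_t = (C_s * Delta)^2 * |S|, with
# |S| = sqrt(2 S_ij S_ij).
nx = 64
L = 2 * np.pi
dx = L / nx
x = np.linspace(0, L, nx, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

u = np.sin(X) * np.cos(Y)          # toy filtered velocity components
v = -np.cos(X) * np.sin(Y)

dudx, dudy = np.gradient(u, dx, dx)
dvdx, dvdy = np.gradient(v, dx, dx)

S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))

Cs, Delta = 0.17, dx               # typical Smagorinsky constant, filter width
nu_t = (Cs * Delta) ** 2 * S_mag   # eddy-viscosity field
tau12 = -2.0 * nu_t * S12          # modeled SGS shear stress component

print("mean eddy viscosity:", nu_t.mean())
print("max |tau12|        :", np.abs(tau12).max())
```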
McClelland, James L
2013-01-01
This article seeks to establish a rapprochement between explicitly Bayesian models of contextual effects in perception and neural network models of such effects, particularly the connectionist interactive activation (IA) model of perception. The article is in part an historical review and in part a tutorial, reviewing the probabilistic Bayesian approach to understanding perception and how it may be shaped by context, and also reviewing ideas about how such probabilistic computations may be carried out in neural networks, focusing on the role of context in interactive neural networks, in which both bottom-up and top-down signals affect the interpretation of sensory inputs. It is pointed out that connectionist units that use the logistic or softmax activation functions can exactly compute Bayesian posterior probabilities when the bias terms and connection weights affecting such units are set to the logarithms of appropriate probabilistic quantities. Bayesian concepts such as the prior, likelihood, (joint and marginal) posterior, probability matching and maximizing, and calculating vs. sampling from the posterior are all reviewed and linked to neural network computations. Probabilistic and neural network models are explicitly linked to the concept of a probabilistic generative model that describes the relationship between the underlying target of perception (e.g., the word intended by a speaker or other source of sensory stimuli) and the sensory input that reaches the perceiver for use in inferring the underlying target. It is shown how a new version of the IA model called the multinomial interactive activation (MIA) model can sample correctly from the joint posterior of a proposed generative model for perception of letters in words, indicating that interactive processing is fully consistent with principled probabilistic computation. Ways in which these computations might be realized in real neural systems are also considered.
Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando
2015-07-21
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
NASA Astrophysics Data System (ADS)
Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando
2015-07-01
We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to the phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers, involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force field. The area per lipid and area compressibility modulus obtained with our method and the MARTINI force field are consistent with previous studies of these bilayers.
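The coupled-undulatory analysis itself is beyond a short example, but the fluctuation route to the area compressibility modulus mentioned above can be sketched with the standard estimator K_A = k_B T <A> / var(A); the area time series and per-lipid area below are synthetic stand-ins, not simulation output.

```python
import numpy as np

# Fluctuation estimator for the area compressibility modulus of a bilayer,
# K_A = k_B T <A> / var(A), applied to a synthetic area time series.  The
# coupled-undulatory (CU) area analysis of the paper is more involved; this
# only sketches the fluctuation route with invented numbers.
kB, T = 1.380649e-23, 300.0                      # J/K, K
n_lipids_per_leaflet = 128

rng = np.random.default_rng(2)
mean_area = n_lipids_per_leaflet * 0.64e-18      # ~0.64 nm^2 per lipid (assumed)
areas = rng.normal(mean_area, 0.014 * mean_area, size=50_000)   # synthetic trajectory

area_per_lipid = areas.mean() / n_lipids_per_leaflet
K_A = kB * T * areas.mean() / areas.var(ddof=1)  # units: J/m^2 = N/m

print(f"area per lipid : {area_per_lipid * 1e18:.3f} nm^2")
print(f"K_A            : {K_A:.3f} N/m")
```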
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...
2017-10-01
Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey
Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are more and more relying on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue) which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be able to be quickly adapted to new types of computing resources, new information sources, and allow for new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
NASA Technical Reports Server (NTRS)
Harman, R.; Blejer, D.
1990-01-01
The requirements and mathematical specifications for the Gamma Ray Observatory (GRO) Dynamics Simulator are presented. The complete simulator system, which consists of the profile subsystem, simulation control and input/output subsystem, truth model subsystem, onboard computer model subsystem, and postprocessor, is described. The simulator will be used to evaluate and test the attitude determination and control models to be used on board GRO under conditions that simulate the expected in-flight environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Frank; Dennis, John; MacCready, Parker
This project aimed to improve long term global climate simulations by resolving and enhancing the representation of the processes involved in the cycling of freshwater through estuaries and coastal regions. This was a collaborative multi-institution project consisting of physical oceanographers, climate model developers, and computational scientists. It specifically targeted the DOE objectives of advancing simulation and predictive capability of climate models through improvements in resolution and physical process representation.
Temporal Subtraction of Digital Breast Tomosynthesis Images for Improved Mass Detection
2008-10-01
K. Fishman and B. M. W. Tsui, "Development of a computer-generated model for the coronary arterial tree based on multislice CT and morphometric data..." Mathematical models based on geometric primitives [8-22]. Bakic et al. created synthetic x-ray mammograms using a 3D simulated breast tissue model consisting of... utilized a combination of voxel matrices and geometric primitives to create a breast phantom that includes the breast surface, the duct system, and
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel computing systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.
Damsel: A Data Model Storage Library for Exascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok; Liao, Wei-keng
Computational science applications have been described as having one of seven motifs (the “seven dwarfs”), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data models developed in the 1990s, such as the Network Common Data Format (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and an application programming interface reference are also available to assist new users to learn to use the library.
Glass transition temperature of polymer nano-composites with polymer and filler interactions
NASA Astrophysics Data System (ADS)
Hagita, Katsumi; Takano, Hiroshi; Doi, Masao; Morita, Hiroshi
2012-02-01
We systematically studied a versatile coarse-grained (bead-spring) model to describe filled polymer nano-composites for coarse-grained (Kremer-Grest) molecular dynamics simulations. This model consists of long polymers, crosslinks, and fillers. We used a hollow structure as the filler to describe rigid spherical fillers at small computing cost. Our filler model consists of surface particles of the icosahedral fullerene structure C320, and a repulsive force from the center of the filler is applied to the surface particles in order to make the filler spherical and rigid. The filler's diameter is 12 times that of the polymer beads. As the first test of our model, we studied the temperature dependence of the volume of the periodic simulation box under constant pressure using the Andersen NPT algorithm. It is found that the glass transition temperature (Tg) decreases with increasing filler volume fraction for the case of repulsive interaction between polymer and fillers, and that Tg increases weakly for attractive interaction.
Redshift space clustering of galaxies and cold dark matter model
NASA Technical Reports Server (NTRS)
Bahcall, Neta A.; Cen, Renyue; Gramann, Mirt
1993-01-01
The distorting effect of peculiar velocities on the power spectrum and correlation function of IRAS and optical galaxies is studied. The observed redshift-space power spectra and correlation functions of IRAS and optical galaxies over the entire range of scales are directly compared with the corresponding redshift-space distributions from large-scale computer simulations of cold dark matter (CDM) models in order to study the distortion effect of peculiar velocities on the power spectrum and correlation function of the galaxies. It is found that the observed power spectrum of IRAS and optical galaxies is consistent with the spectrum of an Omega = 1 CDM model. The problems that such a model currently faces may be related more to the high value of Omega in the model than to the shape of the spectrum. A low-density CDM model is also investigated and found to be consistent with the data.
Self-consistent radiation-based simulation of electric arcs: II. Application to gas circuit breakers
NASA Astrophysics Data System (ADS)
Iordanidis, A. A.; Franck, C. M.
2008-07-01
An accurate and robust method for radiative heat transfer simulation for arc applications was presented in the previous paper (part I). In this paper a self-consistent mathematical model based on computational fluid dynamics and a rigorous radiative heat transfer model is described. The model is applied to simulate switching arcs in high voltage gas circuit breakers. The accuracy of the model is proven by comparison with experimental data for all arc modes. The ablation-controlled arc model is used to simulate high current PTFE arcs burning in cylindrical tubes. Model accuracy for the lower current arcs is evaluated using experimental data on the axially blown SF6 arc in steady state and arc resistance measurements close to current zero. The complete switching process with the arc going through all three phases is also simulated and compared with the experimental data from an industrial circuit breaker switching test.
Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook
NASA Astrophysics Data System (ADS)
Sobolev, Andrej M.; Gray, Malcolm D.
2012-07-01
Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description which allows computation, that is, a definition of the parameter space and the basic physical relations. Usually, this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region); and propagation of maser radiation. Output of the maser computer modeling can have the following forms: exploration of parameter space (where do inversions appear in particular maser transitions and their combinations, which parameter values describe a `typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; predictions (new maser transitions, correlations in variability of different maser transitions, and the like). The schemes described here (the constituents and hierarchy of the model input and output) are based mainly on the authors' experience and make no claim to be definitive.
Simulation of wave propagation inside a human eye: acoustic eye model (AEM)
NASA Astrophysics Data System (ADS)
Požar, T.; Halilovič, M.; Horvat, D.; Petkovšek, R.
2018-02-01
The design and development of the acoustic eye model (AEM) is reported. The model consists of a computer-based simulation that describes the propagation of mechanical disturbance inside a simplified model of a human eye. The capabilities of the model are illustrated with examples, using different laser-induced initial loading conditions in different geometrical configurations typically occurring in ophthalmic medical procedures. The potential of the AEM is to predict the mechanical response of the treated eye tissue in advance, thus complementing other preliminary procedures preceding medical treatments.
[Numerical finite element modeling of custom car seat using computer aided design].
Huang, Xuqi; Singare, Sekou
2014-02-01
A good cushion can not only provide the sitter with high comfort, but also control the distribution of hip pressure to reduce the incidence of disease. The purpose of this study is to introduce a computer-aided design (CAD) modeling method for the buttocks-cushion system using numerical finite element (FE) simulation to predict the pressure distribution on the buttocks-cushion interface. The buttock and cushion model geometries were acquired from a laser scanner, and the CAD software was used to create the solid model. The FE model of a true seated individual was developed using ANSYS software (ANSYS Inc, Canonsburg, PA). The model is divided into two parts, i.e. the cushion model made of foam and the buttock model represented by the pelvis covered with a soft tissue layer. Loading simulations consisted of imposing a vertical force of 520 N on the pelvis, corresponding to the weight of the user's upper extremity, and then solving the system iteratively.
Implementation and validation of a wake model for low-speed forward flight
NASA Technical Reports Server (NTRS)
Komerath, Narayanan M.; Schreiber, Olivier A.
1987-01-01
The computer implementation and calculation of the induced velocities produced by a wake model consisting of a trailing vortex system defined from a prescribed time-averaged downwash distribution are detailed. Induced velocities are computed by approximating each spiral turn by a pair of large straight vortex segments positioned at critical points relative to where the induced velocity is required. A remainder term for the rest of the spiral is added. This approach results in decreased computation time compared to classical models where each spiral turn is broken down into small straight vortex segments. The model includes features such as harmonic variation of circulation, downwash outside of the blade and/or outside the tip path plane, blade bound vorticity induced velocity with harmonic variation of circulation, and time averaging. The influence of various options and parameters on the results is investigated, and results are compared to experimental field measurements, with which a reasonable agreement is obtained. The capabilities of the model as well as its extension possibilities are studied. The performance of the model in predicting the recently acquired NASA Langley inflow database for a four-bladed rotor is compared to that of the Scully Free Wake code, a well-established program which requires much greater computational resources. It is found that the two codes predict the experimental data with essentially the same accuracy, and show the same trends.
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with an anatomically realistic computational human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the computational human model, we adapted three-dimensional FDTD code to a multi-GPU cluster environment with Compute Unified Device Architecture and Message Passing Interface. Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in the multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than that on a multi-GPU single workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform the large-scale FDTD calculation because we were able to use GPU memory of over 100 GB.
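For readers unfamiliar with FDTD, the sketch below shows only the serial one-dimensional Yee leapfrog update that multi-GPU implementations parallelize by splitting the grid across devices; it is not the CUDA/MPI code of the study, and the grid and source parameters are arbitrary.

```python
import numpy as np

# Serial 1D Yee-scheme FDTD core (free space), shown only to illustrate the
# leapfrog update structure that GPU/MPI versions parallelize by splitting
# the grid across devices.
c0 = 299_792_458.0
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
nz, n_steps = 400, 800
dz = 1e-3
dt = 0.5 * dz / c0                      # Courant-stable time step

Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)

for n in range(n_steps):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])          # update H from curl E
    Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])   # update E from curl H
    Ex[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)      # soft Gaussian source

print("peak |Ex| after propagation:", np.abs(Ex).max())
```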
Thermodynamics of quasideterministic digital computers
NASA Astrophysics Data System (ADS)
Chu, Dominique
2018-02-01
A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1, and additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls as a power of the system size, whereas the cost only increases linearly with the system size.
Simulating the evolution of glyphosate resistance in grains farming in northern Australia.
Thornby, David F; Walker, Steve R
2009-09-01
The evolution of resistance to herbicides is a substantial problem in contemporary agriculture. Solutions to this problem generally consist of practices to control the resistant population once it has evolved, and/or preventative measures instituted before populations become resistant. Herbicide resistance evolves in populations over years or decades, so predicting the effectiveness of preventative strategies in particular relies on computational modelling approaches. While models of herbicide resistance already exist, none deals with the complex regional variability in the northern Australian sub-tropical grains farming region. For this reason, a new computer model was developed. The model consists of an age- and stage-structured population model of weeds, with an existing crop model used to simulate plant growth and competition, and extensions to the crop model added to simulate seed bank ecology and population genetics factors. Using awnless barnyard grass (Echinochloa colona) as a test case, the model was used to investigate the likely rate of evolution under conditions expected to produce high selection pressure. Simulating continuous summer fallows with glyphosate used as the only means of weed control resulted in predicted resistant weed populations after approximately 15 years. Validation of the model against the paddock history for the first real-world glyphosate-resistant awnless barnyard grass population shows that the model predicted resistance evolution to within a few years of the real situation. This validation work shows that empirical validation of herbicide resistance models is problematic. However, the model simulates the complexities of sub-tropical grains farming in Australia well, and can be used to investigate, generate and improve glyphosate resistance prevention strategies.
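A toy single-locus calculation can convey how repeated glyphosate selection combined with a soil seed bank drives resistance; the sketch below is far simpler than the published age- and stage-structured model, and every rate in it is an invented placeholder.

```python
import numpy as np

# Toy single-locus model of resistance evolution under repeated glyphosate
# use with a soil seed bank.  The published model is far richer (age/stage
# structure, crop competition, regional variability); every number below is
# an assumption for illustration only.
p = 1e-6                 # initial resistance (R) allele frequency
seedbank = 1e4           # seeds per m^2
germination = 0.4        # fraction of the seed bank germinating each year
survival_r, survival_s = 0.95, 0.02   # survival of resistant vs susceptible plants
fecundity = 50           # seeds produced per surviving plant
bank_decay = 0.3         # annual seed mortality in the bank

for year in range(1, 26):
    germinating = germination * seedbank
    seedbank *= (1 - germination) * (1 - bank_decay)
    # genotype frequencies (Hardy-Weinberg), resistance assumed dominant
    f_rr, f_rs, f_ss = p**2, 2 * p * (1 - p), (1 - p) ** 2
    survivors_r = germinating * (f_rr + f_rs) * survival_r
    survivors_s = germinating * f_ss * survival_s
    new_seeds = (survivors_r + survivors_s) * fecundity
    # allele frequency among new seeds (heterozygotes contribute half R alleles)
    p_new = (germinating * (f_rr + 0.5 * f_rs) * survival_r) / max(survivors_r + survivors_s, 1e-12)
    p = (seedbank * p + new_seeds * p_new) / max(seedbank + new_seeds, 1e-12)
    seedbank += new_seeds
    if year % 5 == 0 or p > 0.5:
        print(f"year {year:2d}: R allele frequency = {p:.3g}")
    if p > 0.5:
        break
```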
Computer use in primary care practices in Canada
Anisimowicz, Yvonne; Bowes, Andrea E.; Thompson, Ashley E.; Miedema, Baukje; Hogg, William E.; Wong, Sabrina T.; Katz, Alan; Burge, Fred; Aubrey-Bassler, Kris; Yelland, Gregory S.; Wodchis, Walter P.
2017-01-01
Abstract Objective To examine the use of computers in primary care practices. Design The international Quality and Cost of Primary Care study was conducted in Canada in 2013 and 2014 using a descriptive cross-sectional survey method to collect data from practices across Canada. Participating practices filled out several surveys, one of them being the Family Physician Survey, from which this study collected its data. Setting All 10 Canadian provinces. Participants A total of 788 family physicians. Main outcome measures A computer use scale measured the extent to which family physicians integrated computers into their practices, with higher scores indicating a greater integration of computer use in practice. Analyses included t tests and χ2 tests comparing new and traditional models of primary care on measures of computer use and electronic health record (EHR) use, as well as descriptive statistics. Results Nearly all (97.5%) physicians reported using a computer in their practices, with moderately high computer use scale scores (mean [SD] score of 5.97 [2.96] out of 9), and many (65.7%) reported using EHRs. Physicians with practices operating under new models of primary care reported incorporating computers into their practices to a greater extent (mean [SD] score of 6.55 [2.64]) than physicians operating under traditional models did (mean [SD] score of 5.33 [3.15]; t(726.60) = 5.84; P < .001; Cohen d = 0.42, 95% CI 0.808 to 1.627) and were more likely to report using EHRs (73.8% vs 56.7%; χ2(1) = 25.43; P < .001; odds ratio = 2.15). Overall, there was statistically significant variability in computer use across provinces. Conclusion Most family physicians in Canada have incorporated computers into their practices for administrative and scholarly activities; however, EHRs have not been adopted consistently across the country. Physicians with practices operating under the new, more collaborative models of primary care use computers more comprehensively and are more likely to use EHRs than those in practices operating under traditional models of primary care. PMID:28500211
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady-state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.
Design of a dynamic optical tissue phantom to model extravasation pharmacokinetics
NASA Astrophysics Data System (ADS)
Zhang, Jane Y.; Ergin, Aysegul; Andken, Kerry Lee; Sheng, Chao; Bigio, Irving J.
2010-02-01
We describe an optical tissue phantom that enables the simulation of drug extravasation from microvessels and validates computational compartmental models of drug delivery. The phantom consists of a microdialysis tubing bundle to simulate the permeable blood vessels, immersed in either an aqueous suspension of titanium dioxide (TiO2) or a TiO2-mixed agarose scattering medium. Drug administration is represented by a dye circulated through this porous microdialysis tubing bundle. Optical pharmacokinetic (OP) methods are used to measure changes in the absorption coefficient of the scattering medium due to the arrival and diffusion of the dye. We have established particle size-dependent concentration profiles over time of phantom drug delivery by intravenous (IV) and intra-arterial (IA) routes. Additionally, pharmacokinetic compartmental models are implemented in computer simulations for the conditions studied within the phantom. The simulated concentration-time profiles agree well with measurements from the phantom. The results are encouraging for future optical pharmacokinetic method development, both physical and computational, to understand drug extravasation under various physiological conditions.
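A generic two-compartment formulation gives a flavour of the compartmental simulations mentioned; the sketch below (hypothetical rate constants, normalized dose) is not the specific model fitted to the phantom data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic two-compartment pharmacokinetic sketch (central "vessel" and
# "tissue" compartments) of the kind compared against phantom measurements.
# Rate constants and the bolus dose are hypothetical.
k12, k21, k_el = 0.8, 0.3, 0.2        # 1/min: vessel->tissue, tissue->vessel, elimination
dose = 1.0                            # normalized bolus into the vessel compartment

def rhs(t, y):
    c_vessel, c_tissue = y
    dcv = -k12 * c_vessel + k21 * c_tissue - k_el * c_vessel
    dct = k12 * c_vessel - k21 * c_tissue
    return [dcv, dct]

sol = solve_ivp(rhs, (0.0, 30.0), [dose, 0.0], dense_output=True)
t = np.linspace(0, 30, 7)
cv, ct = sol.sol(t)
for ti, a, b in zip(t, cv, ct):
    print(f"t = {ti:5.1f} min   vessel = {a:.3f}   tissue = {b:.3f}")
```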
The New Zealand gravimetric quasigeoid model 2017 that incorporates nationwide airborne gravimetry
NASA Astrophysics Data System (ADS)
McCubbine, J. C.; Amos, M. J.; Tontini, F. C.; Smith, E.; Winefied, R.; Stagpoole, V.; Featherstone, W. E.
2017-12-01
A one arc-minute resolution gravimetric quasigeoid model has been computed for New Zealand, covering the region 25°S -60°S and 160°E -170°W . It was calculated by Wong and Gore modified Stokes integration using the remove-compute-restore technique with the EIGEN-6C4 global gravity model as the reference field. The gridded gravity data used for the computation consisted of 40,677 land gravity observations, satellite altimetry-derived marine gravity anomalies, historical shipborne marine gravity observations and, importantly, approximately one million new airborne gravity observations. The airborne data were collected with the specific intention of reinforcing the shortcomings of the existing data in areas of rough topography inaccessible to land gravimetry and in coastal areas where shipborne gravimetry cannot be collected and altimeter-derived gravity anomalies are generally poor. The new quasigeoid has a nominal precision of ± 48 mm on comparison with GPS-levelling data, which is approximately 14 mm less than its predecessor NZGeoid09.
Russell, G.M.; Goodwin, C.R.
1987-01-01
Results of a two-dimensional, vertically averaged, computer simulation model of the Loxahatchee River estuary show that under typical low freshwater inflow and vertically well-mixed conditions, water circulation is dominated by freshwater inflow rather than by tidal influence. The model can simulate tidal flow and circulation in the Loxahatchee River estuary under typical low freshwater inflow and vertically well-mixed conditions, but it is limited to low-flow and well-mixed conditions. Computed patterns of residual water transport show a consistent seaward flow from the northwest fork through the central embayment and out Jupiter Inlet to the Atlantic Ocean. A large residual seaward flow was computed from the North Intracoastal Waterway to the inlet channel. Although the tide produces large flood and ebb flows in the estuary, tide-induced residual transport rates are low in comparison with freshwater-induced residual transport. Model investigations of partly mixed or stratified conditions in the estuary need to await development of systems capable of simulating three-dimensional flow patterns. (Author's abstract)
NASA Astrophysics Data System (ADS)
Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim
2018-06-01
The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
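The core idea of propagating a geometry error into projected point coordinates can be sketched with a minimal cone-beam projection; the geometry values and perturbations below are invented, and the parameterization is much simpler than the one used in the paper.

```python
import numpy as np

# Minimal cone-beam projection sketch: project a few CAD-like surface points
# onto the detector for a nominal geometry and for one with small geometry
# errors (detector offset and a wrong source-detector distance), then report
# the shift in projected coordinates.
src_obj = 100.0                                # source-to-object distance (mm)
points = np.array([[5.0, 2.0, -3.0],           # object-frame surface points (mm),
                   [-4.0, 1.0, 2.5],           # x along the beam axis, y-z transverse
                   [0.0, -6.0, 1.0]])

def project(points, src_det=500.0, det_offset=(0.0, 0.0)):
    """Pinhole projection of points onto the detector plane (u, v in mm)."""
    mag = src_det / (src_obj + points[:, 0])   # per-point magnification
    u = mag * points[:, 1] + det_offset[0]
    v = mag * points[:, 2] + det_offset[1]
    return np.column_stack([u, v])

aligned = project(points)
misaligned = project(points, src_det=502.0, det_offset=(0.2, -0.1))
print("projected-coordinate errors (mm) caused by the geometry error:")
print(misaligned - aligned)
```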
Color appearance for photorealistic image synthesis
NASA Astrophysics Data System (ADS)
Marini, Daniele; Rizzi, Alessandro; Rossi, Maurizio
2000-12-01
Photorealistic Image Synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction that allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, applying ray tracing in a two-pass solution we can also cope with non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best solution to compress the extended dynamic range of the computed light field into the limited range of the displayable colors. Recently some scholars have addressed this problem considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis to solve the tone reproduction problem of synthetic image generation, integrating the Retinex perception model into the photorealistic image synthesis context.
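As a baseline for the tone reproduction problem described above, the sketch below applies a simple global log-average (Reinhard-style) compression to a synthetic high-dynamic-range luminance map; it is a stand-in illustration, not the Retinex-based operator proposed in the paper.

```python
import numpy as np

# Baseline global tone-mapping sketch: compress a synthetic high-dynamic-range
# luminance map into displayable [0, 1) values with a log-average scaling.
rng = np.random.default_rng(3)
radiance = rng.lognormal(mean=0.0, sigma=2.5, size=(256, 256))   # synthetic light field

eps = 1e-6
log_avg = np.exp(np.mean(np.log(radiance + eps)))   # log-average ("key") luminance
scaled = 0.18 * radiance / log_avg                  # map the key to middle grey
display = scaled / (1.0 + scaled)                   # compress to [0, 1)

print("input dynamic range :", radiance.max() / radiance.min())
print("output range        :", display.min(), "to", display.max())
```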
A real-time computational model for estimating kinematics of ankle ligaments.
Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Quan
2016-01-01
An accurate assessment of ankle ligament kinematics is crucial in understanding injury mechanisms and can help to improve the treatment of an injured ankle, especially when used in conjunction with robot-assisted therapy. A number of computational models have been developed and validated for assessing the kinematics of ankle ligaments. However, few of them can perform real-time assessment to allow for an input into robotic rehabilitation programs. An ankle computational model was proposed and validated to quantify the kinematics of ankle ligaments as the foot moves in real time. This model consists of three bone segments with three rotational degrees of freedom (DOFs) and 12 ankle ligaments. The model takes as inputs three position variables that can be measured from sensors in many ankle robotic devices that detect postures within the foot-ankle environment, and outputs the kinematics of the ankle ligaments. Validation of this model in terms of ligament length and strain was conducted by comparing it with published data on cadaver anatomy and magnetic resonance imaging. The ligament lengths and strains predicted by the model are in concurrence with those from the published studies, but are sensitive to ligament attachment positions. This ankle computational model has the potential to be used in robot-assisted therapy for real-time assessment of ligament kinematics. The results provide information regarding the quantification of kinematics associated with ankle ligaments related to the disability level and can be used for optimizing the robotic training trajectory.
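The geometric core of such a model, ligament length and strain as a function of the three rotational DOFs, can be sketched as follows; the attachment coordinates and reference length are invented, whereas the validated model uses anatomical data for 12 ligaments.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Sketch of the core geometric computation: ligament length and strain as a
# function of the three ankle rotations.  Attachment coordinates and the
# reference length are invented.
tibia_attach = np.array([0.0, 10.0, 40.0])          # mm, fixed (proximal) attachment
talus_attach_local = np.array([5.0, 15.0, -5.0])    # mm, in the talus frame
rest_length = np.linalg.norm(tibia_attach - talus_attach_local)

def ligament_state(plantarflexion_deg, inversion_deg, rotation_deg):
    """Return (length, strain) for the given ankle angles (intrinsic x-y-z)."""
    rot = R.from_euler("xyz", [plantarflexion_deg, inversion_deg, rotation_deg],
                       degrees=True)
    moved = rot.apply(talus_attach_local)            # distal attachment after rotation
    length = np.linalg.norm(tibia_attach - moved)
    return length, (length - rest_length) / rest_length

for angles in [(0, 0, 0), (20, 0, 0), (20, 10, 5)]:
    length_mm, strain = ligament_state(*angles)
    print(angles, f"length = {length_mm:6.2f} mm, strain = {strain:+.3f}")
```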
User's manual for COAST 4: a code for costing and sizing tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sink, D. A.; Iwinski, E. M.
1979-09-01
The purpose of this report is to document the computer program COAST 4 for the user/analyst. COAST, COst And Size Tokamak reactors, provides complete and self-consistent size models for the engineering features of D-T burning tokamak reactors and associated facilities involving a continuum of performance, including highly beam-driven through ignited plasma devices. TNS (The Next Step) devices with no tritium breeding or electrical power production are handled, as well as power-producing and fissile-producing fusion-fission hybrid reactors. The code has been normalized with a TFTR calculation which is consistent with cost, size, and performance data published in the conceptual design report for that device. Information on code development, computer implementation and detailed user instructions are included in the text.
Rehearsing decisions may help teenagers: an evaluation of a simulation game.
Alemi, F; Cherry, F; Meffert, G
1989-01-01
This paper presents a new approach to preventing adolescent pregnancy. Information alone is not sufficient to prevent teenage pregnancy. The teenager's ability to choose and remain committed to a decision also needs to be developed. Because decision-making skills are best learned through practice in an environment with frequent feedback, we have developed a computer game which simulates the consequences of different sexual roles. In addition, the game is intended to increase communication about sex between teenagers and their role models (peers, teachers and/or parents). Increased communication is expected to reduce the feeling of guilt and lead to either consistent abstention from sex or consistent contraceptive use. The paper reports on the development of the computer game and the preliminary evaluation of its impact.
Comparing visual representations across human fMRI and computational vision
Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.
2013-01-01
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
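The central step of the searchlight analysis, comparing a neural representational dissimilarity matrix (RDM) with a model RDM, can be sketched as follows; random arrays stand in for the fMRI patterns and the computer-vision features.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Core step of representational similarity analysis: build one dissimilarity
# matrix (RDM) from voxel patterns within a searchlight sphere and one from a
# computer-vision model's features for the same stimuli, then compare their
# upper triangles with Spearman correlation.  Random arrays stand in for the
# real data.
rng = np.random.default_rng(4)
n_stimuli, n_voxels, n_features = 60, 120, 256

neural_patterns = rng.normal(size=(n_stimuli, n_voxels))
model_features = rng.normal(size=(n_stimuli, n_features))

neural_rdm = pdist(neural_patterns, metric="correlation")   # condensed upper triangle
model_rdm = pdist(model_features, metric="correlation")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho = {rho:.3f}, p = {p:.3f}")
```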
A computer model for the 30S ribosome subunit.
Kuntz, I D; Crippen, G M
1980-01-01
We describe a computer-generated model for the locations of the 21 proteins of the 30S subunit of the E. coli ribosome. The model uses a new method of incorporating experimental measurements based on a mathematical technique called distance geometry. In this paper, we use data from two sources: immunoelectron microscopy and neutron-scattering studies. The data are generally self-consistent and lead to a set of relatively well-defined structures in which individual protein coordinates differ by approximately 20 Å from one structure to another. Two important features of this calculation are the use of extended proteins rather than just the centers of mass, and the ability to confine the protein locations within an arbitrary boundary surface so that only solutions with an approximate 30S "shape" are permitted. PMID:7020786
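The basic embedding step of distance geometry can be illustrated with classical multidimensional scaling on synthetic pairwise distances; the published method additionally treats extended proteins and confines solutions to a 30S-shaped boundary, which this sketch does not attempt.

```python
import numpy as np

# Basic distance-geometry embedding step: recover 3D coordinates consistent
# with a matrix of pairwise distances via classical multidimensional scaling.
# The synthetic "protein centres" below replace the real immunoelectron
# microscopy / neutron-scattering distance data.
rng = np.random.default_rng(5)
true_xyz = rng.uniform(-100, 100, size=(21, 3))           # 21 synthetic protein centres (Å)
D = np.linalg.norm(true_xyz[:, None, :] - true_xyz[None, :, :], axis=-1)
D = D + rng.normal(0, 2.0, D.shape)                       # add measurement noise
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J                               # double-centred Gram matrix
eigval, eigvec = np.linalg.eigh(B)
idx = np.argsort(eigval)[::-1][:3]                        # three largest eigenvalues
coords = eigvec[:, idx] * np.sqrt(np.clip(eigval[idx], 0, None))

# Compare recovered and input inter-protein distances
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print("RMS distance error (Å):", np.sqrt(np.mean((D_rec - D) ** 2)))
```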
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
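As a rough companion to the abstract, the sketch below shows the standard way a diffusion coefficient is extracted from simulated trajectories via the Einstein relation, using a plain random walk as a stand-in for the billiard dynamics; none of the parameters correspond to the flower-shaped model itself.

```python
# Generic sketch of extracting a diffusion coefficient from simulated
# trajectories via the Einstein relation MSD(t) ~ 2 d D t; a plain Gaussian
# random walk stands in for the billiard dynamics, and none of the parameters
# correspond to the flower-shaped model.
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps, d = 2000, 1000, 2
steps = rng.normal(scale=1.0, size=(n_walkers, n_steps, d))
traj = np.cumsum(steps, axis=1)                      # positions over time

msd = np.mean(np.sum(traj ** 2, axis=2), axis=0)     # mean-squared displacement
t = np.arange(1, n_steps + 1)
slope = np.polyfit(t, msd, 1)[0]
D = slope / (2 * d)                                  # Einstein relation
print(D)   # close to 0.5 for unit-variance Gaussian steps in two dimensions
```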
Asynchronous Data Retrieval from an Object-Oriented Database
NASA Astrophysics Data System (ADS)
Gilbert, Jonathan P.; Bic, Lubomir
We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.
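To make the message-driven idea concrete, the following toy sketch lets each node react only to messages in its own mailbox and forward partial results along its outgoing arcs, with no shared memory or central controller. The node names, arcs, and message format are invented for illustration and are not taken from the model described above.

```python
# Toy illustration of purely message-driven query processing: each node owns a
# mailbox, reacts only to messages, and forwards them along its outgoing arcs.
# The node names, arcs, and message format are invented, not from the paper.
from collections import deque

graph = {                       # node -> outgoing arcs (directed)
    "Department": ["Employee"],
    "Employee": ["Salary"],
    "Salary": [],
}

mailboxes = {node: deque() for node in graph}
mailboxes["Department"].append({"query": "avg_salary", "path": []})

results = []
while any(mailboxes.values()):                  # run until every mailbox drains
    for node, box in mailboxes.items():
        if not box:
            continue
        msg = box.popleft()
        msg = {**msg, "path": msg["path"] + [node]}
        if graph[node]:                         # forward to successor nodes
            for target in graph[node]:
                mailboxes[target].append(msg)
        else:                                   # leaf node: emit a result
            results.append(msg)

print(results)   # e.g. the path the query message travelled through the network
```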
Crashworthiness of light aircraft fuselage structures: A numerical and experimental investigation
NASA Technical Reports Server (NTRS)
Nanyaro, A. P.; Tennyson, R. C.; Hansen, J. S.
1984-01-01
The dynamic behavior of aircraft fuselage structures subject to various impact conditions was investigated. An analytical model was developed based on a self-consistent finite element (CFE) formulation utilizing shell, curved beam, and stringer type elements. Equations of motion were formulated and linearized (i.e., for small displacements), although material nonlinearity was retained to treat local plastic deformation. The equations were solved using the implicit Newmark-Beta method with a frontal solver routine. Stiffened aluminum fuselage models were also tested in free flight using the UTIAS pendulum crash test facility. Data were obtained on dynamic strains, g-loads, and transient deformations (using high speed photography in the latter case) during the impact process. Correlations between tests and predicted results are presented, together with computer graphics, based on the CFE model. These results include level and oblique angle impacts as well as the free-flight crash test. Comparisons with a hybrid, lumped mass finite element computer model demonstrate that the CFE formulation provides the best overall agreement with impact test data for comparable computing costs.
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
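For readers unfamiliar with ABC, the plain rejection form of the algorithm is sketched below; the error-calibrated, probabilistic variant derived in the paper is not reproduced. The prior, simulator, summary statistics, and tolerance in the sketch are arbitrary illustrative choices.

```python
# Plain rejection-ABC sketch for orientation only; the paper's error-calibrated,
# probabilistic variant is not reproduced here. The prior, simulator, summary
# statistics, and tolerance are arbitrary illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=3.0, scale=1.0, size=50)    # stand-in "field data"

def summarize(x):
    return np.array([x.mean(), x.std()])

def simulate(theta):
    return rng.normal(loc=theta, scale=1.0, size=50)  # stand-in simulation model

epsilon, accepted = 0.2, []
for _ in range(20000):
    theta = rng.uniform(-10.0, 10.0)                  # draw from the prior
    dist = np.linalg.norm(summarize(simulate(theta)) - summarize(observed))
    if dist < epsilon:                                # keep draws whose simulated
        accepted.append(theta)                        # summaries match the data

print(np.mean(accepted), len(accepted))               # approximate posterior mean
```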
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model generically representing a navigation system is formulated. The scenario simulated consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 0.3 g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance, and control system of a UH-1H helicopter.
Systems Epidemiology: What’s in a Name?
Dammann, O.; Gray, P.; Gressens, P.; Wolkenhauer, O.; Leviton, A.
2014-01-01
Systems biology is an interdisciplinary effort to integrate molecular, cellular, tissue, organ, and organism levels of function into computational models that facilitate the identification of general principles. Systems medicine adds a disease focus. Systems epidemiology adds yet another level consisting of antecedents that might contribute to the disease process in populations. In etiologic and prevention research, systems-type thinking about multiple levels of causation will allow epidemiologists to identify contributors to disease at multiple levels as well as their interactions. In public health, systems epidemiology will contribute to the improvement of syndromic surveillance methods. We encourage the creation of computational simulation models that integrate information about disease etiology, pathogenetic data, and the expertise of investigators from different disciplines. PMID:25598870
Mind, Machine, and Creativity: An Artist's Perspective.
Sundararajan, Louise
2014-06-01
Harold Cohen is a renowned painter who has developed a computer program, AARON, to create art. While AARON has been hailed as one of the most creative AI programs, Cohen consistently rejects the claims of machine creativity. Questioning the possibility for AI to model human creativity, Cohen suggests in so many words that the human mind takes a different route to creativity, a route that privileges the relational, rather than the computational, dimension of cognition. This unique perspective on the tangled web of mind, machine, and creativity is explored by an application of three relational models of the mind to an analysis of Cohen's talks and writings, which are available on his website: www.aaronshome.com.
Johnston, Matthew D
2017-12-01
Recent work of Johnston et al. has produced sufficient conditions on the structure of a chemical reaction network which guarantee that the corresponding discrete state space system exhibits an extinction event. The conditions consist of a series of systems of equalities and inequalities on the edges of a modified reaction network called a domination-expanded reaction network. In this paper, we present a computational implementation of these conditions written in Python and apply the program on examples drawn from the biochemical literature. We also run the program on 458 models from the European Bioinformatics Institute's BioModels Database and report our results. Copyright © 2017 Elsevier Inc. All rights reserved.
Investigation of water droplet trajectories within the NASA icing research tunnel
NASA Technical Reports Server (NTRS)
Reehorst, Andrew; Ibrahim, Mounir
1995-01-01
Water droplet trajectories within the NASA Lewis Research Center's Icing Research Tunnel (IRT) were studied through computer analysis. Of interest was the influence of the wind tunnel contraction and wind tunnel model blockage on the water droplet trajectories. The computer analysis was carried out with a program package consisting of a three-dimensional potential panel code and a three-dimensional droplet trajectory code. The wind tunnel contraction was found to influence the droplet size distribution and liquid water content distribution across the test section from that at the inlet. The wind tunnel walls were found to have negligible influence upon the impingement of water droplets upon a wing model.
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.
1992-01-01
The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand alone analysis codes that result in the streamlining of the exchange of data between programs, reducing errors and improving efficiency.
Development of a realistic stress analysis for fatigue analysis of notched composite laminates
NASA Technical Reports Server (NTRS)
Humphreys, E. A.; Rosen, B. W.
1979-01-01
A finite element stress analysis, which consists of a membrane and interlaminar shear spring analysis, was developed. This approach was utilized in order to model physically realistic failure mechanisms while maintaining a high degree of computational economy. The accuracy of the stress analysis predictions is verified through comparisons with other solutions to the composite laminate edge effect problem. The stress analysis model was incorporated into an existing fatigue analysis methodology and the entire procedure computerized. A fatigue analysis is performed upon a square laminated composite plate with a circular central hole. A complete description and user's guide for the computer code FLAC (Fatigue of Laminated Composites) are included as an appendix.
Mind, Machine, and Creativity: An Artist's Perspective
Sundararajan, Louise
2014-01-01
Harold Cohen is a renowned painter who has developed a computer program, AARON, to create art. While AARON has been hailed as one of the most creative AI programs, Cohen consistently rejects the claims of machine creativity. Questioning the possibility for AI to model human creativity, Cohen suggests in so many words that the human mind takes a different route to creativity, a route that privileges the relational, rather than the computational, dimension of cognition. This unique perspective on the tangled web of mind, machine, and creativity is explored by an application of three relational models of the mind to an analysis of Cohen's talks and writings, which are available on his website: www.aaronshome.com. PMID:25541564
DOT National Transportation Integrated Search
2010-11-01
This study developed an integrated control model to contend with the impact of off-ramp queue spillback on both the freeway and arterials within the interchange control boundaries. The entire control system consists of two levels that compute the opt...
ERIC Educational Resources Information Center
Mayer, Richard E.; Moreno, Roxana
1998-01-01
Multimedia learners (n=146 college students) were able to integrate words and computer-presented pictures more easily when the words were presented aurally rather than visually. This split-attention effect is consistent with a dual-processing model of working memory. (SLD)
A Graphical Database Interface for Casual, Naive Users.
ERIC Educational Resources Information Center
Burgess, Clifford; Swigger, Kathleen
1986-01-01
Describes the design of a database interface for infrequent users of computers which consists of a graphical display of a model of a database and a natural language query language. This interface was designed for and tested with physicians at the University of Texas Health Science Center in Dallas. (LRW)
MODELING THE NEAR-UV BAND OF GK STARS. II. NON-LTE MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ian Short, C.; Campbell, Eamonn A.; Pickup, Heather
We present a grid of atmospheric models and synthetic spectral energy distributions (SEDs) for late-type dwarfs and giants of solar and 1/3 solar metallicity with many opacity sources computed in self-consistent non-local thermodynamic equilibrium (NLTE), and compare them to the LTE grid of Short and Hauschildt (Paper I). We describe, for the first time, how the NLTE treatment affects the thermal equilibrium of the atmospheric structure (the T(τ) relation) and the SED as a finely sampled function of T_eff, log g, and [A/H] among solar metallicity and mildly metal-poor red giants. We compare the computed SEDs to the library of observed spectrophotometry described in Paper I across the entire visible band, and in the blue and red regions of the spectrum separately. We find that for the giants of both metallicities, the NLTE models yield best-fit T_eff values that are 30-90 K lower than those provided by LTE models, while providing greater consistency between log g values, and, for Arcturus, T_eff values, fitted separately to the blue and red spectral regions. There is marginal evidence that NLTE models give more consistent best-fit T_eff values between the red and blue bands for earlier spectral classes among the solar metallicity GK giants than they do for the later classes, but no model fits the blue-band spectrum well for any class. For the two dwarf spectral classes that we are able to study, the effect of NLTE on derived parameters is less significant. We compare our derived T_eff values to several other spectroscopic and photometric T_eff calibrations for red giants, including one that is less model dependent based on the infrared flux method (IRFM). We find that the NLTE models provide slightly better agreement to the IRFM calibration among the warmer stars in our sample, while giving approximately the same level of agreement for the cooler stars.
NASA Astrophysics Data System (ADS)
Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em
2017-01-01
Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
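The auxiliary-variable trick can be illustrated in the simplest setting: a single exponential memory kernel, whose convolution integral satisfies an ordinary differential equation and can therefore be carried as one extra Markovian variable. The sketch below compares the direct (history-storing) evaluation with the auxiliary update; it omits thermal noise, fluctuation-dissipation consistency, and the multi-mode kernels used in the actual LE-AUX/DPD-AUX models, and its parameters are arbitrary.

```python
# Sketch of the auxiliary-variable idea in its simplest form: an exponential
# memory force F(t) = -int (g/tau) exp(-(t-s)/tau) v(s) ds obeys
# dF/dt = -F/tau - (g/tau) v, so it can be carried as one extra Markovian
# variable instead of a stored history. Noise and multi-mode kernels of the
# actual LE-AUX/DPD-AUX models are omitted; parameters are arbitrary.
import numpy as np

g, tau, dt, n = 1.0, 0.5, 1e-3, 2000           # kernel strength, memory time, step, steps
v, F_aux = 1.0, 0.0
vs, ts = [v], [0.0]

for i in range(1, n + 1):
    F_aux += dt * (-F_aux / tau - (g / tau) * v)   # cheap Markovian update of the force
    v += dt * F_aux                                # velocity driven by the memory force
    vs.append(v)
    ts.append(i * dt)

# direct, history-storing evaluation of the same memory force at the final time
vs, ts = np.array(vs), np.array(ts)
F_direct = -np.sum((g / tau) * np.exp(-(ts[-1] - ts) / tau) * vs) * dt
print(F_aux, F_direct)                             # agree to the integration error
```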
NASA Technical Reports Server (NTRS)
Hochhalter, Jake D.; Littlewood, David J.; Christ, Robert J., Jr.; Veilleux, M. G.; Bozek, J. E.; Ingraffea, A. R.; Maniatty, Antionette M.
2010-01-01
The objective of this paper is to develop further a framework for computationally modeling microstructurally small fatigue crack growth in AA 7075-T651 [1]. The focus is on the nucleation event, when a crack extends from within a second-phase particle into a surrounding grain, since this has been observed to be an initiating mechanism for fatigue crack growth in this alloy. It is hypothesized that nucleation can be predicted by computing a non-local nucleation metric near the crack front. The hypothesis is tested by employing a combination of experimentation and finite element modeling in which various slip-based and energy-based nucleation metrics are tested for validity, where each metric is derived from a continuum crystal plasticity formulation. To investigate each metric, a non-local procedure is developed for the calculation of nucleation metrics in the neighborhood of a crack front. Initially, an idealized baseline model consisting of a single grain containing a semi-ellipsoidal surface particle is studied to investigate the dependence of each nucleation metric on lattice orientation, number of load cycles, and non-local regularization method. This is followed by a comparison of experimental observations and computational results for microstructural models constructed by replicating the observed microstructural geometry near second-phase particles in fatigue specimens. It is found that orientation strongly influences the direction of slip localization and, as a result, influences the nucleation mechanism. Also, the baseline models, replication models, and past experimental observation consistently suggest that a set of particular grain orientations is most likely to nucleate fatigue cracks. It is found that a continuum crystal plasticity model and a non-local nucleation metric can be used to predict the nucleation event in AA 7075-T651. However, nucleation metric threshold values that correspond to various nucleation governing mechanisms must be calibrated.
Energy regeneration model of self-consistent field of electron beams into electric power
NASA Astrophysics Data System (ADS)
Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.
2016-04-01
We consider physico-mathematical models of electric processes in electron beams, the conversion of beam parameters into electric power values, and their transformation into a users' electric power grid (onboard spacecraft network). We perform computer simulations that validate the high energy efficiency of the studied processes for application in electric power generation technology as well as in electric power plants and propulsion installations onboard spacecraft.
Vortex Lattice UXO Mobility Model Integration
2015-03-01
...predictions of the fate and transport of a broad-field UXO population are extremely sensitive to the initial state of that population, specifically: the...limit the model’s computational domain. This revised model software was built on the concept of interconnected geomorphic control cells consisting of
High-End Climate Science: Development of Modeling and Related Computing Capabilities
2000-12-01
toward strengthening research on key scientific issues. The Program has supported research that has led to substantial increases in knowledge, improved...provides overall direction and executive oversight of the USGCRP. Within this framework, agencies manage and coordinate Federally supported scientific...critical for the U.S. Global Change Research Program. Such models can be used to look backward to test the consistency of our knowledge of Earth system
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
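A stripped-down version of the multiple-imputation idea for a predictor censored at a lower detection limit is sketched below: draw the censored values from a truncated distribution, refit the model on each completed data set, and average the estimates. This is only a schematic of the general approach; the paper's estimator, its imputation model, and its variance formula are more involved, and the stand-in "GLM fit" here is a deliberately crude placeholder.

```python
# Stripped-down sketch of multiple imputation for a predictor censored below a
# detection limit; the paper's estimator and variance formula are more involved.
# The normal imputation model, the logistic data-generating process, and the
# crude "fit" below are all assumptions made purely for illustration.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n, dl, M = 500, -0.5, 20                        # sample size, detection limit, imputations
x_true = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-x_true)))
x_obs = np.where(x_true < dl, np.nan, x_true)   # censored values recorded as missing

def fit_slope(x, y):
    """Crude stand-in for a GLM fit: a moment-based linear slope."""
    return np.cov(x, y, ddof=0)[0, 1] / np.var(x)

mu, sd = np.nanmean(x_obs), np.nanstd(x_obs)    # rough imputation-model parameters
censored = np.isnan(x_obs)
estimates = []
for _ in range(M):
    x_imp = x_obs.copy()
    # draw censored values from a normal truncated to (-inf, detection limit)
    x_imp[censored] = truncnorm.rvs(-np.inf, (dl - mu) / sd, loc=mu, scale=sd,
                                    size=censored.sum(), random_state=rng)
    estimates.append(fit_slope(x_imp, y))

# Rubin's rules: the pooled point estimate is the average over imputations
# (the within/between variance combination is omitted in this sketch).
print(np.mean(estimates))
```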
Cheng, Jeffrey Tao; Hamade, Mohamad; Merchant, Saumil N.; Rosowski, John J.; Harrington, Ellery; Furlong, Cosme
2013-01-01
Sound-induced motions of the surface of the tympanic membrane (TM) were measured using stroboscopic holography in cadaveric human temporal bones at frequencies between 0.2 and 18 kHz. The results are consistent with the combination of standing-wave-like modal motions and traveling-wave-like motions on the TM surface. The holographic techniques also quantified sound-induced displacements of the umbo of the malleus, as well as volume velocity of the TM. These measurements were combined with sound-pressure measurements near the TM to compute middle-ear input impedance and power reflectance at the TM. The results are generally consistent with other published data. A phenomenological model that behaved qualitatively like the data was used to quantify the relative magnitude and spatial frequencies of the modal and traveling-wave-like displacement components on the TM surface. This model suggests the modal magnitudes are generally larger than those of the putative traveling waves, and the computed wave speeds are much slower than wave speeds predicted by estimates of middle-ear delay. While the data are inconsistent with simple modal displacements of the TM, an alternate model based on the combination of modal motions in a lossy membrane can also explain these measurements without invoking traveling waves. PMID:23363110
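One common way such quantities are defined is sketched below: with the input impedance Z and the ear-canal characteristic impedance Z0, the pressure reflection coefficient is (Z - Z0)/(Z + Z0) and the power reflectance is its squared magnitude. The numerical values in the sketch are placeholders, not measurements from the study.

```python
# Hedged sketch: one common definition of power reflectance from a measured
# input impedance, R = |(Z - Z0) / (Z + Z0)|**2, with Z0 the characteristic
# impedance of the ear canal. The canal area and impedance value below are
# placeholders, not values from the paper.
import numpy as np

rho, c = 1.2, 343.0                  # air density (kg/m^3) and sound speed (m/s)
area = 4.4e-5                        # assumed ear-canal cross-section (m^2)
Z0 = rho * c / area                  # characteristic acoustic impedance

Z_input = 2.0e7 + 1.5e7j             # placeholder complex input impedance (Pa*s/m^3)
gamma = (Z_input - Z0) / (Z_input + Z0)
power_reflectance = abs(gamma) ** 2
print(power_reflectance)
```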
How should Fitts' Law be applied to human-computer interaction?
NASA Technical Reports Server (NTRS)
Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.
1992-01-01
The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
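For reference, a common form of Fitts' Law is MT = a + b log2(2A/W), fitted by least squares against the index of difficulty; the paper's argument amounts to choosing W (horizontal extent, vertical extent, or diagonal of the target) according to the movement sequence. The sketch below uses fabricated movement times purely to show the fitting step.

```python
# Hedged sketch of fitting Fitts' Law, MT = a + b * log2(2A / W). The amplitudes,
# target sizes, and movement times below are fabricated for illustration; the
# paper's point is that the appropriate W depends on the movement sequence,
# which amounts to recomputing the index of difficulty with a different W.
import numpy as np

A = np.array([64, 128, 256, 512])        # movement amplitudes (pixels), assumed
W = np.array([16, 16, 32, 32])           # candidate target sizes (pixels), assumed
MT = np.array([420, 510, 560, 650])      # movement times (ms), fabricated

ID = np.log2(2 * A / W)                  # index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)             # least-squares fit of MT = a + b*ID
print(f"a = {a:.1f} ms, b = {b:.1f} ms/bit")
```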
Cerebral Microcirculation and Oxygen Tension in the Human Secondary Cortex
Linninger, A. A.; Gould, I. G.; Marinnan, T.; Hsu, C.-Y.; Chojecki, M.; Alaraj, A.
2013-01-01
The three-dimensional spatial arrangement of the cortical microcirculatory system is critical for understanding oxygen exchange between blood vessels and brain cells. A three-dimensional computer model of a 3 × 3 × 3 mm³ subsection of the human secondary cortex was constructed to quantify oxygen advection in the microcirculation, tissue oxygen perfusion, and consumption in the human cortex. This computer model accounts for all arterial, capillary and venous blood vessels of the cerebral microvascular bed as well as brain tissue occupying the extravascular space. Microvessels were assembled with optimization algorithms emulating angiogenic growth; a realistic capillary bed was built with space filling procedures. The extravascular tissue was modeled as a porous medium supplied with oxygen by advection–diffusion to match normal metabolic oxygen demand. The resulting synthetic computer generated network matches prior measured morphometrics and fractal patterns of the cortical microvasculature. This morphologically accurate, physiologically consistent, multi-scale computer network of the cerebral microcirculation predicts the oxygen exchange of cortical blood vessels with the surrounding gray matter. Oxygen tension subject to blood pressure and flow conditions was computed and validated for the blood as well as brain tissue. Oxygen gradients along arterioles, capillaries and veins agreed with in vivo trends observed recently in imaging studies within experimental tolerances and uncertainty. PMID:23842693
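The tissue oxygen transport described above is commonly written as an advection-diffusion balance with a consumption term; the generic form below is given only for orientation and is not quoted from the paper (symbols are conventional: c concentration, v perfusion velocity, D effective diffusivity, M(c) metabolic consumption).

```latex
% Generic advection--diffusion--consumption balance for tissue oxygen in a
% porous medium; not quoted from the paper, symbols are conventional.
\[
  \frac{\partial c}{\partial t} + \mathbf{v}\cdot\nabla c
    = \nabla\cdot\bigl(D\,\nabla c\bigr) - M(c)
\]
```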
Development of FullWave: Hot Plasma RF Simulation Tool
NASA Astrophysics Data System (ADS)
Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei
2017-10-01
A full-wave simulation tool modeling RF fields in hot, inhomogeneous, magnetized plasma is being developed. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures, and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximation of derivatives on an adaptive cloud of computational points; a model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of the hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.
CFD and PIV Analysis of Hemodynamics in a Growing Intracranial Aneurysm
Raschi, Marcelo; Mut, Fernando; Byrne, Greg; Putman, Christopher M.; Tateshima, Satoshi; Viñuela, Fernando; Tanoue, Tetsuya; Tanishita, Kazuo; Cebral, Juan R.
2011-01-01
Hemodynamics is thought to be a fundamental factor in the formation, progression and rupture of cerebral aneurysms. Understanding these mechanisms is important to improve their rupture risk assessment and treatment. In this study we analyze the blood flow field in a growing cerebral aneurysm using experimental particle image velocimetry (PIV) and computational fluid dynamics (CFD) techniques. Patient-specific models were constructed from longitudinal 3D computed tomography angiography (CTA) images acquired at one-year intervals. Physical silicone models were constructed from the CTA images using rapid prototyping techniques and pulsatile flow fields were measured with PIV. Corresponding CFD models were created and run under matching flow conditions. Both flow fields were aligned, interpolated, and compared qualitatively by inspection and quantitatively by defining similarity measures between the PIV and CFD vector fields. Results showed that both flow fields were in good agreement. Specifically, both techniques provided consistent representations of the main intra-aneurysmal flow structures, and their change during the geometric evolution of the aneurysm. Despite differences observed mainly in the near wall region and the inherent limitations of each technique, the information derived is consistent and can be used to study the role of hemodynamics in the natural history of intracranial aneurysms. PMID:22548127
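A generic way to quantify agreement between two aligned vector fields, in the spirit of the similarity measures mentioned above, is sketched below: compare direction via cosine similarity and magnitude via a relative difference at each point. The paper's actual similarity measures may differ; the arrays here are synthetic stand-ins for interpolated PIV and CFD fields.

```python
# Generic sketch of comparing two aligned velocity fields (e.g., PIV vs. CFD)
# point by point; the similarity measures actually used in the paper may differ.
# The arrays are synthetic stand-ins for interpolated, co-registered fields.
import numpy as np

def field_similarity(u, v, eps=1e-12):
    """u, v: arrays of shape (n_points, 3). Returns mean cosine similarity of
    direction and mean relative magnitude discrepancy."""
    dot = np.sum(u * v, axis=1)
    nu, nv = np.linalg.norm(u, axis=1), np.linalg.norm(v, axis=1)
    cos = dot / (nu * nv + eps)                      # directional agreement
    mag = np.abs(nu - nv) / (0.5 * (nu + nv) + eps)  # magnitude discrepancy
    return cos.mean(), mag.mean()

rng = np.random.default_rng(0)
piv = rng.normal(size=(1000, 3))
cfd = piv + 0.1 * rng.normal(size=(1000, 3))         # nearly identical field
print(field_similarity(piv, cfd))
```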
Ventilatory Patterning in a Mouse Model of Stroke
Koo, Brian B; Strohl, Kingman P; Gillombardo, Carl B; Jacono, Frank J
2010-01-01
Cheyne-Stokes respiration (CSR) is a breathing pattern characterized by waxing and waning of breath volume and frequency, and is often recognized following stroke, when causal pathways are often obscure. We used an animal model to address the hypothesis that cerebral infarction is a mechanism for producing breathing instability. Fourteen male A/J mice underwent either a stroke (n=7) or sham (n=7) procedure. Ventilation was measured using whole body plethysmography. Respiratory rate (RR), tidal volume (VT) and minute ventilation (Ve) mean values and coefficients of variation were computed for ventilation and oscillatory behavior. In addition, the ventilatory data were computationally fit to models to quantify autocorrelation, mutual information, sample entropy and a nonlinear complexity index. At the same time post procedure, breathing in stroke animals, compared with sham animals, consisted of a lower RR and autocorrelation, and higher coefficients of variation for VT and Ve. Mutual information and the nonlinear complexity index were higher in breathing following stroke, which also demonstrated a waxing/waning pattern. The absence of stroke in the sham animals was verified anatomically. We conclude that the ventilatory pattern following cerebral infarction demonstrated increased variability with increased nonlinear patterning and a waxing/waning pattern, consistent with CSR. PMID:20472101
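Of the measures named above, sample entropy is the most algorithmic; a standard SampEn(m, r) implementation is sketched below with conventional parameter defaults. The other measures (mutual information, the nonlinear complexity index) are not reproduced, and the example signal is illustrative noise rather than plethysmography data.

```python
# Sketch of sample entropy, SampEn(m, r), one of the measures named in the
# abstract; the other measures are not reproduced. The m and r defaults are
# conventional choices, and the example signal is illustrative noise.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                              # common tolerance choice
    n = len(x)

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)   # Chebyshev distance
            count += int(np.sum(d <= r)) - 1           # exclude the self-match
        return count

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=500)))
```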
Advancing Science through Mining Libraries, Ontologies, and Communities*
Evans, James A.; Rzhetsky, Andrey
2011-01-01
Life scientists today cannot hope to read everything relevant to their research. Emerging text-mining tools can help by identifying topics and distilling statements from books and articles with increased accuracy. Researchers often organize these statements into ontologies, consistent systems of reality claims. Like scientific thinking and interchange, however, text-mined information (even when accurately captured) is complex, redundant, sometimes incoherent, and often contradictory: it is rooted in a mixture of only partially consistent ontologies. We review work that models scientific reason and suggest how computational reasoning across ontologies and the broader distribution of textual statements can assess the certainty of statements and the process by which statements become certain. With the emergence of digitized data regarding networks of scientific authorship, institutions, and resources, we explore the possibility of accounting for social dependences and cultural biases in reasoning models. Computational reasoning is starting to fill out ontologies and flag internal inconsistencies in several areas of bioscience. In the not too distant future, scientists may be able to use statements and rich models of the processes that produced them to identify underexplored areas, resurrect forgotten findings and ideas, deconvolute the spaghetti of underlying ontologies, and synthesize novel knowledge and hypotheses. PMID:21566119
The Primary Care Computer Simulation: Optimal Primary Care Manager Empanelment.
1997-05-01
explored in which a team consisted of two providers, two nurses, and a nurse aide. Each team had a specific exam room assigned to them. Additionally, a...team consisting of one provider, one nurse, and one nurse aide was simulated. The model also examined the effects of adding two exam rooms. The study...minutes. The optimal solution, which reduced patient time to below 90 minutes, was the mix of one provider, a nurse, and a nurse aide in which each
Simplified and refined structural modeling for economical flutter analysis and design
NASA Technical Reports Server (NTRS)
Ricketts, R. H.; Sobieszczanski, J.
1977-01-01
A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.