ERIC Educational Resources Information Center
Springer, Michael T.
2014-01-01
Several articles suggest how to incorporate computer models into the organic chemistry laboratory, but relatively few papers discuss how to incorporate these models broadly into the organic chemistry lecture. Previous research has suggested that "manipulating" physical or computer models enhances student understanding; this study…
43 CFR 11.40 - What are type A procedures?
Code of Federal Regulations, 2010 CFR
2010-10-01
43 CFR 11.40 (Public Lands: Interior; Office of the Secretary of the Interior; Natural Resource Damage Assessments): the type A procedure for coastal and marine environments incorporates a computer model called the Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME), and the type A procedure for Great Lakes environments incorporates a computer model called the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE).
43 CFR 11.40 - What are type A procedures?
Code of Federal Regulations, 2011 CFR
2011-10-01
43 CFR 11.40 (Public Lands: Interior; Office of the Secretary of the Interior; Natural Resource Damage Assessments): the type A procedure for coastal and marine environments incorporates a computer model called the Natural Resource Damage Assessment Model for Coastal and Marine Environments (NRDAM/CME), and the type A procedure for Great Lakes environments incorporates a computer model called the Natural Resource Damage Assessment Model for Great Lakes Environments (NRDAM/GLE).
ERIC Educational Resources Information Center
Shacham, Mordechai; Cutlip, Michael B.; Brauner, Neima
2009-01-01
A continuing challenge to the undergraduate chemical engineering curriculum is the time-effective incorporation and use of computer-based tools throughout the educational program. Computing skills in academia and industry require some proficiency in programming and effective use of software packages for solving 1) single-model, single-algorithm…
A generic biogeochemical module for earth system models
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.-Y.; Leung, L. R.
2013-06-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate changes. Integration of these processes into earth system models (e.g. community land models - CLM), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) computational cost is prohibitively expensive to simulate biogeochemical processes in land models due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module with a generic algorithm and reaction database so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into the CLM model. To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems.
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
A SINDA thermal model using CAD/CAE technologies
NASA Technical Reports Server (NTRS)
Rodriguez, Jose A.; Spencer, Steve
1992-01-01
The approach to thermal analysis described in this paper is a technique that incorporates Computer Aided Design (CAD) and Computer Aided Engineering (CAE) to develop a thermal model that has the advantages of Finite Element Methods (FEM) without abandoning the unique advantages of Finite Difference Methods (FDM) in the analysis of thermal systems. The incorporation of existing CAD geometry, the powerful use of a pre- and post-processor, and the ability to do interdisciplinary analysis are described.
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.; Leung, L. R.
2013-11-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate changes. Integration of these processes into Earth system models (e.g., community land models (CLMs)), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) computational cost is prohibitively expensive to simulate biogeochemical processes in land models due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module, Next Generation BioGeoChemical Module (NGBGC), version 1.0, with a generic algorithm and reaction database so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter, and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into CLM. To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems. The method presented here could in theory be applied to simulate biogeochemical cycles in other Earth system models.
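To make the "generic algorithm and reaction database" idea concrete, here is a minimal sketch in which reactions are stored as data (stoichiometry plus rate law) and the ODE right-hand side is assembled automatically, so adding a process means adding a database entry rather than rewriting solver code. The pool names, stoichiometries, and rate constants below are invented placeholders, not NGBGC's actual schema.

```python
# Minimal sketch of a reaction-database-driven ODE assembler, in the spirit
# of the generic algorithm described above. Pools, stoichiometries, and
# rate constants are illustrative placeholders, not NGBGC's actual schema.
import numpy as np
from scipy.integrate import solve_ivp

POOLS = ["litter_C", "soil_C", "CO2"]
IDX = {p: i for i, p in enumerate(POOLS)}

# Each reaction: stoichiometry (pool -> coefficient) and a rate law f(y).
REACTIONS = [
    {   # litter decomposition: part respired, part stabilized in soil
        "stoich": {"litter_C": -1.0, "soil_C": 0.7, "CO2": 0.3},
        "rate": lambda y: 0.05 * y[IDX["litter_C"]],   # first-order decay
    },
    {   # soil organic matter mineralization
        "stoich": {"soil_C": -1.0, "CO2": 1.0},
        "rate": lambda y: 0.001 * y[IDX["soil_C"]],
    },
]

def rhs(t, y):
    """Assemble dy/dt from the reaction database; no hand-written ODEs."""
    dydt = np.zeros_like(y)
    for rxn in REACTIONS:
        r = rxn["rate"](y)
        for pool, coef in rxn["stoich"].items():
            dydt[IDX[pool]] += coef * r
    return dydt

y0 = np.array([100.0, 1000.0, 0.0])      # initial carbon stocks (g C m^-2)
sol = solve_ivp(rhs, (0.0, 3650.0), y0)  # integrate ~ten years
print(dict(zip(POOLS, sol.y[:, -1].round(2))))
```

Under this pattern, incorporating a phosphorus cycle amounts to adding P pools and reactions to the database, which is the spirit of the case study described above.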
Computational Flow Modeling of Hydrodynamics in Multiphase Trickle-Bed Reactors
NASA Astrophysics Data System (ADS)
Lopes, Rodrigo J. G.; Quinta-Ferreira, Rosa M.
2008-05-01
This study aims to incorporate the most recent multiphase models in order to investigate the hydrodynamic behavior of a trickle-bed reactor (TBR) in terms of pressure drop and liquid holdup. Taking into account transport phenomena such as mass and heat transfer, an Eulerian k-fluid model was developed from the volume averaging of the continuity and momentum equations and solved for a 3D representation of the catalytic bed. The computational fluid dynamics (CFD) model predicts hydrodynamic parameters quite well if good closures for fluid/fluid and fluid/particle interactions are incorporated in the multiphase model. Moreover, catalytic performance is investigated with the catalytic wet oxidation of a phenolic pollutant.
A New Biogeochemical Computational Framework Integrated within the Community Land Model
NASA Astrophysics Data System (ADS)
Fang, Y.; Li, H.; Liu, C.; Huang, M.; Leung, L.
2012-12-01
Terrestrial biogeochemical processes, particularly carbon cycle dynamics, have been shown to significantly influence regional and global climate changes. Modeling terrestrial biogeochemical processes within the land component of Earth System Models such as the Community Land model (CLM), however, faces three major challenges: 1) extensive efforts in modifying modeling structures and rewriting computer programs to incorporate biogeochemical processes with increasing complexity, 2) expensive computational cost to solve the governing equations due to numerical stiffness inherited from large variations in the rates of biogeochemical processes, and 3) lack of an efficient framework to systematically evaluate various mathematical representations of biogeochemical processes. To address these challenges, we introduce a new computational framework to incorporate biogeochemical processes into CLM, which consists of a new biogeochemical module with a generic algorithm and reaction database. New and updated biogeochemical processes can be incorporated into CLM without significant code modification. To address the stiffness issue, algorithms and criteria will be developed to identify fast processes, which will be replaced with algebraic equations and decoupled from slow processes. This framework can serve as a generic and user-friendly platform to test out different mechanistic process representations and datasets and gain new insight on the behavior of the terrestrial ecosystems in response to climate change in a systematic way.
Analysis of rocket engine injection combustion processes
NASA Technical Reports Server (NTRS)
Salmon, J. W.; Saltzman, D. H.
1977-01-01
Mixing methodology improvements for the JANNAF DER and CICM injection/combustion analysis computer programs were accomplished. Development of the ZOM plane prediction model was improved for installation into the new standardized DER computer program. An intra-element mixing model development approach was recommended for gas/liquid coaxial injection elements for possible future incorporation into the CICM computer program.
Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; ...
2017-11-06
The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. In order to resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. Here, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
NASA Astrophysics Data System (ADS)
Alleman, Coleman N.; Foulk, James W.; Mota, Alejandro; Lim, Hojun; Littlewood, David J.
2018-02-01
The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. In this study, the framework is applied to model incipient localization in tensile specimens during necking.
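As a concrete illustration of the Schwarz alternating method named above, the following sketch applies it to a 1D Poisson problem rather than crystal plasticity; the subdomain split and all numbers are illustrative. Each subdomain is solved separately, and each solve takes its Dirichlet boundary value from the other subdomain's latest iterate.

```python
# Hedged sketch of the Schwarz alternating method on -u'' = 1 over [0,1]
# with u(0) = u(1) = 0, standing in for the multiscale coupling described
# above: solve each overlapping subdomain separately, passing Dirichlet
# data between them until the solutions agree in the overlap.
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)

dom1 = (0, 60)     # nodes 0..59; overlaps dom2 on nodes 40..59
dom2 = (40, 101)   # nodes 40..100

def solve_subdomain(u, lo, hi, left_bc, right_bc):
    """Direct tridiagonal solve of -u'' = 1 on nodes lo..hi-1, Dirichlet BCs."""
    m = hi - lo - 2                                   # interior node count
    A = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1))
    b = np.full(m, h * h)                             # right-hand side f = 1
    b[0] += left_bc
    b[-1] += right_bc
    u[lo] = left_bc
    u[hi - 1] = right_bc
    u[lo + 1:hi - 1] = np.linalg.solve(A, b)

for _ in range(20):                                   # alternate between subdomains
    solve_subdomain(u, *dom1, 0.0, u[dom1[1] - 1])    # right BC from dom2's iterate
    solve_subdomain(u, *dom2, u[dom2[0]], 0.0)        # left BC from dom1's iterate

exact = 0.5 * x * (1.0 - x)                           # analytic solution
print("max error:", np.abs(u - exact).max())          # shrinks as iterations proceed
```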
NASA Astrophysics Data System (ADS)
Mistrík, Pavel; Ashmore, Jonathan
2009-02-01
We describe a large-scale computational model of electrical current flow in the cochlea, constructed with a flexible Modified Nodal Analysis algorithm so as to incorporate electrical components representing hair cells and the intercellular radial and longitudinal current flow. The model is used as a laboratory to study the effects of changing longitudinal gap-junctional coupling, and it shows how the cochlear microphonic spreads and how tuning is affected. The process for incorporating mechanical longitudinal coupling and feedback is described. We find a difference in tuning and attenuation depending on whether longitudinal or radial couplings are altered.
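For readers unfamiliar with Modified Nodal Analysis, the following toy sketch shows the basic stamp-and-solve machinery on a small resistor ladder with longitudinal coupling and radial leaks to ground. Component values are arbitrary placeholders, not the cochlear model's, and the full MNA treatment of voltage sources and capacitors is omitted.

```python
# Toy sketch of the nodal-analysis machinery underlying a Modified Nodal
# Analysis (MNA) model like the one described above: stamp each conductance
# into G and each source into i, then solve G v = i for node voltages.
import numpy as np

n_nodes = 4                      # node 0 is ground (not in the matrix)
G = np.zeros((n_nodes - 1, n_nodes - 1))
i_vec = np.zeros(n_nodes - 1)

def stamp_conductance(a, b, g):
    """Stamp conductance g (siemens) between nodes a and b (0 = ground)."""
    if a: G[a - 1, a - 1] += g
    if b: G[b - 1, b - 1] += g
    if a and b:
        G[a - 1, b - 1] -= g
        G[b - 1, a - 1] -= g

def stamp_current(a, b, amps):
    """Stamp a current source pushing `amps` from node a into node b."""
    if a: i_vec[a - 1] -= amps
    if b: i_vec[b - 1] += amps

# A small ladder: longitudinal coupling (1-2-3) plus radial leaks to
# ground, driven by a current source into node 1.
stamp_conductance(1, 2, 1e-3)    # longitudinal (gap-junction-like) coupling
stamp_conductance(2, 3, 1e-3)
stamp_conductance(1, 0, 1e-4)    # radial leak paths
stamp_conductance(2, 0, 1e-4)
stamp_conductance(3, 0, 1e-4)
stamp_current(0, 1, 1e-6)        # 1 uA drive

v = np.linalg.solve(G, i_vec)
print("node voltages (V):", v)   # the drive spreads and decays along the ladder
```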
Formulation of additional observables for ENTREE
NASA Technical Reports Server (NTRS)
Findlay, J. T.; Heck, M. L.
1980-01-01
The S-band X and Y angles, SAMS, and TACAN range and bearing were incorporated into the ENTREE software for use by experimenters at LaRC for entry trajectory reconstruction purposes. Background discussions present the need for this added capability. Formulations for the various observables are presented. Both north-south and east-west antenna mounts were provided for in the S-band angle computations. Sub-vehicle terrain height variations are included in the SAMS model. Local magnetic variations were incorporated for the TACAN bearing computations. The observable formulations are discussed in detail, along with the computation of the associated partial derivatives.
Modeling the Contribution of Phonotactic Cues to the Problem of Word Segmentation
ERIC Educational Resources Information Center
Blanchard, Daniel; Heinz, Jeffrey; Golinkoff, Roberta
2010-01-01
How do infants find the words in the speech stream? Computational models help us understand this feat by revealing the advantages and disadvantages of different strategies that infants might use. Here, we outline a computational model of word segmentation that aims both to incorporate cues proposed by language acquisition researchers and to…
Artificial Intelligence and the High School Computer Curriculum.
ERIC Educational Resources Information Center
Dillon, Richard W.
1993-01-01
Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.
1979-01-01
A model is presented for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer and which can be conveniently incorporated into standard computer-based circuit analysis programs. The formulation rests on measurements that may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.
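As an illustration of the kind of dc transistor formulation such circuit analysis programs accept, here is a generic Ebers-Moll sketch; the parameter values are textbook-style examples, not the paper's measured power-transistor data.

```python
# Hedged sketch of a dc BJT model of the general Ebers-Moll form, the kind
# of formulation circuit-analysis programs accept. Parameter values are
# generic examples, not the paper's measured power-transistor data.
import numpy as np

IS, alpha_F, alpha_R, VT = 1e-12, 0.99, 0.5, 0.02585  # sat. current, gains, kT/q (V)

def ebers_moll(vbe, vbc):
    """dc terminal currents of an npn BJT in Ebers-Moll form (amperes)."""
    i_f = IS / alpha_F * np.expm1(vbe / VT)           # forward diode current
    i_r = IS / alpha_R * np.expm1(vbc / VT)           # reverse diode current
    ic = alpha_F * i_f - i_r                          # collector current (in)
    ie = i_f - alpha_R * i_r                          # emitter current (out)
    ib = ie - ic                                      # KCL: base supplies the rest
    return ic, ib, ie

ic, ib, ie = ebers_moll(vbe=0.65, vbc=-4.35)          # forward-active point, VCE = 5 V
print(f"IC = {ic:.3e} A, IB = {ib:.3e} A, beta ~ {ic / ib:.1f}")
```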
NexGen PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models
We examine how the integration of evolutionary and ecological processes in population dynamics – an emerging framework in ecology – could be incorporated into population viability analysis (PVA). Driven by parallel, complementary advances in population genomics and computational ...
Higher order turbulence closure models
NASA Technical Reports Server (NTRS)
Amano, Ryoichi S.; Chai, John C.; Chen, Jau-Der
1988-01-01
Theoretical models are developed and numerical studies conducted on various types of flows, including both elliptic and parabolic. The purpose of this study is to find better higher order closure models for the computation of complex flows. This report summarizes three new achievements: (1) completion of the Reynolds-stress closure by developing a new pressure-strain correlation; (2) development of a parabolic code to compute jets and wakes; and (3) application to a flow through a 180 deg turnaround duct by adopting a boundary-fitted coordinate system. In the above-mentioned models, near-wall models are developed for the pressure-strain correlation and third moment, and incorporated into the transport equations. This addition improved the results considerably and is recommended for future computations. A new parabolic code to solve shear flows without coordinate transformations is developed and incorporated in this study. This code uses the structure of the finite volume method to solve the governing equations implicitly. The code was validated with the experimental results available in the literature.
Nontangent, Developed Contour Bulkheads for a Single-Stage Launch Vehicle
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Lepsch, Roger A., Jr.
2000-01-01
Dry weights for single-stage launch vehicles that incorporate nontangent, developed contour bulkheads are estimated and compared to a baseline vehicle with 1.414 aspect ratio ellipsoidal bulkheads. Weights, volumes, and heights of optimized bulkhead designs are computed using a preliminary design bulkhead analysis code. The dry weights of vehicles that incorporate the optimized bulkheads are predicted using a vehicle weights and sizing code. Two optimization approaches are employed. A structural-level method, where the vehicle's three major bulkhead regions are optimized separately and then incorporated into a model for computation of the vehicle dry weight, predicts a reduction of 4365 lb (2.2%) from the 200,679-lb baseline vehicle dry weight. In the second, vehicle-level, approach, the vehicle dry weight is the objective function for the optimization. For the vehicle-level analysis, modified bulkhead designs are analyzed and incorporated into the weights model for computation of a dry weight. The optimizer simultaneously manipulates design variables for all three bulkheads to reduce the dry weight. The vehicle-level analysis predicts a dry weight reduction of 5129 lb, a 2.6% reduction from the baseline weight. Based on these results, nontangent, developed contour bulkheads may provide substantial weight savings for single-stage vehicles.
Software For Computing Reliability Of Other Software
NASA Technical Reports Server (NTRS)
Nikora, Allen; Antczak, Thomas M.; Lyu, Michael
1995-01-01
Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
Shang, Eric K; Nathan, Derek P; Sprinkle, Shanna R; Fairman, Ronald M; Bavaria, Joseph E; Gorman, Robert C; Gorman, Joseph H; Jackson, Benjamin M
2013-09-10
Wall stress calculated using finite element analysis has been used to predict rupture risk of aortic aneurysms. Prior models often assume uniform aortic wall thickness and fusiform geometry. We examined the effects of including local wall thickness, intraluminal thrombus, calcifications, and saccular geometry on peak wall stress (PWS) in finite element analysis of descending thoracic aortic aneurysms. Computed tomographic angiograms of descending thoracic aortic aneurysms (n=10 total, 5 fusiform and 5 saccular) underwent 3-dimensional reconstruction with custom algorithms. For each aneurysm, an initial model was constructed with uniform wall thickness. Experimental models explored the addition of variable wall thickness, calcifications, and intraluminal thrombus. Each model was loaded with 120 mm Hg pressure, and von Mises PWS was computed. The mean PWS of uniform wall thickness models was 410 ± 111 kPa. The imposition of variable wall thickness increased PWS (481 ± 126 kPa, P<0.001). Although the addition of calcifications was not statistically significant (506 ± 126 kPa, P=0.07), the addition of intraluminal thrombus to the variable wall thickness model (359 ± 86 kPa, P<0.001) reduced PWS. A final model incorporating all features also reduced PWS (368 ± 88 kPa, P<0.001). Saccular geometry did not increase diameter-normalized stress in the final model (77 ± 7 versus 67 ± 12 kPa/cm, P=0.22). Incorporation of local wall thickness can significantly increase PWS in finite element analysis models of thoracic aortic aneurysms. Incorporating variable wall thickness, intraluminal thrombus, and calcifications significantly impacts computed PWS of thoracic aneurysms; sophisticated models may, therefore, be more accurate in assessing rupture risk. Saccular aneurysms did not demonstrate a significantly higher normalized PWS than fusiform aneurysms.
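For context, the wall-stress quantity reported above is the von Mises equivalent stress evaluated element by element; a minimal sketch of that post-processing step follows, with an arbitrary example tensor rather than patient-derived data.

```python
# Minimal sketch of the post-processing step named above: computing the
# von Mises equivalent stress from a Cauchy stress tensor, as done at each
# element of an FEA model. The tensor below is an arbitrary example.
import numpy as np

def von_mises(sigma):
    """Von Mises equivalent stress of a 3x3 symmetric stress tensor."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

sigma = np.array([[120.0,  30.0,   0.0],
                  [ 30.0, 200.0,  10.0],
                  [  0.0,  10.0,  80.0]])           # stresses in kPa
print(f"von Mises stress: {von_mises(sigma):.1f} kPa")
# Peak wall stress (PWS) is then the maximum of this quantity over all
# wall elements of the loaded model.
```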
There is international concern about chemicals that alter endocrine system function in humans and/or wildlife and subsequently cause adverse effects. We previously developed a mechanistic computational model of the hypothalamic-pituitary-gonadal (HPG) axis in female fathead minno...
The Acceptance of Computer Technology by Teachers in Early Childhood Education
ERIC Educational Resources Information Center
Jeong, Hye In; Kim, Yeolib
2017-01-01
This study investigated kindergarten teachers' decision-making process regarding the acceptance of computer technology. We incorporated the Technology Acceptance Model framework, in addition to computer self-efficacy, subjective norm, and personal innovativeness in education technology as external variables. The data were obtained from 160…
Stability and Hopf bifurcation for a delayed SLBRS computer virus model.
Zhang, Zizhen; Yang, Huizhong
2014-01-01
By incorporating into the SLBRS model the time delay due to the period during which computers use antivirus software to clean viruses, a delayed SLBRS computer virus model is proposed in this paper. The dynamical behaviors, which include local stability and Hopf bifurcation, are investigated by regarding the delay as the bifurcation parameter. Specifically, the direction and stability of the Hopf bifurcation are derived by applying the normal form method and center manifold theory. Finally, an illustrative example is also presented to verify our analytical results.
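A simple way to explore delay-induced behavior of the kind analyzed above is direct numerical integration of a delay differential equation with a history buffer (the method of steps). The sketch below uses a one-compartment stand-in with delayed cleaning, not the paper's actual SLBRS system; all rates are invented.

```python
# Hedged sketch of integrating a delay differential equation with a fixed
# step and a history buffer. The equation is a simplified stand-in, not the
# paper's SLBRS system: infection acts now, cleaning completes tau days late.
import numpy as np

beta, gamma, tau = 0.5, 0.2, 8.0       # infection rate, clean rate, delay
dt, T = 0.01, 400.0
n = int(T / dt)
lag = int(tau / dt)

I = np.empty(n + 1)
I[0] = 0.05                            # initial infected fraction

for k in range(n):
    I_tau = I[k - lag] if k >= lag else I[0]          # constant prehistory
    dI = beta * (1.0 - I[k]) * I[k] - gamma * I_tau   # delayed removal term
    I[k + 1] = I[k] + dt * dI                         # forward Euler step

print(f"infected fraction at t={T:g}: {I[-1]:.4f}")   # settles near 1 - gamma/beta
```

Rerunning this while increasing tau past a critical value makes the equilibrium lose stability to sustained oscillations, which is the numerical counterpart of treating the delay as the bifurcation parameter above.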
Fang, Jing-Jing; Liu, Jia-Kuang; Wu, Tzu-Chieh; Lee, Jing-Wei; Kuo, Tai-Hong
2013-05-01
Computer-aided design has gained increasing popularity in clinical practice, and the advent of rapid prototyping technology has further enhanced the quality and predictability of surgical outcomes. It provides target guides for complex bony reconstruction during surgery, so surgeons can efficiently and precisely carry out fracture restorations. Based on three-dimensional models generated from a computed tomographic scan, precise preoperative planning simulation on a computer is possible. Combining the interdisciplinary knowledge of surgeons and engineers, this study proposes a novel surgical guidance method that incorporates a built-in occlusal wafer that serves as the positioning reference. Two patients with complex facial deformity suffering from severe facial asymmetry were recruited. In vitro facial reconstruction was first rehearsed on physical models, where a customized surgical guide incorporating a built-in occlusal stent as the positioning reference was designed to implement the surgical plan. This study presents the authors' preliminary experience with a complex facial reconstruction procedure. It suggests that in settings where intraoperative computed tomographic scans or navigation systems are not available, our approach could be an effective, expedient, and straightforward aid to enhancing the surgical outcome of a complex facial repair.
Quantitative Predictive Models for Systemic Toxicity (SOT)
Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...
FORBEEF: A Forage-Livestock System Computer Model Used as a Teaching Aid for Decision Making.
ERIC Educational Resources Information Center
Stringer, W. C.; And Others
1987-01-01
Describes the development of a computer simulation model of forage-beef production systems, which is intended to incorporate soil, forage, and animal decisions into an enterprise scenario. Produces a summary of forage production and livestock needs. Cites positive assessment of the program's value by participants in inservice training workshops.…
Realizing the Promise of Visualization in the Theory of Computing
ERIC Educational Resources Information Center
Cogliati, Joshua J.; Goosey, Frances W.; Grinder, Michael T.; Pascoe, Bradley A.; Ross, Rockford J.; Williams, Cheston J.
2005-01-01
Progress on a hypertextbook on the theory of computing is presented. The hypertextbook is a novel teaching and learning resource built around web technologies that incorporates text, sound, pictures, illustrations, slide shows, video clips, and--most importantly--active learning models of the key concepts of the theory of computing into an…
Computational models for the nonlinear analysis of reinforced concrete plates
NASA Technical Reports Server (NTRS)
Hinton, E.; Rahman, H. H. A.; Huq, M. M.
1980-01-01
A finite element computational model for the nonlinear analysis of reinforced concrete solid, stiffened and cellular plates is briefly outlined. Typically, Mindlin elements are used to model the plates whereas eccentric Timoshenko elements are adopted to represent the beams. The layering technique, common in the analysis of reinforced concrete flexural systems, is incorporated in the model. The proposed model provides an inexpensive and reasonably accurate approach which can be extended for use with voided plates.
Metal-water reaction and cladding deformation models for RELAP5/MOD3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caraher, D.L.; Shumway, R.W.
1989-06-01
A model for calculating the reaction of zirconium with steam according to the Cathcart-Pawel correlation has been incorporated into RELAP5/MOD3. A cladding deformation model, which computes swelling and rupture of the cladding according to the empirical correlations of Powers and Meyer, has also been incorporated into RELAP5/MOD3. This report gives the background of the models, documents their implementation into the RELAP5 subroutines, and reports the developmental assessment done on the models. 4 refs., 9 figs., 9 tabs.
Levy
1996-08-01
New interactive computer technologies are having a significant influence on medical education, training, and practice. The newest innovation in computer technology, virtual reality, allows an individual to be immersed in a dynamic computer-generated, three-dimensional environment and can provide realistic simulations of surgical procedures. A new virtual reality hysteroscope passes through a sensing device that synchronizes movements with a three-dimensional model of a uterus. Force feedback is incorporated into this model, so the user actually experiences the collision of an instrument against the uterine wall or the sensation of the resistance or drag of a resectoscope as it cuts through a myoma in a virtual environment. A variety of intrauterine pathologies and procedures are simulated, including hyperplasia, cancer, resection of a uterine septum, polyp, or myoma, and endometrial ablation. This technology will be incorporated into comprehensive training programs that will objectively assess hand-eye coordination and procedural skills. It is possible that by incorporating virtual reality into hysteroscopic training programs, a decrease in the learning curve and the number of complications presently associated with the procedures may be realized. Prospective studies are required to assess these potential benefits.
Incorporating approximation error in surrogate based Bayesian inversion
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.; Li, W.; Wu, L.
2015-12-01
There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost is still high since a relatively large number of original model simulations is required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
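The key step described above, folding the surrogate's own predictive variance into the likelihood, can be sketched as follows. The "expensive model" is a stand-in function and all numbers are illustrative; a grid posterior replaces MCMC to keep the example short.

```python
# Minimal sketch of incorporating surrogate approximation error: add the
# Gaussian process (GP) predictive variance to the observation-noise
# variance, so parameter regions where the GP is uncertain are not
# over-trusted. The "expensive model" and all numbers are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(theta):
    return np.sin(3.0 * theta) + 0.5 * theta          # placeholder simulator

# Train the surrogate on a handful of model runs.
theta_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(
    theta_train, expensive_model(theta_train).ravel())

# Observed datum: truth at theta = 1.2 plus a small measurement error.
sigma_obs = 0.05
y_obs = expensive_model(1.2) + 0.02

# Grid posterior with GP predictive variance added to the noise variance.
grid = np.linspace(0.0, 2.0, 400).reshape(-1, 1)
mu, sd = gp.predict(grid, return_std=True)
var_total = sigma_obs**2 + sd**2                      # key step: add GP error
log_like = -0.5 * ((y_obs - mu)**2 / var_total + np.log(var_total))
post = np.exp(log_like - log_like.max())
post /= post.sum()
print("posterior mean of theta:", float((grid.ravel() * post).sum()))
```

Dropping the `sd**2` term reproduces the naive approach the study warns about: the posterior then concentrates wherever the surrogate happens to pass near the datum, however poorly trained it is there.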
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1993-01-01
Two papers are included in this progress report. In the first, the compressible Navier-Stokes equations have been used to compute leading edge receptivity of boundary layers over parabolic cylinders. Natural receptivity at the leading edge was simulated and Tollmien-Schlichting waves were observed to develop in response to an acoustic disturbance, applied through the farfield boundary conditions. To facilitate comparison with previous work, all computations were carried out at a free stream Mach number of 0.3. The spatial and temporal behavior of the flowfields are calculated through the use of finite volume algorithms and Runge-Kutta integration. The results are dominated by strong decay of the Tollmien-Schlichting wave due to the presence of the mean flow favorable pressure gradient. The effects of numerical dissipation, forcing frequency, and nose radius are studied. The Strouhal number is shown to have the greatest effect on the unsteady results. In the second paper, a transition model for low-speed flows, previously developed by Young et al., which incorporates first-mode (Tollmien-Schlichting) disturbance information from linear stability theory has been extended to high-speed flow by incorporating the effects of second mode disturbances. The transition model is incorporated into a Reynolds-averaged Navier-Stokes solver with a one-equation turbulence model. Results using a variable turbulent Prandtl number approach demonstrate that the current model accurately reproduces available experimental data for first and second-mode dominated transitional flows. The performance of the present model shows significant improvement over previous transition modeling attempts.
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model and allows realization of the required threshold rates for the accurate and realistic representation of real-time visual animations. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. Incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
ERIC Educational Resources Information Center
Lund, David M.; Hildreth, Donna
A case study investigated an instructional model that incorporated the personal computer and Hyperstudio (tm) software into an assignment to write and illustrate an interactive, multimedia story. Subjects were 21 students in a fifth-grade homeroom in a public school (with a state-mandated minimum 45% ratio of minority students achieved by busing…
Stability and Hopf Bifurcation for a Delayed SLBRS Computer Virus Model
Yang, Huizhong
2014-01-01
By incorporating into the SLBRS model the time delay due to the period during which computers use antivirus software to clean viruses, a delayed SLBRS computer virus model is proposed in this paper. The dynamical behaviors, which include local stability and Hopf bifurcation, are investigated by regarding the delay as the bifurcation parameter. Specifically, the direction and stability of the Hopf bifurcation are derived by applying the normal form method and center manifold theory. Finally, an illustrative example is also presented to verify our analytical results. PMID:25202722
Energy and life-cycle cost analysis of a six-story office building
NASA Astrophysics Data System (ADS)
Turiel, I.
1981-10-01
An energy analysis computer program, DOE-2, was used to compute annual energy use for a typical office building as originally designed and with several energy conserving design modifications. The largest energy use reductions were obtained with the incorporation of daylighting techniques, the use of double pane windows, night temperature setback, and the reduction of artificial lighting levels. A life-cycle cost model was developed to assess the cost-effectiveness of the design modifications discussed. The model incorporates such features as inclusion of taxes, depreciation, and financing of conservation investments. The energy conserving strategies are ranked according to economic criteria such as net present benefit, discounted payback period, and benefit to cost ratio.
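The economic criteria mentioned above are straightforward to compute; here is a small sketch of net present benefit and discounted payback, with invented cash flows and without the study's tax and depreciation terms.

```python
# Quick sketch of two of the life-cycle cost metrics named above (net
# present benefit and discounted payback period). Cash-flow numbers are
# invented for illustration; tax and depreciation terms are omitted.
def npv(rate, cashflows):
    """Net present value of cashflows[t] occurring at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def discounted_payback(rate, invest, annual_saving, horizon=30):
    """First year in which cumulative discounted savings repay the investment."""
    cum = -invest
    for t in range(1, horizon + 1):
        cum += annual_saving / (1.0 + rate) ** t
        if cum >= 0.0:
            return t
    return None                                  # never pays back within horizon

# Example: a $40,000 daylighting retrofit saving $6,500/yr, 7% discount rate.
flows = [-40_000] + [6_500] * 20
print("net present benefit: $%.0f" % npv(0.07, flows))
print("discounted payback:", discounted_payback(0.07, 40_000, 6_500), "years")
```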
Damsel: A Data Model Storage Library for Exascale Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koziol, Quincey
The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. We will accomplish this through three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community.
Examples of Nonconservatism in the CARE 3 Program
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1988-01-01
This paper presents parameter regions in the CARE 3 (Computer-Aided Reliability Estimation, version 3) computer program where the program overestimates the reliability of a modeled system without warning the user. Five simple models of fault-tolerant computer systems are analyzed, and the parameter regions where reliability is overestimated are given. The source of the error in the reliability estimates for models which incorporate transient fault occurrences was not readily apparent. However, much of the error for models with permanent and intermittent faults can be attributed to the choice of values for the run-time parameters of the program.
Fatone, Stefania; Johnson, William Brett; Tucker, Kerice
2016-04-01
Misalignment of an articulated ankle-foot orthosis joint axis with the anatomic joint axis may lead to discomfort, alterations in gait, and tissue damage. Theoretical, two-dimensional models describe the consequences of misalignments, but cannot capture the three-dimensional behavior of ankle-foot orthosis use. The purpose of this project was to develop a model to describe the effects of ankle-foot orthosis ankle joint misalignment in three dimensions. Computational simulation. Three-dimensional scans of a leg and ankle-foot orthosis were incorporated into a link segment model where the ankle-foot orthosis joint axis could be misaligned with the anatomic ankle joint axis. The leg/ankle-foot orthosis interface was modeled as a network of nodes connected by springs to estimate interface pressure. Motion between the leg and ankle-foot orthosis was calculated as the ankle joint moved through a gait cycle. While the three-dimensional model corroborated predictions of the previously published two-dimensional model that misalignments in the anterior-posterior direction would result in greater relative motion compared to misalignments in the proximal-distal direction, it provided greater insight, showing that misalignments have asymmetrical effects. The three-dimensional model has been incorporated into a freely available computer program to assist others in understanding the consequences of joint misalignments. Models and simulations can be used to gain insight into the functioning of systems of interest. We have developed a three-dimensional model to assess the effect of ankle joint axis misalignments in ankle-foot orthoses. The model has been incorporated into a freely available computer program to assist understanding of trainees and others interested in orthotics. © The International Society for Prosthetics and Orthotics 2014.
How Haptic Size Sensations Improve Distance Perception
Battaglia, Peter W.; Kersten, Daniel; Schrater, Paul R.
2011-01-01
Determining distances to objects is one of the most ubiquitous perceptual tasks in everyday life. Nevertheless, it is challenging because the information from a single image confounds object size and distance. Though our brains frequently judge distances accurately, the underlying computations employed by the brain are not well understood. Our work illuminates these computations by formulating a family of probabilistic models that encompass a variety of distinct hypotheses about distance and size perception. We compare these models' predictions to a set of human distance judgments in an interception experiment and use Bayesian analysis tools to quantitatively select the best hypothesis on the basis of its explanatory power and robustness over experimental data. The central question is whether, and how, human distance perception incorporates size cues to improve accuracy. Our conclusions are: 1) humans incorporate haptic object size sensations for distance perception, 2) the incorporation of haptic sensations is suboptimal given their reliability, 3) humans use environmentally accurate size and distance priors, 4) distance judgments are produced by perceptual “posterior sampling”. In addition, we compared our model's estimated sensory and motor noise parameters with previously reported measurements in the perceptual literature and found good correspondence between them. Taken together, these results represent a major step forward in establishing the computational underpinnings of human distance perception and the role of size information. PMID:21738457
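To make the "posterior sampling" account concrete, the following sketch combines a noisy visual angle with a haptic size cue to form a posterior over distance and then samples responses from it; the priors and noise levels are invented, not fitted to the experiment above.

```python
# Hedged sketch of the model family described above: a known object size
# (from haptics) disambiguates distance from the visual angle, and the
# "posterior sampling" account draws responses from the posterior rather
# than reporting its peak. All priors and noise levels are invented.
import numpy as np

rng = np.random.default_rng(0)

true_size, true_dist = 0.06, 1.5                    # meters
angle_obs = true_size / true_dist * (1 + 0.05 * rng.standard_normal())
size_obs = true_size * (1 + 0.03 * rng.standard_normal())   # haptic size cue

dist = np.linspace(0.2, 5.0, 1000)                  # distance hypotheses
prior = np.exp(-dist / 2.0)                         # nearer distances more probable

# Likelihood of the observed visual angle given each distance, with object
# size supplied by the haptic sensation.
pred_angle = size_obs / dist
like = np.exp(-0.5 * ((angle_obs - pred_angle) / (0.05 * angle_obs)) ** 2)

post = prior * like
post /= post.sum()
samples = rng.choice(dist, size=5, p=post)          # "posterior sampling" responses
print("sampled distance judgments (m):", samples.round(2))
```

Without the haptic term, `pred_angle` would have to marginalize over unknown size, and the posterior over distance would spread accordingly, which is the size-distance confound the abstract describes.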
Modeling the state dependent impulse control for computer virus propagation under media coverage
NASA Astrophysics Data System (ADS)
Liang, Xiyin; Pei, Yongzhen; Lv, Yunfei
2018-02-01
A state-dependent impulsive control model is proposed to describe the spread of computer viruses incorporating media coverage. Using the successor function, sufficient conditions for the existence and uniqueness of an order-1 periodic solution are presented first. Secondly, for two classes of periodic solutions, the geometric property of the successor function and the analogue of the Poincaré criterion are employed to obtain stability results. These results show that the number of infected computers remains below the threshold at all times. Finally, theoretical and numerical analysis shows that media coverage can delay the spread of a computer virus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems
NASA Astrophysics Data System (ADS)
Ku, Walter H.; Gang, Guan-Wan; He, J. Q.; Ichitsubo, I.
1988-05-01
This final technical report presents results on the computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems. New results include analytical and computer aided device models of GaAs MESFETs and HEMTs or MODFETs, new synthesis techniques for monolithic feedback and distributed amplifiers and a new nonlinear CAD program for MIMIC called CADNON. This program incorporates the new MESFET and HEMT model and has been successfully applied to the design of monolithic millimeter-wave mixers.
MODFLOW-2005 : the U.S. Geological Survey modular ground-water model--the ground-water flow process
Harbaugh, Arlen W.
2005-01-01
This report presents MODFLOW-2005, which is a new version of the finite-difference ground-water model commonly called MODFLOW. Ground-water flow is simulated using a block-centered finite-difference approach. Layers can be simulated as confined or unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and rivers, also can be simulated. The report includes detailed explanations of physical and mathematical concepts on which the model is based, an explanation of how those concepts are incorporated in the modular structure of the computer program, instructions for using the model, and details of the computer code. The modular structure consists of a MAIN Program and a series of highly independent subroutines. The subroutines are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system that is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving the set of simultaneous equations resulting from the finite-difference method. Several solution methods are incorporated, including the Preconditioned Conjugate-Gradient method. The division of the program into packages permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program also are designed to permit maximum flexibility. The program is designed to allow other capabilities, such as transport and optimization, to be incorporated, but this report is limited to describing the ground-water flow capability. The program is written in Fortran 90 and will run without modification on most computers that have a Fortran 90 compiler.
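The finite-difference kernel underlying such a model can be sketched in a few lines: steady-state head on a 2D grid with fixed-head boundaries and a single well cell, relaxed iteratively. Real MODFLOW adds layers, stress packages, and faster solvers such as the Preconditioned Conjugate-Gradient method; the units and well term here are schematic.

```python
# Bare-bones sketch of the finite-difference idea MODFLOW is built on:
# steady-state head in a 2D confined aquifer with fixed-head boundaries
# and one pumping cell, relaxed by Jacobi iteration. Units are schematic.
import numpy as np

nrow, ncol = 50, 50
h = np.tile(np.linspace(10.0, 2.0, ncol), (nrow, 1))  # heads; linear first guess
well, q = (25, 25), -0.5                              # well cell and sink term

for _ in range(10_000):                               # Jacobi relaxation
    h_new = h.copy()
    h_new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                + h[1:-1, :-2] + h[1:-1, 2:])
    h_new[well] += q                                  # withdrawal at the well
    diff = np.abs(h_new - h).max()
    h = h_new                                         # boundaries stay fixed
    if diff < 1e-7:
        break

print("head at the well cell:", round(float(h[well]), 3))  # drawdown cone center
```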
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
Computer use in primary care practices in Canada.
Anisimowicz, Yvonne; Bowes, Andrea E; Thompson, Ashley E; Miedema, Baukje; Hogg, William E; Wong, Sabrina T; Katz, Alan; Burge, Fred; Aubrey-Bassler, Kris; Yelland, Gregory S; Wodchis, Walter P
2017-05-01
To examine the use of computers in primary care practices. The international Quality and Cost of Primary Care study was conducted in Canada in 2013 and 2014 using a descriptive cross-sectional survey method to collect data from practices across Canada. Participating practices filled out several surveys, one of them being the Family Physician Survey, from which this study collected its data. All 10 Canadian provinces. A total of 788 family physicians. A computer use scale measured the extent to which family physicians integrated computers into their practices, with higher scores indicating a greater integration of computer use in practice. Analyses included t tests and 2 tests comparing new and traditional models of primary care on measures of computer use and electronic health record (EHR) use, as well as descriptive statistics. Nearly all (97.5%) physicians reported using a computer in their practices, with moderately high computer use scale scores (mean [SD] score of 5.97 [2.96] out of 9), and many (65.7%) reported using EHRs. Physicians with practices operating under new models of primary care reported incorporating computers into their practices to a greater extent (mean [SD] score of 6.55 [2.64]) than physicians operating under traditional models did (mean [SD] score of 5.33 [3.15]; t 726.60 = 5.84; P < .001; Cohen d = 0.42, 95% CI 0.808 to 1.627) and were more likely to report using EHRs (73.8% vs 56.7%; [Formula: see text]; P < .001; odds ratio = 2.15). Overall, there was a statistically significant variability in computer use across provinces. Most family physicians in Canada have incorporated computers into their practices for administrative and scholarly activities; however, EHRs have not been adopted consistently across the country. Physicians with practices operating under the new, more collaborative models of primary care use computers more comprehensively and are more likely to use EHRs than those in practices operating under traditional models of primary care. Copyright© the College of Family Physicians of Canada.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
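One plausible reading of the compound construction above can be checked by Monte Carlo: draw the large-area event count from a zero-truncated Poisson (conditioning on at least one event) and, for each event, a storm count at the site from a negative binomial. This pairing of the two distributions is an assumption for illustration, not the report's exact formulation, and the parameters are invented.

```python
# Hedged Monte Carlo sketch of a compound count model in the spirit of the
# abstract above: events over the large area ~ zero-truncated Poisson,
# storms per event at the site ~ negative binomial. The pairing and the
# parameter values are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(1)
lam = 1.6                  # truncated-Poisson rate: events over the large area
r, p = 1.5, 0.6            # negative binomial: storms per event at the site
n_mc = 20_000

def trunc_poisson(lam, size):
    """Zero-truncated Poisson draws via rejection (resample the zeros)."""
    x = rng.poisson(lam, size)
    while (x == 0).any():
        zeros = x == 0
        x[zeros] = rng.poisson(lam, zeros.sum())
    return x

events = trunc_poisson(lam, n_mc)                    # >= 1 event, by conditioning
totals = np.array([rng.negative_binomial(r, p, k).sum() for k in events])

for k in range(4):
    print(f"P({k} storms at the site | events over area) ~ {(totals == k).mean():.3f}")
```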
Formal Specification of Information Systems Requirements.
ERIC Educational Resources Information Center
Kampfner, Roberto R.
1985-01-01
Presents a formal model for specification of logical requirements of computer-based information systems that incorporates structural and dynamic aspects based on two separate models: the Logical Information Processing Structure and the Logical Information Processing Network. The model's role in systems development is discussed. (MBR)
PBPK models provide a computational framework for incorporating pertinent physiological and biochemical information to estimate in vivo levels of xenobiotics in biological tissues. In general, PBPK models are used to correlate exposures to target tissue levels of chemicals and th...
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
An interactive computer code for simulation of a high-intensity turbulent combustor as a single-point inhomogeneous stirred reactor was developed from an existing batch-processing computer code, CDPSR. The interactive CDPSR code was used as a guide for interpretation and direction of DOE-sponsored companion experiments utilizing a xenon tracer with optical laser diagnostic techniques to experimentally determine the appropriate mixing frequency, and for validation of CDPSR as a mixing-chemistry model for a laboratory jet-stirred reactor. The coalescence-dispersion model for finite-rate mixing was incorporated into an existing interactive code, AVCO-MARK I, to enable simulation of a combustor as a modular array of stirred-flow and plug-flow elements, each having a prescribed finite mixing frequency, or axial distribution of mixing frequency, as appropriate. The speed and reliability of the batch kinetics integrator code CREKID were further increased by rewriting it in vectorized form for execution on a vector or parallel processor, and by incorporating numerical techniques that enhance execution speed by permitting specification of a very low accuracy tolerance.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks
NASA Astrophysics Data System (ADS)
Pyle, Ryan; Rosenbaum, Robert
2017-01-01
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
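A minimal sketch of the key ingredient, distance-dependent connection probability, here assuming a Gaussian profile on a ring of neurons (the profile shape and parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    N, p0, sigma = 500, 0.5, 0.1  # neurons, peak probability, connection width

    # Neuron positions on a ring [0, 1); distance wraps around.
    x = np.arange(N) / N
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 1.0 - d)

    # Connection probability decays with distance (Gaussian profile).
    p_conn = p0 * np.exp(-d**2 / (2 * sigma**2))
    A = rng.random((N, N)) < p_conn  # adjacency matrix
    print(A.sum() / N**2)            # realized connection density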
Combining Statistics and Physics to Improve Climate Downscaling
NASA Astrophysics Data System (ADS)
Gutmann, E. D.; Eidhammer, T.; Arnold, J.; Nowak, K.; Clark, M. P.
2017-12-01
Getting useful information from climate models is an ongoing problem that has plagued climate science and hydrologic prediction for decades. While it is possible to develop statistical corrections for climate models that mimic the current climate almost perfectly, this does not necessarily guarantee that future changes are portrayed correctly. In contrast, convection-permitting regional climate models (RCMs) have begun to provide an excellent representation of the regional climate system purely from first principles, providing greater confidence in their change signal. However, the computational cost of such RCMs prohibits the generation of ensembles of simulations or long time periods, thus limiting their applicability for hydrologic applications. Here we discuss a new approach combining statistical corrections with physical relationships for a modest computational cost. We have developed the Intermediate Complexity Atmospheric Research model (ICAR) to provide a climate and weather downscaling option that is based primarily on physics for a fraction of the computational requirements of a traditional regional climate model. ICAR also enables the incorporation of statistical adjustments directly within the model. We demonstrate that applying even simple corrections to precipitation while the model is running can improve the simulation of land-atmosphere feedbacks in ICAR. For example, by incorporating statistical corrections earlier in the modeling chain, we permit the model physics to better represent the effect of mountain snowpack on air temperature changes.
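A sketch of the distinction being drawn, applying a statistical correction inside the model loop rather than as post-processing, here a simple multiplicative precipitation bias correction applied each timestep (the correction factor, field names, and stand-in physics step are all hypothetical):

    import numpy as np

    bias_factor = 1.15  # hypothetical grid-wide precipitation correction

    def step_with_inline_correction(state, physics_step):
        """Advance one model step, correcting precipitation before the land
        surface sees it, so snowpack/albedo feedbacks use corrected forcing."""
        state = physics_step(state)
        state["precip"] = state["precip"] * bias_factor
        return state

    # Toy usage with an identity stand-in for the physics step.
    state = {"precip": np.full((4, 4), 2.0)}  # mm per step
    state = step_with_inline_correction(state, lambda s: s)
    print(state["precip"][0, 0])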
Chill Down Process of Hydrogen Transport Pipelines
NASA Technical Reports Server (NTRS)
Mei, Renwei; Klausner, James
2006-01-01
A pseudo-steady model has been developed to predict the chilldown history of pipe wall temperature in horizontal transport pipelines for cryogenic fluids. A new film boiling heat transfer model is developed by incorporating the stratified flow structure for cryogenic chilldown. A modified nucleate boiling heat transfer correlation for the cryogenic chilldown process inside a horizontal pipe is proposed. The efficacy of the correlations is assessed by comparing the model predictions with measured values of wall temperature at several azimuthal positions in a well-controlled experiment by Chung et al. (2004). The computed pipe wall temperature histories match well with the measured results. The present model captures important features of thermal interaction between the pipe wall and the cryogenic fluid, provides a simple and robust platform for predicting pipe wall chilldown history in long horizontal pipes at relatively low computational cost, and builds a foundation for incorporating the two-phase hydrodynamic interaction in the chilldown process.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The development of a computational model (BOAT) for calculating nearfield jet entrainment, and its incorporation in an existing methodology for the prediction of nozzle boattail pressures, is discussed. The model accounts for the detailed turbulence and thermochemical processes occurring in the mixing layer formed between a jet exhaust and surrounding external stream while interfacing with the inviscid exhaust and external flowfield regions in an overlaid, interactive manner. The ability of the BOAT model to analyze simple free shear flows is assessed by comparisons with fundamental laboratory data. The overlaid procedure for incorporating variable pressures into BOAT and the entrainment correction employed to yield an effective plume boundary for the inviscid external flow are demonstrated. This is accomplished via application of BOAT in conjunction with the codes comprising the NASA/LRC patched viscous/inviscid methodology for determining nozzle boattail drag for subsonic/transonic external flows.
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns. The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages; it ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
Eco-Evo PVAs: Incorporating Eco-Evolutionary Processes into Population Viability Models
We synthesize how advances in computational methods and population genomics can be combined within an Ecological-Evolutionary (Eco-Evo) PVA model. Eco-Evo PVA models are powerful new tools for understanding the influence of evolutionary processes on plant and animal population pe...
Computer-Assisted Community Planning and Decision Making.
ERIC Educational Resources Information Center
College of the Atlantic, Bar Harbor, ME.
The College of the Atlantic (COA) developed a broad-based, interdisciplinary curriculum in ecological policy and community planning and decision-making that incorporates two primary computer-based tools: ARC/INFO Geographic Information System (GIS) and STELLA, a systems-dynamics modeling tool. Students learn how to use and apply these tools…
Computer Modeling and Research in the Classroom
ERIC Educational Resources Information Center
Ramos, Maria Joao; Fernandes, Pedro Alexandrino
2005-01-01
We report on a computational chemistry course for undergraduate students that successfully incorporated a research project on the design of new contrast agents for magnetic resonance imaging and shift reagents for in vivo NMR. Course outcomes were positive: students were quite motivated during the whole year--they learned what was required of…
Computations of soot and NOx emissions from gas turbine combustors
NASA Technical Reports Server (NTRS)
Srivatsa, S. K.
1982-01-01
An analytical program was conducted to compute the soot and NOx emissions from a combustor and the radiation heat transfer to the combustor walls. The program involved the formulation of an emission and radiation model and the incorporation of this model into the Garrett 3-D Combustor Performance Computer Program. Computations were performed for the idle, cruise, and take-off conditions of a JT8D can combustor. The predicted soot and NOx emissions and the radiation heat transfer to the combustor walls agree reasonably well with the limited experimental data available.
Analysis of the Harrier forebody/inlet design using computational techniques
NASA Technical Reports Server (NTRS)
Chow, Chuen-Yen
1993-01-01
Under the support of this Cooperative Agreement, computations of transonic flow past the complex forebody/inlet configuration of the AV-8B Harrier II have been performed. The actual aircraft configuration was measured, and its surface and surrounding domain were defined using computational structured grids. The thin-layer Navier-Stokes equations were used to model the flow, along with the Chimera embedded multi-grid technique. A fully conservative, alternating direction implicit (ADI), approximately-factored, partially flux-split algorithm was employed to perform the computation. An existing code was altered to conform with the needs of the study, and some special engine face boundary conditions were developed. The algorithm incorporated the Chimera technique and an algebraic turbulence model in order to deal with the embedded multi-grids and viscous governing equations. Comparison with experimental data yielded good agreement, given the simplifications incorporated into the analysis. The aim of the present research was to provide a methodology for the numerical solution of complex, combined external/internal flows. This is the first time-dependent Navier-Stokes solution for a geometry in which the fuselage and inlet share a wall. The results indicate that the methodology used here is a viable tool for transonic aircraft modeling.
Description of the NCAR Community Climate Model (CCM3). Technical note
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiehl, J.T.; Hack, J.J.; Bonan, G.B.
This report presents the details of the governing equations, physical parameterizations, and numerical algorithms defining the version of the NCAR Community Climate Model designated CCM3. The material provides an overview of the major model components, and the way in which they interact as the numerical integration proceeds. This version of the CCM incorporates significant improvements to the physics package, new capabilities such as the incorporation of a slab ocean component, and a number of enhancements to the implementation (e.g., the ability to integrate the model on parallel distributed-memory computational platforms).
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
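A minimal sketch of the head interpolation described, assuming a regular cell-centered grid from the larger-scale model; bilinear interpolation of head at an arbitrary point on the small-scale model's perimeter:

    import numpy as np

    def bilinear_head(heads, dx, dy, xp, yp):
        """Interpolate head at point (xp, yp) from cell-centered values.
        heads[i, j] is the head at cell center ((i+0.5)*dx, (j+0.5)*dy)."""
        i = int(xp / dx - 0.5); j = int(yp / dy - 0.5)
        tx = xp / dx - 0.5 - i; ty = yp / dy - 0.5 - j
        return ((1-tx)*(1-ty)*heads[i, j]   + tx*(1-ty)*heads[i+1, j]
              + (1-tx)*ty   *heads[i, j+1] + tx*ty   *heads[i+1, j+1])

    # Four large-scale cells with centers 100 m apart; query a perimeter point.
    heads = np.array([[10.0, 10.5], [11.0, 11.5]])
    print(bilinear_head(heads, dx=100.0, dy=100.0, xp=100.0, yp=100.0))

The flow components along rows and columns would be handled analogously with simple one-dimensional linear interpolation along the perimeter segment.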
A scheduling model for the aerial relay system
NASA Technical Reports Server (NTRS)
Ausrotas, R. A.; Liu, E. W.
1980-01-01
The ability of the Aerial Relay System to handle the U.S. transcontinental large hub passenger flow was analyzed with a flexible, interactive computer model. The model incorporated city pair time of day demand and a demand allocation function which assigned passengers to their preferred flights.
COMPUTER PROGRAM DOCUMENTATION FOR THE ENHANCED STREAM WATER QUALITY MODEL QUAL2E
Presented in the manual are recent modifications and improvements to the widely used stream water quality model QUAL-II. Called QUAL2E, the enhanced model incorporates improvements in eight areas: (1) algal, nitrogen, phosphorus, and dissolved oxygen interactions; (2) algal growt...
Techniques for Computation of Frequency Limited H∞ Norm
NASA Astrophysics Data System (ADS)
Haider, Shafiq; Ghafoor, Abdul; Imran, Muhammad; Fahad Mumtaz, Malik
2018-01-01
The traditional H∞ norm depicts peak system gain over an infinite frequency range, but many applications, such as filter design, model order reduction, and controller design, require computation of the peak system gain over a specific frequency interval rather than the infinite range. In the present work, new computationally efficient techniques for computation of the H∞ norm over a limited frequency interval are proposed. The proposed techniques link norm computation with the maximum singular value of the system in the limited frequency interval. Numerical examples are incorporated to validate the proposed concept.
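The paper's specific techniques are not reproduced here; as a brute-force reference sketch, the frequency-limited H∞ norm can be approximated by sampling the maximum singular value of G(jw) = C(jwI - A)^(-1)B + D over a grid on [w1, w2]:

    import numpy as np

    def hinf_norm_limited(A, B, C, D, w1, w2, n=2000):
        """Approximate sup over [w1, w2] of the largest singular value of G(jw)."""
        I = np.eye(A.shape[0])
        best = 0.0
        for w in np.linspace(w1, w2, n):
            G = C @ np.linalg.solve(1j * w * I - A, B) + D
            best = max(best, np.linalg.svd(G, compute_uv=False)[0])
        return best

    # Toy 2-state, single-input/single-output system.
    A = np.array([[0.0, 1.0], [-4.0, -0.4]])
    B = np.array([[0.0], [1.0]]); C = np.array([[1.0, 0.0]]); D = np.zeros((1, 1))
    print(hinf_norm_limited(A, B, C, D, w1=1.0, w2=3.0))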
Ocean Tide Loading Computation
NASA Technical Reports Server (NTRS)
Agnew, Duncan Carr
2005-01-01
September 15, 2003 through May 15, 2005. This grant funds the maintenance, updating, and distribution of programs for computing ocean tide loading, to enable the corrections for such loading to be more widely applied in space-geodetic and gravity measurements. These programs, developed under funding from the CDP and DOSE programs, incorporate the most recent global tidal models developed from TOPEX/Poseidon data, and also local tide models for regions around North America; the design of the algorithm and software makes it straightforward to combine local and global models.
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Princi...
Computer Modeling of High-Intensity Cs-Sputter Ion Sources
NASA Astrophysics Data System (ADS)
Brown, T. A.; Roberts, M. L.; Southon, J. R.
The grid-point mesh program NEDLab has been used to computer-model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important to realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space-charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space-charge constraints on ion emission from surfaces can be incorporated through Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
Computation of turbulent reacting flow in a solid-propellant ducted rocket
NASA Astrophysics Data System (ADS)
Chao, Yei-Chin; Chou, Wen-Fuh; Liu, Sheng-Shyang
1995-05-01
A mathematical model for computation of turbulent reacting flows is developed under general curvilinear coordinate systems. An adaptive, streamline grid system is generated to deal with the complex flow structures in a multiple-inlet solid-propellant ducted rocket (SDR) combustor. General tensor representations of the k-epsilon and algebraic stress (ASM) turbulence models are derived in terms of contravariant velocity components, and modification caused by the effects of compressible turbulence is also included in the modeling. The clipped Gaussian probability density function is incorporated in the combustion model to account for fluctuations of properties. Validation of the above modeling is first examined by studying mixing and reacting characteristics in a confined coaxial-jet problem. This is followed by study of nonreacting and reacting SDR combustor flows. The results show that Gibson and Launder's ASM incorporated with Sarkar's modification for compressible turbulence effects based on the general curvilinear coordinate systems yields the most satisfactory prediction for this complicated SDR flowfield.
Computation of turbulent reacting flow in a solid-propellant ducted rocket
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Y.; Chou, W.; Liu, S.
1995-05-01
A mathematical model for computation of turbulent reacting flows is developed under general curvilinear coordinate systems. An adaptive, streamline grid system is generated to deal with the complex flow structures in a multiple-inlet solid-propellant ducted rocket (SDR) combustor. General tensor representations of the k-epsilon and algebraic stress (ASM) turbulence models are derived in terms of contravariant velocity components, and modification caused by the effects of compressible turbulence is also included in the modeling. The clipped Gaussian probability density function is incorporated in the combustion model to account for fluctuations of properties. Validation of the above modeling is first examined by studying mixing and reacting characteristics in a confined coaxial-jet problem. This is followed by study of nonreacting and reacting SDR combustor flows. The results show that Gibson and Launder's ASM incorporated with Sarkar's modification for compressible turbulence effects based on the general curvilinear coordinate systems yields the most satisfactory prediction for this complicated SDR flowfield. 36 refs.
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
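As a cross-check on the quantity being computed (not the paper's integral-equation method itself), the first-passage-time density of a leaky integrate-and-fire voltage to threshold can be estimated by Monte Carlo simulation; all parameters below are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)
    dt, T, n_trials = 1e-3, 1.0, 20000
    tau, mu, sigma, v_th = 0.02, 1.2, 0.5, 1.0  # illustrative LIF parameters

    t_pass = np.full(n_trials, np.nan)
    v = np.zeros(n_trials)
    alive = np.ones(n_trials, dtype=bool)
    for k in range(int(T / dt)):
        v[alive] += dt / tau * (mu - v[alive]) \
                  + sigma * np.sqrt(dt / tau) * rng.standard_normal(alive.sum())
        crossed = alive & (v >= v_th)
        t_pass[crossed] = (k + 1) * dt
        alive &= ~crossed

    # A histogram of t_pass (ignoring NaNs) approximates the first-passage density.
    print(np.nanmean(t_pass))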
Surrogate modeling of deformable joint contact using artificial neural networks.
Eskinazi, Ilan; Fregly, Benjamin J
2015-09-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models.
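A minimal sketch of the surrogate idea using scikit-learn: the paper trains one feed-forward ANN per force/torque output on samples from a slow elastic-foundation model, while here a cheap stand-in function generates synthetic training data, so everything below is illustrative rather than the authors' pipeline:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def slow_contact_model(pose):
        # Stand-in for an expensive elastic-foundation evaluation:
        # pose = (tx, ty, tz, rx, ry, rz) -> one contact-force component.
        return 100.0 * np.maximum(0.0, 0.002 - pose[:, 2]) + 5.0 * pose[:, 4]

    X = rng.uniform(-0.01, 0.01, size=(5000, 6))   # sampled 6-DOF poses
    y = slow_contact_model(X)

    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    ann.fit(X, y)
    print(ann.predict(X[:3]))  # fast surrogate evaluations at runtime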
Surrogate Modeling of Deformable Joint Contact using Artificial Neural Networks
Eskinazi, Ilan; Fregly, Benjamin J.
2016-01-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
Perspectives for computational modeling of cell replacement for neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aimone, James B.; Weick, Jason P.
Mathematical modeling of anatomically-constrained neural networks provides significant insights regarding the response of networks to neurological disorders or injury. Furthermore, a logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.
Perspectives for computational modeling of cell replacement for neurological disorders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aimone, James B.; Weick, Jason P.
Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.
Interface for the documentation and compilation of a library of computer models in physiology.
Summers, R. L.; Montani, J. P.
1994-01-01
A software interface for the documentation and compilation of a library of computer models in physiology was developed. The interface is an interactive program built within a word processing template in order to provide ease and flexibility of documentation. A model editor within the interface directs the model builder as to standardized requirements for incorporating models into the library and provides the user with an index to the levels of documentation. The interface and accompanying library are intended to facilitate model development, preservation and distribution and will be available for public use. PMID:7950046
A Study of the Efficacy of Project-Based Learning Integrated with Computer-Based Simulation--STELLA
ERIC Educational Resources Information Center
Eskrootchi, Rogheyeh; Oskrochi, G. Reza
2010-01-01
Incorporating computer-simulation modelling into project-based learning may be effective but requires careful planning and implementation. Teachers, especially, need pedagogical content knowledge which refers to knowledge about how students learn from materials infused with technology. This study suggests that students learn best by actively…
DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs
2015-12-04
for crowd dynamics modeling by incorporating multi-resolution data, where a grid-based method is used to model crowd motion from UAVs' low-resolution information; high-fidelity information is more computationally intensive (and time-consuming) to obtain, so fidelity selection must weigh detection quality against simulation cost. (Table 1: parameters for UAV and UGV detection, including field of view (FOV) and detection range (DR) under low- versus high-fidelity information.)
Orr, Mark G; Thrush, Roxanne; Plaut, David C
2013-01-01
The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
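A minimal sketch of belief-level constraint satisfaction of the kind described, iterating unit activations to a fixed point under a weight matrix plus external social input; the weights here are hand-set for illustration, whereas in the model they would be learned from past experience:

    import numpy as np

    # Symmetric constraint weights among 4 beliefs (hand-set, illustrative).
    W = np.array([[ 0.0,  0.8, -0.5, 0.3],
                  [ 0.8,  0.0, -0.4, 0.2],
                  [-0.5, -0.4,  0.0, 0.6],
                  [ 0.3,  0.2,  0.6, 0.0]])
    social = np.array([0.2, 0.0, -0.3, 0.1])  # immediate social context input

    a = np.zeros(4)
    for _ in range(200):                 # settle to a fixed point
        a = np.tanh(W @ a + social)

    intention = a.mean()                 # toy readout of behavioral intention
    print(a, intention)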
Orr, Mark G.; Thrush, Roxanne; Plaut, David C.
2013-01-01
The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual’s pre-existing belief structure and the beliefs of others in the individual’s social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics. PMID:23671603
Precision Modeling Of Targets Using The VALUE Computer Program
NASA Astrophysics Data System (ADS)
Hoffman, George A.; Patton, Ronald; Akerman, Alexander
1989-08-01
The 1976-vintage LASERX computer code has been augmented to produce realistic electro-optical images of targets. Capabilities lacking in LASERX but recently incorporated into its VALUE successor include:
• Shadows cast onto the ground
• Shadows cast onto parts of the target
• See-through transparencies (e.g., canopies)
• Apparent images due both to atmospheric scattering and turbulence
• Surfaces characterized by multiple bi-directional reflectance functions
VALUE provides realistic target modeling through its precise and comprehensive representation of all target attributes, and it is additionally very user friendly. Specifically, setup of runs is accomplished by screen-prompting menus in a sequence of queries that is logical to the user. VALUE also incorporates the Optical Encounter (OPEC) software developed by Tricor Systems, Inc., Elgin, IL.
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
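For orientation, a classical scalar Preisach model built from rectangular relay operators (hysterons) is sketched below; note that the paper's point is precisely that its Hopfield-implemented, Stoner-Wohlfarth-like operators are non-rectangular, so this is background rather than the proposed model, and all parameters are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hysterons: switching thresholds (alpha >= beta), equal weights.
    beta = rng.uniform(-1.0, 1.0, 200)
    alpha = beta + rng.uniform(0.0, 1.0, 200)
    state = -np.ones_like(alpha)  # all relays start in the 'down' state

    def preisach(u):
        """Update relay states for input u and return the averaged output."""
        state[u >= alpha] = 1.0
        state[u <= beta] = -1.0
        return state.mean()

    for u in [0.0, 0.9, 0.2, -0.7, 0.2]:  # input history matters: hysteresis
        print(u, preisach(u))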
IDENTIFICATION OF AN IDEAL REACTOR MODEL IN A SECONDARY COMBUSTION CHAMBER
Tracer analysis was applied to a secondary combustion chamber of a rotary kiln incinerator simulator to develop a computationally inexpensive networked ideal reactor model and allow for the later incorporation of detailed reaction mechanisms. Tracer data from sulfur dioxide trace...
Computational Aeroelastic Modeling of Airframes and TurboMachinery: Progress and Challenges
NASA Technical Reports Server (NTRS)
Bartels, R. E.; Sayma, A. I.
2006-01-01
Computational analyses such as computational fluid dynamics and computational structural dynamics have made major advances toward maturity as engineering tools. Computational aeroelasticity is the integration of these disciplines. As computational aeroelasticity matures it too finds an increasing role in the design and analysis of aerospace vehicles. This paper presents a survey of the current state of computational aeroelasticity with a discussion of recent research, success and continuing challenges in its progressive integration into multidisciplinary aerospace design. This paper approaches computational aeroelasticity from the perspective of the two main areas of application: airframe and turbomachinery design. An overview will be presented of the different prediction methods used for each field of application. Differing levels of nonlinear modeling will be discussed with insight into accuracy versus complexity and computational requirements. Subjects will include current advanced methods (linear and nonlinear), nonlinear flow models, use of order reduction techniques and future trends in incorporating structural nonlinearity. Examples in which computational aeroelasticity is currently being integrated into the design of airframes and turbomachinery will be presented.
NASA Astrophysics Data System (ADS)
Glasa, J.; Valasek, L.; Weisenpacher, P.; Halada, L.
2013-02-01
Recent advances in computational fluid dynamics (CFD) and the rapid increase of computational power of current computers have led to the development of CFD models capable of describing fire in complex geometries and incorporating a wide variety of physical phenomena related to fire. In this paper, we demonstrate the use of Fire Dynamics Simulator (FDS) for cinema fire modelling. FDS is an advanced CFD system intended for simulation of fire and smoke spread and prediction of thermal flows, toxic substance concentrations and other relevant parameters of fire. The course of fire in a cinema hall is described, focusing on related safety risks. Fire properties of flammable materials used in the simulation were determined by laboratory measurements and validated by fire tests and computer simulations.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., computational, and systems biology data can better inform risk assessment. This draft document is available for...
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1988-01-01
A one-equation turbulence model based on the turbulent kinetic energy equation is presented. The model is motivated by the success of the Johnson-King model and incorporates a number of features uncovered by Simpson's experiments on separated flows. Based on the results obtained, the model duplicates the success of algebraic models in attached flow regions and outperforms the two-equation models in detached flow regions.
Computer use in primary care practices in Canada
Anisimowicz, Yvonne; Bowes, Andrea E.; Thompson, Ashley E.; Miedema, Baukje; Hogg, William E.; Wong, Sabrina T.; Katz, Alan; Burge, Fred; Aubrey-Bassler, Kris; Yelland, Gregory S.; Wodchis, Walter P.
2017-01-01
Objective: To examine the use of computers in primary care practices. Design: The international Quality and Cost of Primary Care study was conducted in Canada in 2013 and 2014 using a descriptive cross-sectional survey method to collect data from practices across Canada. Participating practices filled out several surveys, one of them being the Family Physician Survey, from which this study collected its data. Setting: All 10 Canadian provinces. Participants: A total of 788 family physicians. Main outcome measures: A computer use scale measured the extent to which family physicians integrated computers into their practices, with higher scores indicating a greater integration of computer use in practice. Analyses included t tests and chi-square tests comparing new and traditional models of primary care on measures of computer use and electronic health record (EHR) use, as well as descriptive statistics. Results: Nearly all (97.5%) physicians reported using a computer in their practices, with moderately high computer use scale scores (mean [SD] score of 5.97 [2.96] out of 9), and many (65.7%) reported using EHRs. Physicians with practices operating under new models of primary care reported incorporating computers into their practices to a greater extent (mean [SD] score of 6.55 [2.64]) than physicians operating under traditional models did (mean [SD] score of 5.33 [3.15]; t(726.60) = 5.84; P < .001; Cohen d = 0.42, 95% CI 0.808 to 1.627) and were more likely to report using EHRs (73.8% vs 56.7%; χ2(1) = 25.43; P < .001; odds ratio = 2.15). Overall, there was statistically significant variability in computer use across provinces. Conclusion: Most family physicians in Canada have incorporated computers into their practices for administrative and scholarly activities; however, EHRs have not been adopted consistently across the country. Physicians with practices operating under the new, more collaborative models of primary care use computers more comprehensively and are more likely to use EHRs than those in practices operating under traditional models of primary care. PMID:28500211
Student Ability, Confidence, and Attitudes Toward Incorporating a Computer into a Patient Interview.
Ray, Sarah; Valdovinos, Katie
2015-05-25
To improve pharmacy students' ability to effectively incorporate a computer into a simulated patient encounter, and to improve their awareness of barriers, their attitudes toward computer use, and their confidence in using a computer during simulated patient encounters. Students completed a survey that assessed their awareness of, confidence in, and attitudes towards computer use during simulated patient encounters. Students were evaluated with a rubric on their ability to incorporate a computer into a simulated patient encounter. Students were resurveyed and reevaluated after instruction. Students improved in their ability to effectively incorporate computer usage into a simulated patient encounter. They also became more aware of barriers regarding such usage, improved their attitudes toward it, and gained more confidence in their ability to use a computer during simulated patient encounters. Instruction can improve pharmacy students' ability to incorporate a computer into simulated patient encounters. This skill is critical to developing efficiency while maintaining rapport with patients.
A computational model was developed to simulate aquifer remediation by pump and treat for a confined, perfectly stratified aquifer. A split-operator finite element numerical technique was utilized to incorporate flow field heterogeneity and nonequilibrium sorption into a two-dimensi...
2013-04-11
vehicle dynamics. Technical representative: Dr. Paramsothy Jayakumar, TARDEC; performing organization: Computational Dynamics Inc. This project aims at addressing and...applications. This literature review is being summarized and incorporated into the paper. The commentary provided by Dr. Jayakumar was addressed and
Heat Transfer on a Flat Plate with Uniform and Step Temperature Distributions
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
2005-01-01
Heat transfer associated with turbulent flow on a step-heated or cooled section of a flat plate at zero angle of attack with an insulated starting section was computationally modeled using the GASP Navier-Stokes code. The algebraic eddy viscosity model of Baldwin-Lomax and two turbulent two-equation models, the K-omega model and the Shear Stress Transport (SST) model, were employed. The variations from uniformity of the imposed experimental temperature profile were incorporated in the computations. The computations yielded satisfactory agreement with the experimental results for all three models. The Baldwin-Lomax model showed the closest agreement in heat transfer, whereas the SST model was higher and the K-omega model was yet higher than the experiments. In addition to the step temperature distribution case, computations were also carried out for a uniformly heated or cooled plate. The SST model showed the closest agreement with the Von Karman analogy, whereas the K-omega model was higher and the Baldwin-Lomax was lower.
ERIC Educational Resources Information Center
Wick, Michael R.; Kleine, Patricia A.; Nelson, Andrew J.
2011-01-01
This article presents the development, testing, and application of an enrollment model. The model incorporates incoming freshman enrollment class size and historical persistence, transfer, and graduation rates to predict a six-year enrollment window and associated annual graduate production. The model predicts six-year enrollment to within 0.67…
ZIMOD: A Simple Computer Model of the Zimbabwean Economy.
ERIC Educational Resources Information Center
Knox, Jon; And Others
1988-01-01
This paper describes a rationale for the construction and use of a simple consistency model of the Zimbabwean economy that incorporates an input-output matrix. The model is designed to investigate alternative industrial strategies and their consequences for the balance of payments, consumption, and overall gross domestic product growth for a…
NASA Astrophysics Data System (ADS)
Mc Namara, Hugh A.; Pokrovskii, Alexei V.
2006-02-01
The Kaldor model, one of the first nonlinear models of macroeconomics, is modified to incorporate a Preisach nonlinearity. The new dynamical system thus created shows highly complicated behaviour. This paper presents a rigorous (computer-aided) proof of chaos in this new model, and of the existence of unstable periodic orbits of all minimal periods p>57.
Impact design methods for ceramic components in gas turbine engines
NASA Technical Reports Server (NTRS)
Song, J.; Cuccio, J.; Kington, H.
1991-01-01
Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.
Comparative Modeling of Proteins: A Method for Engaging Students' Interest in Bioinformatics Tools
ERIC Educational Resources Information Center
Badotti, Fernanda; Barbosa, Alan Sales; Reis, André Luiz Martins; do Valle, Ítalo Faria; Ambrósio, Lara; Bitar, Mainá
2014-01-01
The huge increase in data being produced in the genomic era has produced a need to incorporate computers into the research process. Sequence generation, its subsequent storage, interpretation, and analysis are now entirely computer-dependent tasks. Universities from all over the world have been challenged to seek a way of encouraging students to…
Biological production models as elements of coupled, atmosphere-ocean models for climate research
NASA Technical Reports Server (NTRS)
Platt, Trevor; Sathyendranath, Shubha
1991-01-01
Process models of phytoplankton production are discussed with respect to their suitability for incorporation into global-scale numerical ocean circulation models. Exact solutions are given for integrals over the mixed layer and over the day for analytic, wavelength-independent models of primary production. Within this class of model, the bias incurred by using a triangular approximation (rather than a sinusoidal one) to the variation of surface irradiance through the day is computed. Efficient computation algorithms are given for the nonspectral models. More exact calculations require a spectrally sensitive treatment. Such models exist but must be integrated numerically over depth and time. For these integrations, resolution in wavelength, depth, and time are considered and recommendations made for efficient computation. The extrapolation of the one-(spatial)-dimension treatment to large horizontal scales is discussed.
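The triangular-versus-sinusoidal bias mentioned can be checked directly; a sketch comparing daily integrals of surface irradiance under the two shapes with the same noon maximum (the daylength and peak value are illustrative):

    import numpy as np

    D, I_max = 12.0, 1500.0  # daylength (h) and noon irradiance; illustrative
    t = np.linspace(0.0, D, 10001)

    I_sin = I_max * np.sin(np.pi * t / D)           # sinusoidal daily course
    I_tri = I_max * (1.0 - np.abs(2.0*t/D - 1.0))   # triangular approximation

    def integrate(y, x):
        """Trapezoid-rule integral."""
        return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

    day_sin = integrate(I_sin, t)   # analytic value: 2*D*I_max/pi
    day_tri = integrate(I_tri, t)   # analytic value: D*I_max/2
    print(day_tri / day_sin)        # ~pi/4 = 0.785: the triangle underestimates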
Toward improved calibration of watershed models: multisite many objective measures of information
USDA-ARS?s Scientific Manuscript database
This paper presents a computational framework for incorporation of disparate information from observed hydrologic responses at multiple locations into the calibration of watershed models. The framework consists of four components: (i) an a-priori characterization of system behavior; (ii) a formal an...
A computational model was developed to simulate aquifer remediation by pump and treat for a confined, perfectly stratified aquifer. A split-operator finite element numerical technique was utilized to incorporate flow field heterogeneity and nonequilibrium sorption into a two-dime...
Incorporating Uncertainty into Spacecraft Mission and Trajectory Design
NASA Astrophysics Data System (ADS)
Feldhacker, Juliana D.
The complex nature of many astrodynamic systems often leads to high computational costs or degraded accuracy in the analysis and design of spacecraft missions, and the incorporation of uncertainty into the trajectory optimization process often becomes intractable. This research applies mathematical modeling techniques to reduce computational cost and improve tractability for design, optimization, uncertainty quantification (UQ) and sensitivity analysis (SA) in astrodynamic systems and develops a method for trajectory optimization under uncertainty (OUU). This thesis demonstrates the use of surrogate regression models and polynomial chaos expansions for the purpose of design and UQ in the complex three-body system. Results are presented for the application of the models to the design of mid-field rendezvous maneuvers for spacecraft in three-body orbits. The models are shown to provide high accuracy with no a priori knowledge on the sample size required for convergence. Additionally, a method is developed for the direct incorporation of system uncertainties into the design process for the purpose of OUU and robust design; these methods are also applied to the rendezvous problem. It is shown that the models can be used for constrained optimization with orders of magnitude fewer samples than is required for a Monte Carlo approach to the same problem. Finally, this research considers an application for which regression models are not well-suited, namely UQ for the kinetic deflection of potentially hazardous asteroids under the assumptions of real asteroid shape models and uncertainties in the impact trajectory and the surface material properties of the asteroid, which produce a non-smooth system response. An alternate set of models is presented that enables analytic computation of the uncertainties in the imparted momentum from impact. Use of these models for a survey of asteroids allows conclusions to be drawn on the effects of an asteroid's shape on the ability to successfully divert the asteroid via kinetic impactor.
Roth, Christian J; Ismail, Mahmoud; Yoshihara, Lena; Wall, Wolfgang A
2017-01-01
In this article, we propose a comprehensive computational model of the entire respiratory system, which allows simulating patient-specific lungs under different ventilation scenarios and provides a deeper insight into local straining and stressing of pulmonary acini. We include novel 0D inter-acinar linker elements to respect the interplay between neighboring alveoli, an essential feature especially in heterogeneously distended lungs. The model is applicable to healthy and diseased patient-specific lung geometries. Presented computations in this work are based on a patient-specific lung geometry obtained from computed tomography data and composed of 60,143 conducting airways, 30,072 acini, and 140,135 inter-acinar linkers. The conducting airways start at the trachea and end before the respiratory bronchioles. The acini are connected to the conducting airways via terminal airways and to each other via inter-acinar linkers forming a fully coupled anatomically based respiratory model. Presented numerical examples include simulation of breathing during a spirometry-like test, measurement of a quasi-static pressure-volume curve using a supersyringe maneuver, and volume-controlled mechanical ventilation. The simulations show that our model incorporating inter-acinar dependencies successfully reproduces physiological results in healthy and diseased states. Moreover, within these scenarios, a deeper insight into local pressure, volume, and flow rate distribution in the human lung is investigated and discussed. Copyright © 2016 John Wiley & Sons, Ltd.
Toe, Kyaw Kyar; Huang, Weimin; Yang, Tao; Duan, Yuping; Zhou, Jiayin; Su, Yi; Teo, Soo-Kng; Kumar, Selvaraj Senthil; Lim, Calvin Chi-Wan; Chui, Chee Kong; Chang, Stephen
2015-08-01
This work presents a surgical training system that incorporates cutting operations on soft tissue, simulated with a modified pre-computed linear elastic model in the Simulation Open Framework Architecture (SOFA) environment. A pre-computed linear elastic model for the simulation of soft tissue deformation involves computing the compliance matrix a priori based on the topological information of the mesh. While this process may require a few minutes to several hours, depending on the number of vertices in the mesh, it needs to be computed only once and then allows real-time computation of the subsequent soft tissue deformation. However, because the compliance matrix is based on the initial topology of the mesh, it does not allow any topological changes during simulation, such as cutting or tearing of the mesh. This work proposes a way to modify the pre-computed data by correcting the topological connectivity in the compliance matrix, without re-computing the compliance matrix, which is computationally expensive.
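A minimal sketch of the precomputation trade-off described, assuming a linear finite-element system with stiffness matrix K: invert once offline, then each frame's deformation is a single matrix-vector product (the matrix here is a random SPD stand-in, not a real mesh stiffness matrix):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 300                                  # DOFs (3 x number of vertices)
    M = rng.standard_normal((n, n))
    K = M @ M.T + n * np.eye(n)              # SPD stand-in for a stiffness matrix

    C = np.linalg.inv(K)                     # offline: compliance matrix
                                             # (minutes to hours for large meshes,
                                             # but computed only once)

    f = np.zeros(n); f[0] = 1.0              # runtime: applied nodal forces
    u = C @ f                                # real-time deformation update
    print(u[:3])

Because C is tied to the initial mesh topology, a cut invalidates it; the work above addresses this by correcting the affected connectivity entries of C directly instead of re-inverting K.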
Toxcast and the Use of Human Relevant In Vitro Exposures ...
The path for incorporating new approach methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. These challenges include sufficient coverage of toxicological mechanisms to meaningfully interpret negative test results, development of increasingly relevant test systems, computational modeling to integrate experimental data, putting results in a dose and exposure context, characterizing uncertainty, and efficient validation of the test systems and computational models. The presentation will cover progress at the U.S. EPA in systematically addressing each of these challenges and delivering more human-relevant risk-based assessments. This abstract does not necessarily reflect U.S. EPA policy. Presentation at the British Toxicological Society Annual Congress on ToxCast and the use of human-relevant in vitro exposures: incorporating high-throughput exposure and toxicity testing data for 21st century risk assessments.
A novel representation of groundwater dynamics in large-scale land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan
2017-04-01
Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff generation mechanisms. However, contemporary Land Surface Models (LSM) generally consider isolated soil columns and free drainage lower boundary condition for simulating hydrology. This is mainly due to the fact that incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale applications (e.g., continental to global). Yet, these simplifications undermine the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach of representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. This new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental-scale.
Spatial luminescence imaging of dopant incorporation in CdTe Films
Guthrey, Harvey; Moseley, John; Colegrove, Eric; ...
2017-01-25
State-of-the-art cathodoluminescence (CL) spectrum imaging with spectrum-per-pixel CL emission mapping is applied to spatially profile how dopant elements are incorporated into cadmium telluride (CdTe). Emission spectra and intensity monitor the spatial distribution of additional charge carriers through characteristic variations in the CL emission, interpreted with the aid of computational modeling. Our results show that grain boundaries play a role in dopant incorporation in CdTe exposed to copper, phosphorus, and intrinsic point defects. Furthermore, the image analysis provides critical, unique feedback for understanding dopant incorporation and activation in the inhomogeneous CdTe material, which has struggled to reach high levels of hole density.
Tabulated Combustion Model Development For Non-Premixed Flames
NASA Astrophysics Data System (ADS)
Kundu, Prithwish
Turbulent non-premixed flames play a very important role in engineering applications ranging from power generation to propulsion. The coupling of fluid mechanics with the complicated combustion chemistry of fuels poses a challenge for the numerical modeling of this type of problem. Combustion modeling in Computational Fluid Dynamics (CFD) is one of the most important tools for predictive modeling of complex systems and for understanding the fundamentals of combustion. Traditional combustion models solve a transport equation for each species with a source term. To resolve the complex chemistry accurately it is important to include a large number of species; however, the computational cost is generally proportional to the cube of the number of species. The presence of a large number of species in a flame makes CFD computationally expensive and beyond reach for some applications, or inaccurate when solved with simplified chemistry. For highly turbulent flows, it also becomes important to incorporate the effects of turbulence-chemistry interaction (TCI). The aim of this work is to develop high-fidelity combustion models based on the flamelet concept and to significantly advance the existing capabilities. A thorough investigation of existing models (finite-rate chemistry and the Representative Interactive Flamelet (RIF) model) and a comparative study of combustion models were first carried out on a constant-volume combustion chamber with diesel fuel injection. The CFD modeling was validated against experimental results and was also successfully applied to a single-cylinder diesel engine. The effect of the number of flamelets on the RIF model and flamelet initialization strategies were studied. Because the RIF model with multiple flamelets is computationally expensive, a new model was proposed on the framework of RIF. The new model was based on tabulated chemistry and incorporated TCI effects. A multidimensional tabulated chemistry database generation code was developed based on a 1D diffusion flame solver. The proposed model did not use progress variables like traditional chemistry tabulation methods. The resulting model demonstrated an order-of-magnitude computational speed-up over the RIF model. The results were validated across a wide range of operating conditions for diesel injection and were in close agreement with the experimental data. The history of the scalar dissipation rate plays a very important role in non-premixed flames; however, tabulated methods have not been able to incorporate this physics. A comparative approach is developed that can quantify these effects and find correlations with flow variables. A new model is proposed to include these effects in tabulated combustion models. The model is first validated for 1D counterflow diffusion flame problems at engine conditions, and then implemented and validated in a 3D RANS code across a range of operating conditions for spray flames.
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.
2016-01-01
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
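StochSS's simulation engines are not reproduced here; as a minimal illustration of the kind of discrete stochastic simulation the service automates, here is a textbook Gillespie direct-method SSA for a birth-death process (the rate constants are made up for the demo).

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=1):
    """Direct-method SSA for  0 -> X  (rate k_birth) and  X -> 0  (rate k_death * x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])    # reaction propensities
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)          # waiting time to next event
        x += 1 if rng.uniform() * a0 < a[0] else -1
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

t, x = gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0)
print(f"final copy number: {x[-1]} (theoretical stationary mean {10.0 / 0.1:.0f})")
```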
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Drawert, Brian; Hellander, Andreas; Bales, Ben; ...
2016-12-08
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.
Capacity planning in a transitional economy: What issues? Which models?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mubayi, V.; Leigh, R.W.; Bright, R.N.
1996-03-01
This paper is devoted to an exploration of the important issues facing the Russian power generation system and its evolution in the foreseeable future, and the kinds of modeling approaches that capture those issues. These issues include, for example, (1) trade-offs between investments in upgrading and refurbishment of existing thermal (fossil-fired) capacity and safety enhancements in existing nuclear capacity versus investment in new capacity, (2) trade-offs between investment in completing unfinished (under construction) projects based on their original design versus investment in new capacity with improved design, (3) incorporation of demand-side management options (investments in enhancing end-use efficiency, for example) within the planning framework, (4) consideration of the spatial dimensions of system planning, including investments in upgrading electric transmission networks or fuel shipment networks and incorporating hydroelectric generation, (5) incorporation of environmental constraints, and (6) assessment of uncertainty and evaluation of downside risk. Models for exploring these issues range from low power shutdown (LPS) models, which are computationally very efficient, though approximate, and can be used to perform extensive sensitivity analyses, to more complex models that can provide more detailed answers but are computationally cumbersome and can only deal with limited issues. The paper discusses which models can usefully treat a wide range of issues within the priorities facing decision makers in the Russian power sector and integrate the results with investment decisions in the wider economy.
NASA Astrophysics Data System (ADS)
Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.
2017-12-01
This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
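For intuition, here is a one-dimensional sketch of the truncation step described above: the unconstrained Gaussian estimate is replaced by the mean and spread of the same density truncated to the known bounds. The paper works with the full multivariate filter; the bounds and numbers below are illustrative only.

```python
from scipy.stats import truncnorm

def constrain_estimate(mu, sigma, lo, hi):
    """Replace an unconstrained estimate N(mu, sigma^2) by the mean and std
    of the same Gaussian truncated to the physically known bounds [lo, hi]."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardized bounds
    d = truncnorm(a, b, loc=mu, scale=sigma)
    return d.mean(), d.std()

# A health parameter is physically confined to [0, 1], but the unconstrained
# filter reports N(-0.05, 0.1^2); truncation pulls the estimate inside bounds.
m, s = constrain_estimate(-0.05, 0.1, 0.0, 1.0)
print(f"constrained estimate: {m:.4f} +/- {s:.4f}")
```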
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
NASA Astrophysics Data System (ADS)
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will enormously advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) to develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle; 2) to develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions; and 3) to develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes have had a transforming effect on space physics and astrophysics. We expect that our new-generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.
Incorporation of RAM techniques into simulation modeling
NASA Astrophysics Data System (ADS)
Nelson, S. C., Jr.; Haire, M. J.; Schryver, J. C.
1995-01-01
This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model representing the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve operational performance and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment, and includes failure management subnetworks. RAM information and other performance measures are collected that have impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.
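A minimal sketch of the task-based simulation idea, assuming a made-up task list and exponentially distributed failure and repair times; the durations, MTBF, and MTTR below are illustrative, not AFAS/FARV data.

```python
import random

TASKS = [("upload", 20.0), ("travel_to_AFAS", 15.0), ("refuel", 10.0),
         ("tactical_move", 5.0), ("return_to_resupply", 15.0)]   # (name, minutes)
MTBF, MTTR = 120.0, 8.0      # mean time between failures / mean time to repair

def run_mission(seed):
    rng = random.Random(seed)
    clock = downtime = 0.0
    next_failure = rng.expovariate(1.0 / MTBF)
    for _name, duration in TASKS:
        remaining = duration
        while clock + remaining > next_failure:   # a failure interrupts the task
            remaining -= next_failure - clock
            clock = next_failure
            repair = rng.expovariate(1.0 / MTTR)
            clock += repair; downtime += repair
            next_failure = clock + rng.expovariate(1.0 / MTBF)
        clock += remaining
    return clock, downtime

results = [run_mission(s) for s in range(1000)]
mean_time = sum(t for t, _ in results) / len(results)
avail = sum(1.0 - d / t for t, d in results) / len(results)
print(f"mean mission time {mean_time:.1f} min, operational availability {avail:.3f}")
```

Collecting mission time and availability over many replications is the kind of RAM output that can then feed 'what if' design questions.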
Parr, W C H; Chamoli, U; Jones, A; Walsh, W R; Wroe, S
2013-01-04
Most modelling of whole bones does not incorporate trabecular geometry and treats bone as a solid non-porous structure. Some studies have modelled trabecular networks in isolation. One study has modelled the performance of whole human bones incorporating trabeculae, although this required considerable computer resources and purpose-written code. The difference between mechanical behaviour in models that incorporate trabecular geometry and non-porous models has not been explored. The ability to easily model trabecular networks may shed light on the mechanical consequences of bone loss in osteoporosis and remodelling after implant insertion. Here we present a Finite Element Analysis (FEA) of a human ankle bone that includes trabecular network geometry. We compare results from this model with results from non-porous models and introduce protocols achievable on desktop computers using widely available software. Our findings show that models including trabecular geometry are considerably stiffer than non-porous whole bone models wherein the non-cortical component has the same mass as the trabecular network, suggesting inclusion of trabecular geometry is desirable. We further present new methods for the construction and analysis of 3D models permitting: (1) construction of multi-property, non-porous models wherein cortical layer thickness can be manipulated; (2) maintenance of the same triangle network for the outer cortical bone surface in both 3D reconstruction and non-porous models, allowing exact replication of load and restraint cases; and (3) creation of an internal landmark point grid allowing direct comparison between 3D FE Models (FEMs). Copyright © 2012 Elsevier Ltd. All rights reserved.
Crowd Simulation Incorporating Agent Psychological Models, Roles and Communication
2005-01-01
system (PMFserv) that implements human behavior models from a range of ability, stress, emotion, decision-theoretic and motivation sources. ... autonomous agents, human behavior models, culture and emotions ... There are many applications of computer animation and simulation where ... We describe a new architecture to integrate a psychological model into a crowd simulation system in order to obtain believable emergent behaviors
A. Srivastava; J. Q. Wu; W. J. Elliot; E. S. Brooks; D. C. Flanagan
2017-01-01
The Water Erosion Prediction Project (WEPP) model was originally developed for hillslope and small watershed applications. Recent improvements to WEPP have led to enhanced computations for deep percolation, subsurface lateral flow, and frozen soil. In addition, the incorporation of channel routing has made the WEPP model well suited for large watersheds with perennial...
NASA Technical Reports Server (NTRS)
Mannucci, A. J.; Anderson, D. N.; Abdu, A. M.
1994-01-01
The Parametrized Real-Time Ionosphere Specification Model (PRISM) is a global ionospheric specification model that can incorporate real-time data to compute accurate electron density profiles. Time series of computed and measured data are compared in this paper. This comparison can be used to suggest methods of optimizing the PRISM adjustment algorithm for TEC data obtained at low latitudes.
Computational fluid dynamics combustion analysis evaluation
NASA Technical Reports Server (NTRS)
Kim, Y. M.; Shang, H. M.; Chen, C. P.; Ziebarth, J. P.
1992-01-01
This study involves the development of numerical modelling of spray combustion. These modelling efforts are mainly motivated by the need to improve computational efficiency in the stochastic particle tracking method as well as to incorporate the physical submodels of turbulence, combustion, vaporization, and dense spray effects. The present mathematical formulation and numerical methodologies can be cast in any time-marching pressure correction methodology (PCM), such as the FDNS code and MAST code. A sequence of validation cases involving steady burning sprays and transient evaporating sprays will be included.
Implications of a quadratic stream definition in radiative transfer theory.
NASA Technical Reports Server (NTRS)
Whitney, C.
1972-01-01
An explicit definition of the radiation-stream concept is stated and applied to approximate the integro-differential equation of radiative transfer with a set of twelve coupled differential equations. Computational efficiency is enhanced by distributing the corresponding streams in three-dimensional space in a totally symmetric way. Polarization is then incorporated in this model. A computer program based on the model is briefly compared with a Monte Carlo program for simulation of horizon scans of the earth's atmosphere. It is found to be considerably faster.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. The proposed approach is then applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
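The fuzzy Bayesian machinery itself is not reproduced here. The sketch below shows the screening idea in a plain Bayesian setting: a cheap surrogate posterior filters Metropolis proposals so the expensive model is evaluated only for proposals that survive stage one (delayed-acceptance MCMC, similar in spirit to the paper's surrogate screening). Both densities are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post_expensive(theta):   # stand-in for a costly groundwater model run
    return -0.5 * ((theta - 1.0) / 0.5) ** 2

def log_post_surrogate(theta):   # cheap, slightly biased approximation
    return -0.5 * ((theta - 0.9) / 0.6) ** 2

def delayed_acceptance(n, step=0.5, theta0=0.0):
    theta = theta0
    lp, lps = log_post_expensive(theta), log_post_surrogate(theta)
    chain, full_evals = [], 0
    for _ in range(n):
        prop = theta + step * rng.standard_normal()
        lps_prop = log_post_surrogate(prop)
        # Stage 1: screen the proposal with the surrogate only
        if np.log(rng.uniform()) < lps_prop - lps:
            full_evals += 1
            lp_prop = log_post_expensive(prop)
            # Stage 2: correct with the expensive model (keeps the chain exact)
            if np.log(rng.uniform()) < (lp_prop - lp) - (lps_prop - lps):
                theta, lp, lps = prop, lp_prop, lps_prop
        chain.append(theta)
    return np.array(chain), full_evals

chain, evals = delayed_acceptance(20000)
print(f"posterior mean {chain.mean():.3f}; expensive-model calls: {evals} of 20000")
```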
Technology and Online Education: Models for Change
ERIC Educational Resources Information Center
Cook, Catherine W.; Sonnenberg, Christian
2014-01-01
This paper contends that technology changes advance online education. A number of mobile computing and transformative technologies will be examined and incorporated into a descriptive study. The object of the study will be to design innovative mobile awareness models seeking to understand technology changes for mobile devices and how they can be…
Modeling Interactions in Small Groups
ERIC Educational Resources Information Center
Heise, David R.
2013-01-01
A new theory of interaction within small groups posits that group members initiate actions when tension mounts between the affective meanings of their situational identities and impressions produced by recent events. Actors choose partners and behaviors so as to reduce the tensions. A computer model based on this theory, incorporating reciprocal…
Demonstration of the Capabilities of the KINEROS2 – AGWA 3.0 Suite of Modeling Tools
This poster and computer demonstration illustrates a sampling of the wide range of applications that are possible using the KINEROS2 - AGWA suite of modeling tools. Applications include: 1) Incorporation of Low Impact Development (LID) features; 2) A real-time flash flood forecas...
Brain-computer interface with language model-electroencephalography fusion for locked-in syndrome.
Oken, Barry S; Orhan, Umut; Roark, Brian; Erdogmus, Deniz; Fowler, Andrew; Mooney, Aimee; Peters, Betts; Miller, Meghan; Fried-Oken, Melanie B
2014-05-01
Some noninvasive brain-computer interface (BCI) systems are currently available for locked-in syndrome (LIS), but none have incorporated a statistical language model during text generation. Our aim was to begin to address the communication needs of individuals with LIS using a noninvasive BCI that involves rapid serial visual presentation (RSVP) of symbols and a unique classifier with electroencephalography (EEG) and language model fusion. The RSVP Keyboard was developed with several unique features. Individual letters are presented at 2.5 per second. Computer classification of letters as targets or nontargets based on EEG is performed using machine learning that incorporates a language model for letter prediction via Bayesian fusion, enabling targets to be presented only 1 to 4 times. Nine participants with LIS and 9 healthy controls were enrolled. After screening, subjects first calibrated the system and then completed a series of balanced word generation mastery tasks designed with 5 incremental levels of difficulty, which increased by selecting phrases for which the utility of the language model decreased naturally. Six participants with LIS and 9 controls completed the experiment. All LIS participants successfully mastered spelling at level 1, and one subject achieved level 5. Six of 9 control participants achieved level 5. Individuals who have incomplete LIS may benefit from an EEG-based BCI system that relies on EEG classification and a statistical language model. Steps to further improve the system are discussed.
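A toy sketch of the fusion step described above: a language-model prior over candidate symbols is multiplied by per-presentation EEG likelihoods until one symbol's posterior crosses a confidence threshold. The symbol set, probabilities, and threshold are all invented.

```python
import numpy as np

letters = np.array(list("abcd"))               # toy symbol set
lm_prior = np.array([0.55, 0.25, 0.15, 0.05])  # language-model P(next symbol)
posterior = lm_prior.copy()

rng = np.random.default_rng(3)
target = 0                                     # the user intends 'a'
for presentation in range(4):
    # Invented EEG evidence: likelihood of the observed response under the
    # hypothesis that each symbol was the attended target (favors the truth).
    like = 0.4 + 0.4 * (np.arange(4) == target) + 0.05 * rng.standard_normal(4)
    like = np.clip(like, 1e-3, None)
    posterior *= like                          # Bayesian fusion with the prior
    posterior /= posterior.sum()
    if posterior.max() > 0.95:                 # confident enough: stop presenting
        break

best = letters[posterior.argmax()]
print(f"selected '{best}' after {presentation + 1} presentation(s), P = {posterior.max():.3f}")
```

A strong prior means likely letters need very few presentations, which is how the language model buys typing speed.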
CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling
NASA Astrophysics Data System (ADS)
Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.
2012-12-01
The Community Surface Dynamics Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies) and a CSDMS Industrial Consortia (18 companies). CSDMS presently distributes more than 200 Open Source models and modeling tools, access to high performance computing clusters in support of developing and running models, and a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data is being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the Modeling Framework. Six new community initiatives are being pursued: 1) an earth-ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and their multiple threats and stressors, 5) a continental margin modeling initiative, to capture extreme oceanic and atmospheric events generating turbidity currents in the Gulf of Mexico, and 6) a CZO Focus Research Group, to develop compatibility between CSDMS architecture and protocols and Critical Zone Observatory-developed models and data.
ERIC Educational Resources Information Center
Finch, Harold L.; Tatham, Elaine L.
This document presents a modified cohort survival model which can be of use in making enrollment projections. The model begins by analytically profiling an area's residents. Each person's demographic characteristics--sex, age, place of residence--are recorded in the computer memory. Four major input variables are then incorporated into the model:…
Rapid prototyping of soil moisture estimates using the NASA Land Information System
NASA Astrophysics Data System (ADS)
Anantharaj, V.; Mostovoy, G.; Li, B.; Peters-Lidard, C.; Houser, P.; Moorhead, R.; Kumar, S.
2007-12-01
The Land Information System (LIS), developed at the NASA Goddard Space Flight Center, is a functional Land Data Assimilation System (LDAS) that incorporates a suite of land models in an interoperable computational framework. LIS has been integrated into a computational Rapid Prototyping Capabilities (RPC) infrastructure. LIS consists of a core, a number of community land models, data servers, and visualization systems, integrated in a high-performance computing environment. The land surface models (LSMs) in LIS incorporate surface and atmospheric parameters of temperature, snow/water, vegetation, albedo, soil conditions, topography, and radiation. Many of these parameters are available from in-situ observations, numerical model analysis, and from NASA, NOAA, and other remote sensing satellite platforms at various spatial and temporal resolutions. The computational resources available to LIS via the RPC infrastructure support e-Science experiments involving global land-atmosphere modeling studies at 1-km spatial resolution as well as regional studies at finer resolutions. The Noah Land Surface Model, available within LIS, is being used to rapidly prototype soil moisture estimates in order to evaluate the viability of other science applications for decision making purposes. For example, LIS has been used to further extend the utility of the USDA Soil Climate Analysis Network of in-situ soil moisture observations. In addition, LIS also supports data assimilation capabilities that are used to assimilate remotely sensed soil moisture retrievals from the AMSR-E instrument onboard the Aqua satellite. The rapid prototyping of soil moisture estimates using LIS and their applications will be illustrated during the presentation.
Durham, David P; Casman, Elizabeth A
2012-03-07
It is anticipated that the next generation of computational epidemic models will simulate both infectious disease transmission and dynamic human behaviour change. Individual agents within a simulation will not only infect one another, but will also have situational awareness and a decision algorithm that enables them to modify their behaviour. This paper develops such a model of behavioural response, presenting a mathematical interpretation of a well-known psychological model of individual decision making, the health belief model, suitable for incorporation within an agent-based disease-transmission model. We formalize the health belief model and demonstrate its application in modelling the prevalence of facemask use observed over the course of the 2003 Hong Kong SARS epidemic, a well-documented example of behaviour change in response to a disease outbreak.
Durham, David P.; Casman, Elizabeth A.
2012-01-01
It is anticipated that the next generation of computational epidemic models will simulate both infectious disease transmission and dynamic human behaviour change. Individual agents within a simulation will not only infect one another, but will also have situational awareness and a decision algorithm that enables them to modify their behaviour. This paper develops such a model of behavioural response, presenting a mathematical interpretation of a well-known psychological model of individual decision making, the health belief model, suitable for incorporation within an agent-based disease-transmission model. We formalize the health belief model and demonstrate its application in modelling the prevalence of facemask use observed over the course of the 2003 Hong Kong SARS epidemic, a well-documented example of behaviour change in response to a disease outbreak. PMID:21775324
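A minimal sketch of how health-belief-model constructs might map onto an agent's adoption probability inside such a simulation; the logistic form and every coefficient below are hypothetical, not the paper's fitted values.

```python
import math, random

def p_adopt_mask(susceptibility, severity, benefits_minus_barriers, cue):
    """Hypothetical logistic mapping of health-belief-model constructs
    (each scaled to [0, 1]) onto a per-day probability of mask adoption."""
    z = (-6.0 + 3.0 * susceptibility + 2.0 * severity
         + 2.5 * benefits_minus_barriers + 1.5 * cue)
    return 1.0 / (1.0 + math.exp(-z))

random.seed(7)
prevalence, adopters, n = 0.02, 0, 10_000
for _ in range(n):
    # perceived susceptibility rises with reported case prevalence
    susc = min(1.0, prevalence * random.uniform(20, 60))
    if random.random() < p_adopt_mask(susc, severity=0.8,
                                      benefits_minus_barriers=0.6, cue=0.5):
        adopters += 1
print(f"daily adoption fraction at {prevalence:.0%} prevalence: {adopters / n:.3f}")
```

In a full agent-based model the prevalence term would come from the concurrent transmission dynamics, closing the behaviour-disease feedback loop the abstract describes.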
Goode, D.J.; Konikow, Leonard F.
1989-01-01
The U.S. Geological Survey computer model of two-dimensional solute transport and dispersion in ground water (Konikow and Bredehoeft, 1978) has been modified to incorporate the following types of chemical reactions: (1) first-order irreversible rate-reaction, such as radioactive decay; (2) reversible equilibrium-controlled sorption with linear, Freundlich, or Langmuir isotherms; and (3) reversible equilibrium-controlled ion exchange for monovalent or divalent ions. Numerical procedures are developed to incorporate these processes in the general solution scheme that uses the method-of-characteristics with particle tracking for advection and finite-difference methods for dispersion. The first type of reaction is accounted for by an exponential decay term applied directly to the particle concentration. The second and third types of reactions are incorporated through a retardation factor, which is a function of concentration for nonlinear cases. The model is evaluated and verified by comparison with analytical solutions for linear sorption and decay, and by comparison with other numerical solutions for nonlinear sorption and ion exchange.
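A small sketch of how the first two reaction types act on method-of-characteristics particles: first-order decay enters as an exponential factor applied to each particle's concentration, and linear equilibrium sorption enters as a retardation factor dividing the advective velocity. Parameter values are illustrative, not from the USGS model.

```python
import numpy as np

lam = np.log(2) / 30.0            # first-order decay constant (half-life 30 days)
rho_b, n_por, Kd = 1.6, 0.3, 0.5  # bulk density, porosity, linear Kd (illustrative)
R = 1.0 + rho_b * Kd / n_por      # retardation factor for a linear isotherm

def advance_particles(x, c, v, dt):
    """Advect particles at the retarded velocity and decay their concentration."""
    x = x + (v / R) * dt           # sorption slows advection by the factor R
    c = c * np.exp(-lam * dt)      # radioactive-style first-order decay
    return x, c

x, c, v = np.zeros(5), np.ones(5), 0.1   # positions (m), concentrations, m/day
for _ in range(100):                      # simulate 100 days
    x, c = advance_particles(x, c, v, dt=1.0)
print(f"front at {x[0]:.1f} m (vs {0.1 * 100:.0f} m unretarded); c = {c[0]:.3f}")
```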
NASA Astrophysics Data System (ADS)
Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.
2010-12-01
Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foulk, James W.; Alleman, Coleman N.; Mota, Alejandro
The heterogeneity in mechanical fields introduced by microstructure plays a critical role in the localization of deformation. To resolve this incipient stage of failure, it is therefore necessary to incorporate microstructure with sufficient resolution. On the other hand, computational limitations make it infeasible to represent the microstructure in the entire domain at the component scale. In this study, the authors demonstrate the use of concurrent multiscale modeling to incorporate explicit, finely resolved microstructure in a critical region while resolving the smoother mechanical fields outside this region with a coarser discretization to limit computational cost. The microstructural physics is modeled with a high-fidelity model that incorporates anisotropic crystal elasticity and rate-dependent crystal plasticity to simulate the behavior of a stainless steel alloy. The component-scale material behavior is treated with a lower-fidelity model incorporating isotropic linear elasticity and rate-independent J2 plasticity. The microstructural and component-scale subdomains are modeled concurrently, with coupling via the Schwarz alternating method, which solves boundary-value problems in each subdomain separately and transfers solution information between subdomains via Dirichlet boundary conditions. Beyond case studies in concurrent multiscale modeling, we explore progress in crystal plasticity through modular designs, solution methodologies, model verification, and extensions to Sierra/SM and manycore applications. Advances in conformal microstructures having both hexahedral and tetrahedral workflows in Sculpt and Cubit are highlighted. A structure-property case study in two-phase metallic composites applies the Materials Knowledge System to local metrics for void evolution. Discussion includes lessons learned, future work, and a summary of funded efforts and proposed work. Finally, an appendix illustrates the need for two-way coupling through a single degree of freedom.
Computational Fluid Dynamics of Whole-Body Aircraft
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh
1999-01-01
The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.
Optimized mixed Markov models for motif identification
Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping
2006-01-01
Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data, or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929
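OMiMa itself is not reimplemented here; the sketch below shows the basic ingredient such models optimize over: scoring a candidate site with a position-specific first-order Markov model against a uniform background, using made-up training sites.

```python
import math
from collections import defaultdict

BASES = "ACGT"

def train_markov1(sites, pseudo=1.0):
    """Position-specific first-order Markov model from aligned motif sites."""
    L = len(sites[0])
    p0 = {b: pseudo for b in BASES}                       # position-0 marginal
    trans = [defaultdict(lambda: pseudo) for _ in range(L - 1)]
    for s in sites:
        p0[s[0]] += 1
        for i in range(L - 1):
            trans[i][(s[i], s[i + 1])] += 1
    z0 = sum(p0.values())
    p0 = {b: v / z0 for b, v in p0.items()}
    for t in trans:                                       # normalize per previous base
        for prev in BASES:
            z = sum(t[(prev, b)] for b in BASES)
            for b in BASES:
                t[(prev, b)] /= z
    return p0, trans

def log_odds(seq, p0, trans, bg=0.25):
    """Log-odds of seq under the motif model versus a uniform background."""
    ll = math.log(p0[seq[0]] / bg)
    for i in range(len(seq) - 1):
        ll += math.log(trans[i][(seq[i], seq[i + 1])] / bg)
    return ll

sites = ["ACGTA", "ACGTC", "ACGAA", "TCGTA"]              # toy training alignment
p0, trans = train_markov1(sites)
print(f"score(ACGTA) = {log_odds('ACGTA', p0, trans):.2f}, "
      f"score(GGGGG) = {log_odds('GGGGG', p0, trans):.2f}")
```

OMiMa's contribution, per the abstract, is choosing the order and mixture of such models automatically rather than fixing one by hand.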
NHEERL RESEARCH ON CARCINOGENIC CONTAMINANTS IN DRINKING WATER
Water research in the Environmental Carcinogenesis Division focuses on improved understanding of the mechanisms of mutagenesis and carcinogenesis of water contaminants for incorporation into human cancer risk assessment models. The program uses cellular, animal, and computer mo...
Finite difference time domain electromagnetic scattering from frequency-dependent lossy materials
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.
1991-01-01
Four different FDTD computer codes and companion Radar Cross Section (RCS) conversion codes on magnetic media are submitted. A single three-dimensional dispersive FDTD code for both dispersive dielectric and magnetic materials was developed, along with a user's manual. FDTD was extended to more complicated materials. The code is efficient and is capable of modeling interesting radar targets using a modest computer workstation platform. RCS results for two different plate geometries are reported. The FDTD method was also extended to computing far-zone time-domain results in two dimensions. In addition, the capability to model nonlinear materials was incorporated into FDTD and validated.
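The submitted codes are not reproduced here; as a reminder of the update structure that dispersive-material extensions modify, here is a textbook one-dimensional FDTD loop in normalized units (Gaussian soft source; crude PEC walls for brevity).

```python
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)           # electric field on the integer grid
hy = np.zeros(nz - 1)       # magnetic field on the staggered half-grid
src = 50                    # source location

for n in range(nt):
    hy += 0.5 * (ez[1:] - ez[:-1])                # Courant number 0.5
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
    ez[src] += np.exp(-((n - 40) / 12.0) ** 2)    # Gaussian pulse source
    ez[0] = ez[-1] = 0.0                          # PEC walls (reflecting)

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```

A frequency-dependent material replaces the plain field update with one carrying a recursive convolution or auxiliary polarization term, which is the kind of extension the report describes.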
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
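A small numerical sketch of the Jaynes construction discussed above: reweight an ensemble with weights proportional to exp(-λ f(x)) and solve for the single multiplier λ that makes the reweighted average match the experimental value. The samples and target are invented; a multi-observable problem needs one multiplier per restraint, found iteratively.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
f = rng.normal(1.0, 0.5, size=5000)    # observable evaluated on a prior ensemble
f_exp = 1.2                            # "experimental" target average

def reweighted_mean(lam):
    w = np.exp(-lam * (f - f.mean()))  # shift exponent for numerical stability
    w /= w.sum()
    return np.sum(w * f)

lam = brentq(lambda l: reweighted_mean(l) - f_exp, -50.0, 50.0)
print(f"lambda = {lam:.3f}; reweighted <f> = {reweighted_mean(lam):.3f} (target {f_exp})")
```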
Large-Eddy Simulation of Aeroacoustic Applications
NASA Technical Reports Server (NTRS)
Pruett, C. David; Sochacki, James S.
1999-01-01
This report summarizes work accomplished under a one-year NASA grant from NASA Langley Research Center (LaRC). The effort culminates three years of NASA-supported research under three consecutive one-year grants. The period of support was April 6, 1998, through April 5, 1999. By request, the grant period was extended at no cost until October 6, 1999. This grant and its predecessors have been directed toward adapting the numerical tool of large-eddy simulation (LES) to aeroacoustic applications, with particular focus on noise suppression in subsonic round jets. In LES, the filtered Navier-Stokes equations are solved numerically on a relatively coarse computational grid. Residual stresses, generated by scales of motion too small to be resolved on the coarse grid, are modeled. Although most LES incorporate spatial filtering, time-domain filtering affords certain conceptual and computational advantages, particularly for aeroacoustic applications. Consequently, this work has focused on the development of subgrid-scale (SGS) models that incorporate time-domain filters.
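A sketch of the time-domain filtering idea, assuming a causal exponential filter: the filtered field obeys d(u_bar)/dt = (u - u_bar)/Delta and is integrated alongside the simulation, so no history of past time levels needs to be stored. The signal and filter width below are illustrative.

```python
import numpy as np

dt, delta = 1e-3, 0.05          # time step and filter width (s)
t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)  # slow + fast scales

u_bar = np.zeros_like(u)
for n in range(1, len(t)):
    # causal exponential time filter:  d(u_bar)/dt = (u - u_bar) / delta
    u_bar[n] = u_bar[n - 1] + (dt / delta) * (u[n - 1] - u_bar[n - 1])

resid = u - u_bar               # residual (subfilter) signal left to the SGS model
print(f"rms of filtered field: {u_bar.std():.3f}; rms of residual: {resid.std():.3f}")
```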
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes needs less computation, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the trade-offs between computational complexity (determined by spatio-temporal resolution), accuracy, and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and time scales of hydrologic processes that are dominant in different parts of the basin differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch Front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient, and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multiprocessor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.
Computational Modeling System for Deformation and Failure in Polycrystalline Metals
2009-03-29
FIB/EHSD 3.3 The Voronoi Cell FEM for Micromechanical Modeling 3.4 VCFEM for Microstructural Damage Modeling 3.5 Adaptive Multiscale Simulations ... accurate and efficient image-based micromechanical finite element model, for crystal plasticity and damage, incorporating real morphological and ... topology with evolving strain localization and damage. (v) Development of multi-scaling algorithms in the time domain for compression and localization in
Direct coal liquefaction baseline design and system analysis. Quarterly report, January--March 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-04-01
The primary objective of the study is to develop a computer model for a base line direct coal liquefaction design based on two stage direct coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a base line design based on previous DOE/PETC results from Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC Staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.
Direct coal liquefaction baseline design and system analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-04-01
The primary objective of the study is to develop a computer model for a base line direct coal liquefaction design based on two stage direct coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a base line design based on previous DOE/PETC results from Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC Staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.
OnGuard, a Computational Platform for Quantitative Kinetic Modeling of Guard Cell Physiology
Hills, Adrian; Chen, Zhong-Hua; Amtmann, Anna; Blatt, Michael R.; Lew, Virgilio L.
2012-01-01
Stomatal guard cells play a key role in gas exchange for photosynthesis while minimizing transpirational water loss from plants by opening and closing the stomatal pore. Foliar gas exchange has long been incorporated into mathematical models, several of which are robust enough to recapitulate transpirational characteristics at the whole-plant and community levels. Few models of stomata have been developed from the bottom up, however, and none are sufficiently generalized to be widely applicable in predicting stomatal behavior at a cellular level. We describe here the construction of computational models for the guard cell, building on the wealth of biophysical and kinetic knowledge available for guard cell transport, signaling, and homeostasis. The OnGuard software was constructed with the HoTSig library to incorporate explicitly all of the fundamental properties for transporters at the plasma membrane and tonoplast, the salient features of osmolyte metabolism, and the major controls of cytosolic-free Ca2+ concentration and pH. The library engenders a structured approach to tier and interrelate computational elements, and the OnGuard software allows ready access to parameters and equations 'on the fly' while enabling the network of components within each model to interact computationally. We show that an OnGuard model readily achieves stability in a set of physiologically sensible baseline or Reference States; we also show the robustness of these Reference States in adjusting to changes in environmental parameters and the activities of major groups of transporters both at the tonoplast and plasma membrane. The following article addresses the predictive power of the OnGuard model to generate unexpected and counterintuitive outputs. PMID:22635116
A Computational Fluid Dynamic Model for a Novel Flash Ironmaking Process
NASA Astrophysics Data System (ADS)
Perez-Fontes, Silvia E.; Sohn, Hong Yong; Olivas-Martinez, Miguel
A computational fluid dynamic model for a novel flash ironmaking process based on the direct gaseous reduction of iron oxide concentrates is presented. The model solves the three-dimensional governing equations including both gas-phase and gas-solid reaction kinetics. The turbulence-chemistry interaction in the gas-phase is modeled by the eddy dissipation concept incorporating chemical kinetics. The particle cloud model is used to track the particle phase in a Lagrangian framework. A nucleation and growth kinetics rate expression is adopted to calculate the reduction rate of magnetite concentrate particles. Benchmark experiments reported in the literature for a nonreacting swirling gas jet and a nonpremixed hydrogen jet flame were simulated for validation. The model predictions showed good agreement with measurements in terms of gas velocity, gas temperature and species concentrations. The relevance of the computational model for the analysis of a bench reactor operation and the design of an industrial-pilot plant is discussed.
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.
2015-03-01
We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
Modeling the complete Otto cycle: Preliminary version. [computer programming
NASA Technical Reports Server (NTRS)
Zeleznik, F. J.; Mcbride, B. J.
1977-01-01
A description is given of the equations and the computer program being developed to model the complete Otto cycle. The program incorporates such important features as: (1) heat transfer, (2) finite combustion rates, (3) complete chemical kinetics in the burned gas, (4) exhaust gas recirculation, and (5) manifold vacuum or supercharging. Changes in thermodynamic, kinetic and transport data as well as model parameters can be made without reprogramming. Preliminary calculations indicate that: (1) chemistry and heat transfer significantly affect composition and performance, (2) there seems to be a strong interaction among model parameters, and (3) a number of cycles must be calculated in order to obtain steady-state conditions.
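For orientation, the ideal air-standard Otto cycle that the program generalizes (by adding heat transfer, finite combustion rates, kinetics, EGR, and manifold effects) can be computed in a few lines; all numbers below are illustrative.

```python
# Ideal air-standard Otto cycle: the limiting case the full model extends.
gamma, r = 1.4, 9.5          # specific-heat ratio, compression ratio
T1 = 300.0                   # intake temperature (K)
q_in = 1.8e6                 # heat added per kg of charge (J/kg), illustrative
cv = 718.0                   # specific heat at constant volume, J/(kg K)

T2 = T1 * r ** (gamma - 1)   # isentropic compression
T3 = T2 + q_in / cv          # constant-volume heat addition
T4 = T3 * r ** (1 - gamma)   # isentropic expansion
eta = 1 - r ** (1 - gamma)   # thermal efficiency (= 1 - T1/T2)

print(f"T2 = {T2:.0f} K, T3 = {T3:.0f} K, T4 = {T4:.0f} K, efficiency = {eta:.3f}")
```

The full model's finite-rate chemistry and heat-transfer terms lower this ideal efficiency and shift the state points, consistent with the abstract's finding that chemistry and heat transfer significantly affect composition and performance.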
Analysis of thermo-chemical nonequilibrium models for carbon dioxide flows
NASA Technical Reports Server (NTRS)
Rock, Stacey G.; Candler, Graham V.; Hornung, Hans G.
1992-01-01
The aerothermodynamics of thermochemical nonequilibrium carbon dioxide flows is studied. The chemical kinetics models of McKenzie and Park are implemented in separate three-dimensional computational fluid dynamics codes. The codes incorporate a five-species gas model characterized by a translational-rotational and a vibrational temperature. Solutions are obtained for flow over finite length elliptical and circular cylinders. The computed flowfields are then employed to calculate Mach-Zehnder interferograms for comparison with experimental data. The accuracy of the chemical kinetics models is determined through this comparison. Also, the methodology of the three-dimensional thermochemical nonequilibrium code is verified by the reproduction of the experiments.
A LabVIEW model incorporating an open-loop arterial impedance and a closed-loop circulatory system.
Cole, R T; Lucas, C L; Cascio, W E; Johnson, T A
2005-11-01
While numerous computer models exist for the circulatory system, many are limited in scope, contain unwanted features or incorporate complex components specific to unique experimental situations. Our purpose was to develop a basic, yet multifaceted, computer model of the left heart and systemic circulation in LabVIEW having universal appeal without sacrificing crucial physiologic features. The program we developed employs Windkessel-type impedance models in several open-loop configurations and a closed-loop model coupling a lumped impedance and ventricular pressure source. The open-loop impedance models demonstrate afterload effects on arbitrary aortic pressure/flow inputs. The closed-loop model catalogs the major circulatory waveforms with changes in afterload, preload, and left heart properties. Our model provides an avenue for expanding the use of the ventricular equations through closed-loop coupling that includes a basic coronary circuit. Tested values used for the afterload components and the effects of afterload parameter changes on various waveforms are consistent with published data. We conclude that this model offers the ability to alter several circulatory factors and digitally catalog the most salient features of the pressure/flow waveforms employing a user-friendly platform. These features make the model a useful instructional tool for students as well as a simple experimental tool for cardiovascular research.
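As a point of reference for the impedance models mentioned above, a two-element Windkessel can be integrated directly with SciPy; the resistance, compliance, and half-sine inflow waveform below are illustrative stand-ins, not the components of the LabVIEW model.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0e8     # peripheral resistance [Pa s / m^3]
C = 1.0e-8    # arterial compliance [m^3 / Pa]
T = 0.8       # cardiac period [s]

def flow(t):
    """Half-sine aortic inflow during systole, zero during diastole."""
    phase = t % T
    return 5e-4 * np.sin(np.pi * phase / (0.3 * T)) if phase < 0.3 * T else 0.0

def dPdt(t, P):
    # Two-element Windkessel: C dP/dt = Q(t) - P/R
    return [(flow(t) - P[0] / R) / C]

sol = solve_ivp(dPdt, (0.0, 10 * T), [1.0e4], max_step=1e-3)
late = sol.t > 9 * T    # last beat, after start-up transients
print(f"diastolic ~ {sol.y[0][late].min():.0f} Pa, "
      f"systolic ~ {sol.y[0][late].max():.0f} Pa")
```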
Bayesian calibration for electrochemical thermal model of lithium-ion cells
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang
2016-07-01
The pseudo-two-dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to the computational cost and the transient nature of the model. Due to the lack of complete physical understanding, this issue is aggravated at extreme conditions such as low temperature (LT) operation. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K to 263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before their incorporation in the model. This capability is demonstrated by introducing temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of the new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low temperature lithium ion cell behavior.
Incorporation of Monitoring Systems to Model Irrigated Cotton at a Landscape Level
USDA-ARS?s Scientific Manuscript database
Advances in computer speed, industry IT core capabilities, and available soils and weather information have resulted in the need for “cropping system models” that address in detail the spatial and temporal water, energy and carbon balance of the system at a landscape scale. Many of these models have...
Incorporating Solid Modeling and Team-Based Design into Freshman Engineering Graphics.
ERIC Educational Resources Information Center
Buchal, Ralph O.
2001-01-01
Describes the integration of these topics through a major team-based design and computer aided design (CAD) modeling project in freshman engineering graphics at the University of Western Ontario. Involves n=250 students working in teams of four to design and document an original Lego toy. Includes 12 references. (Author/YDS)
NASA Technical Reports Server (NTRS)
1981-01-01
Relevant differences between the MPPM-resident IBM 370 computer and the NASA Sigma 9 computer are described, as well as the MPPM system itself and its development. Problems encountered and the solutions used to overcome these difficulties during installation of the MPPM system at MSFC are discussed. Remaining work on the installation effort is summarized. The relevant hardware features incorporated in the program are described and their implications for the transportability of the MPPM source code are examined.
Plontke, Stefan K; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N
2007-01-01
Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. Further development of the 3D model will have to incorporate a more accurate geometry of the entire inner ear and incorporate more of the specific processes that contribute to drug removal from the inner ear fluids. Appropriate computer models may assist in both drug and drug delivery system design and can thus accelerate the development of a rationale-based local drug delivery to the inner ear and its successful establishment in clinical practice. Copyright 2007 S. Karger AG, Basel.
Plontke, Stefan K.; Siedow, Norbert; Wegener, Raimund; Zenner, Hans-Peter; Salt, Alec N.
2006-01-01
Hypothesis: Cochlear fluid pharmacokinetics can be better represented by three-dimensional (3D) finite-element simulations of drug dispersal. Background: Local drug deliveries to the round window membrane are increasingly being used to treat inner ear disorders. Crucial to the development of safe therapies is knowledge of drug distribution in the inner ear with different delivery methods. Computer simulations allow application protocols and drug delivery systems to be evaluated, and may permit animal studies to be extrapolated to the larger cochlea of the human. Methods: A finite-element 3D model of the cochlea was constructed based on geometric dimensions of the guinea pig cochlea. Drug propagation along and between compartments was described by passive diffusion. To demonstrate the potential value of the model, methylprednisolone distribution in the cochlea was calculated for two clinically relevant application protocols using pharmacokinetic parameters derived from a prior one-dimensional (1D) model. In addition, a simplified geometry was used to compare results from 3D with 1D simulations. Results: For the simplified geometry, calculated concentration profiles with distance were in excellent agreement between the 1D and the 3D models. Different drug delivery strategies produce very different concentration time courses, peak concentrations and basal-apical concentration gradients of drug. In addition, 3D computations demonstrate the existence of substantial gradients across the scalae in the basal turn. Conclusion: The 3D model clearly shows the presence of drug gradients across the basal scalae of guinea pigs, demonstrating the necessity of a 3D approach to predict drug movements across and between scalae with larger cross-sectional areas, such as the human, with accuracy. This is the first model to incorporate the volume of the spiral ligament and to calculate diffusion through this structure. Further development of the 3D model will have to incorporate a more accurate geometry of the entire inner ear and incorporate more of the specific processes that contribute to drug removal from the inner ear fluids. Appropriate computer models may assist in both drug and drug delivery system design and can thus accelerate the development of a rationale-based local drug delivery to the inner ear and its successful establishment in clinical practice. PMID:17119332
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Application of bayesian networks to real-time flood risk estimation
NASA Astrophysics Data System (ADS)
Garrote, L.; Molina, M.; Blasco, G.
2003-04-01
This paper presents the application of a computational paradigm taken from the field of artificial intelligence, the Bayesian network, to model the behaviour of hydrologic basins during floods. The final goal of this research is to develop representation techniques for hydrologic simulation models in order to define, develop and validate a mechanism, supported by a software environment, oriented to build decision models for the prediction and management of river floods in real time. The emphasis is placed on providing decision makers with tools to incorporate their knowledge of basin behaviour, usually formulated in terms of rainfall-runoff models, in the process of real-time decision making during floods. A rainfall-runoff model is only a step in the process of decision making. If a reliable rainfall forecast is available and the rainfall-runoff model is well calibrated, decisions can be based mainly on model results. However, in most practical situations, uncertainties in rainfall forecasts or model performance have to be incorporated in the decision process. The computational paradigm adopted for the simulation of hydrologic processes is the Bayesian network. A Bayesian network is a directed acyclic graph that represents causal influences between linked variables. Under this representation, uncertain qualitative variables are related through causal relations quantified with conditional probabilities. The solution algorithm allows the computation of the expected probability distribution of unknown variables conditioned on the observations. An approach to represent hydrologic processes by Bayesian networks with temporal and spatial extensions is presented in this paper, together with a methodology for the development of Bayesian models using results produced by deterministic hydrologic simulation models.
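To illustrate the inference pattern described above, here is a toy discrete Bayesian network (rainfall -> runoff -> flood) evaluated by direct enumeration; the variables, states, and conditional probabilities are invented for illustration and are not taken from the paper's basin models.

```python
# P(rain), P(runoff | rain) and P(flood | runoff) as plain tables.
P_rain = {"heavy": 0.2, "light": 0.8}
P_runoff = {("high", "heavy"): 0.7, ("low", "heavy"): 0.3,
            ("high", "light"): 0.1, ("low", "light"): 0.9}
P_flood = {("yes", "high"): 0.6, ("no", "high"): 0.4,
           ("yes", "low"): 0.05, ("no", "low"): 0.95}

# P(flood = yes | rain = heavy): marginalize the hidden runoff state.
p = sum(P_runoff[(ro, "heavy")] * P_flood[("yes", ro)]
        for ro in ("high", "low"))
print(f"P(flood | heavy rain forecast) = {p:.3f}")   # 0.435
```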
Robust Flutter Margin Analysis that Incorporates Flight Data
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Martin J.
1998-01-01
An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Z.; Bessa, M. A.; Liu, W.K.
A predictive computational theory is presented for modeling complex, hierarchical materials ranging from metal alloys to polymer nanocomposites. The theory can capture complex mechanisms such as plasticity and failure that span multiple length scales. This general multiscale material modeling theory relies on sound principles of mathematics and mechanics, and a cutting-edge reduced order modeling method named self-consistent clustering analysis (SCA) [Zeliang Liu, M.A. Bessa, Wing Kam Liu, “Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials,” Comput. Methods Appl. Mech. Engrg. 306 (2016) 319–341]. SCA reduces by several orders of magnitude the computational cost of micromechanical and concurrent multiscale simulations, while retaining the microstructure information. This remarkable increase in efficiency is achieved with a data-driven clustering method. Computationally expensive operations are performed in the so-called offline stage, where degrees of freedom (DOFs) are agglomerated into clusters and the interaction tensor of these clusters is computed. In the online or predictive stage, the Lippmann-Schwinger integral equation is solved cluster-wise using a self-consistent scheme to ensure solution accuracy and avoid path dependence. To construct a concurrent multiscale model, this scheme is applied at each material point in a macroscale structure, replacing a conventional constitutive model with the average response computed from the microscale model using just the SCA online stage. A regularized damage theory is incorporated in the microscale that avoids the mesh and RVE size dependence that commonly plagues microscale damage calculations. The SCA method is illustrated with two cases: a carbon fiber reinforced polymer (CFRP) structure with the concurrent multiscale model and an application to fatigue prediction for additively manufactured metals. For the CFRP problem, a speedup estimated at about 43,000 is achieved by using the SCA method, as opposed to FE2, enabling the solution of an otherwise computationally intractable problem. The second example uses a crystal plasticity constitutive law and computes the fatigue potency of extrinsic microscale features such as voids, showing that local stress and strain are captured sufficiently well by SCA. This model has been incorporated in a process-structure-properties prediction framework for process design in additive manufacturing.
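The offline stage's data-driven clustering can be sketched in a few lines. Below, random arrays stand in for the strain concentration tensors that would come from linear-elastic precomputations on an RVE, so this is an illustrative assumption rather than the published SCA code; only the pattern (k-means agglomeration of voxel DOFs into clusters) matches the description above.

```python
import numpy as np
from sklearn.cluster import KMeans

n_voxels, k = 20 ** 3, 16
# One row per voxel: flattened 6x6 elastic strain concentration tensor
# (in SCA these come from a handful of linear-elastic RVE solves).
A = np.random.rand(n_voxels, 36)

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(A)
# Each cluster becomes a single unknown in the online cluster-wise
# Lippmann-Schwinger solve, instead of one unknown per voxel.
print("voxels per cluster:", np.bincount(labels))
```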
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi
This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analysis, and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process. The objective of this study is to verify the generalized body-modes approach against high-fidelity FSI simulations in its ability to accurately predict structural deflections and stress loads in a WEC. Two verification cases are considered: a free-floating barge and a fixed-bottom column. Details for both the generalized body-modes models and FSI models are first provided. Results for each of the models are then compared and discussed. Finally, based on the verification results obtained, future plans for incorporating the generalized body-modes method into the WEC simulation tool, WEC-Sim, and the overall WEC design process are discussed.
Clarke, M G; Kennedy, K P; MacDonagh, R P
2009-01-01
To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers, incorporating statistical spreadsheet and database-access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE and 5-, 10-, and 15-year survival probability, in addition to generic UK population LE. Nineteen medical conditions that impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
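A minimal sketch of the underlying actuarial calculation follows: curtate life expectancy accumulated from annual mortality rates q_x, with a proportional hazard multiplier standing in for a comorbidity rating factor. The Gompertz-like mortality curve and the multiplier are invented for illustration, not values from the program's data sources.

```python
def life_expectancy(age, q, hazard_multiplier=1.0, max_age=110):
    """Curtate life expectancy: sum of cumulative survival probabilities."""
    survival, le = 1.0, 0.0
    for x in range(age, max_age):
        qx = min(1.0, q(x) * hazard_multiplier)   # rated annual mortality
        survival *= 1.0 - qx
        le += survival
    return le

# Toy Gompertz-like mortality curve standing in for a life table.
q = lambda x: min(1.0, 1e-4 * 1.09 ** x)
print(f"LE at 65, no comorbidity: {life_expectancy(65, q):.1f} years")
print(f"LE at 65, rated 2x:       {life_expectancy(65, q, 2.0):.1f} years")
```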
Stochastic and Deterministic Approaches to Gas-grain Modeling of Interstellar Sources
NASA Astrophysics Data System (ADS)
Vasyunin, Anton; Herbst, Eric; Caselli, Paola
During the last decade, our understanding of the chemistry on surfaces of interstellar grains has been significantly enhanced. Extensive laboratory studies have revealed complex structure and dynamics in interstellar ice analogues, thus making our knowledge much more detailed. In addition, the first qualitative investigations of new processes were made, such as non-thermal chemical desorption of species from dust grains into the gas. Not surprisingly, the rapid growth of knowledge about the physics and chemistry of interstellar ices led to the development of a new generation of astrochemical models. The models are typically characterized by more detailed treatments of the ice physics and chemistry than previously. The numerical approaches utilized vary greatly, from microscopic models, in which every single molecule is traced, to "mean field" macroscopic models, which simulate the evolution of averaged characteristics of interstellar ices, such as overall bulk composition. While microscopic models based on a stochastic Monte Carlo approach are potentially able to simulate the evolution of interstellar ices while accounting for the most subtle effects found in the laboratory, their use is often impractical due to limited knowledge about star-forming regions and huge computational demands. On the other hand, deterministic macroscopic models that often utilize kinetic rate equations are computationally efficient but experience difficulties in incorporating such potentially important effects as ice segregation or the discreteness of surface chemical reactions. In my talk, I will review the state of the art in the development of gas-grain astrochemical models. I will discuss how to incorporate key features of ice chemistry and dynamics in gas-grain astrochemical models, and how the incorporation of recent laboratory findings into gas-grain models helps to better match observations.
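As a concrete example of the deterministic rate-equation approach, here is a two-reservoir toy (one gas-phase species, its ice counterpart, and one surface product) integrated with SciPy; the species and rate coefficients are arbitrary illustrative values, not a real astrochemical network.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_ads = 1e-5    # adsorption onto grain surfaces [1/s]
k_des = 1e-7    # thermal + non-thermal desorption back to the gas [1/s]
k_sur = 1e-6    # conversion on the surface (e.g. hydrogenation) [1/s]

def rhs(t, y):
    n_gas, n_ice, n_prod = y
    return [-k_ads * n_gas + k_des * n_ice,
            k_ads * n_gas - (k_des + k_sur) * n_ice,
            k_sur * n_ice]

sol = solve_ivp(rhs, (0.0, 3e7), [1.0, 0.0, 0.0], method="LSODA")
print("final gas/ice/product fractions:", sol.y[:, -1].round(4))
```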
O'Donnell, Michael
2015-01-01
State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher spatial resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computations) indicated an approximate 96.6% decrease in computing time. With a single, multicore compute node (bottom result), the computing time indicated an 81.8% decrease relative to using serial computations. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
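The embarrassingly parallel structure is easy to reproduce with the Python standard library; the three-state annual transition matrix below is invented for illustration and is not a SyncroSim model, but each Monte Carlo replicate runs independently on its own core, exactly the workload pattern discussed above.

```python
import numpy as np
from multiprocessing import Pool

P = np.array([[0.90, 0.08, 0.02],      # toy annual transition matrix
              [0.05, 0.85, 0.10],      # rows: from-state, cols: to-state
              [0.02, 0.08, 0.90]])

def run_replicate(seed, cells=10_000, years=50):
    """One Monte Carlo replicate of a cellular state-and-transition model."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 3, size=cells)
    for _ in range(years):
        u = rng.random(cells)
        cum = P[state].cumsum(axis=1)           # per-cell CDF over to-states
        state = (u[:, None] > cum).sum(axis=1)  # inverse-CDF sampling
    return np.bincount(state, minlength=3) / cells

if __name__ == "__main__":
    with Pool() as pool:                        # one replicate per core
        results = pool.map(run_replicate, range(32))
    print("mean final state distribution:", np.mean(results, axis=0))
```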
A fast analytical undulator model for realistic high-energy FEL simulations
NASA Astrophysics Data System (ADS)
Tatchyn, R.; Cremer, T.
1997-02-01
A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.
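For concreteness, the kind of simplified on-axis representation such codes use can be written down directly; the sinusoidal field with a random pole-error term below is a generic textbook form with illustrative numbers, not the analytical model proposed in this paper.

```python
import numpy as np

B0, lam_u, n_per = 0.8, 0.03, 100    # peak field [T], period [m], periods
z = np.linspace(0.0, n_per * lam_u, 20_000)

ideal = B0 * np.cos(2.0 * np.pi * z / lam_u)       # ideal planar undulator
rng = np.random.default_rng(1)
errors = 0.005 * B0 * rng.standard_normal(z.size)  # ~0.5% rms field error
B = ideal + errors

K = 0.0934 * B0 * (lam_u * 1e3)      # deflection parameter, lam_u in mm
print(f"K = {K:.2f}, rms error = {errors.std() / B0:.4f} of B0")
```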
NASA Astrophysics Data System (ADS)
Fujitani, Y.; Sumino, Y.
2018-04-01
A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute contributions, missing in previous studies, for probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at the leading order, which incorporate the one-loop corrections between zero external momenta and their physical values. The discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.
Identification of Boolean Network Models From Time Series Data Incorporating Prior Knowledge.
Leifeld, Thomas; Zhang, Zhihua; Zhang, Ping
2018-01-01
Motivation: Mathematical models take an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, the existing identification approaches can only deal with a subset of possible types of prior knowledge. Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal the system matrix of the Boolean model in a computationally efficient way, such that its dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small-size time series data and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of oxidative stress response. Conclusions: The combination of an efficient reformulation of the identification problem with the possibility to incorporate various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
Semi-Empirical Modeling of SLD Physics
NASA Technical Reports Server (NTRS)
Wright, William B.; Potapczuk, Mark G.
2004-01-01
The effects of supercooled large droplets (SLD) in icing have been an area of much interest in recent years. As part of this effort, the assumptions used for ice accretion software have been reviewed. A literature search was performed to determine advances from other areas of research that could be readily incorporated. Experimental data in the SLD regime was also analyzed. A semi-empirical computational model is presented which incorporates first order physical effects of large droplet phenomena into icing software. This model has been added to the LEWICE software. Comparisons are then made to SLD experimental data that has been collected to date. Results will be presented for the comparison of water collection efficiency, ice shape and ice mass.
Numerical study of combustion processes in afterburners
NASA Technical Reports Server (NTRS)
Zhou, Xiaoqing; Zhang, Xiaochun
1986-01-01
Mathematical models and numerical methods are presented for computer modeling of aeroengine afterburners. A computer code GEMCHIP is described briefly. The algorithms SIMPLER, for gas flow predictions, and DROPLET, for droplet flow calculations, are incorporated in this code. The block correction technique is adopted to facilitate convergence. The method of handling irregular shapes of combustors and flameholders is described. The predicted results for a low-bypass-ratio turbofan afterburner in the cases of gaseous combustion and multiphase spray combustion are provided and analyzed, and engineering guides for afterburner optimization are presented.
NASA Astrophysics Data System (ADS)
Markina, A.; Ivanov, V.; Komarov, P.; Khokhlov, A.; Tung, S.-H.
2016-11-01
We propose a coarse-grained model for studying the effects of adding bile salt to lecithin organosols by means of computer simulation. This model allows us to reveal the mechanisms behind the experimentally observed increase in viscosity upon increasing the bile salt concentration. We show that increasing the bile salt to lecithin molar ratio induces the growth of elongated micelles of ellipsoidal and cylindrical shape due to the incorporation of disklike bile salt molecules. These wormlike micelles can entangle into a transient network displaying perceptible viscoelastic properties.
Frictionless contact of aircraft tires
NASA Technical Reports Server (NTRS)
Kim, Kyun O.; Tanner, John A.; Noor, Ahmed K.
1989-01-01
A computational procedure was developed for the solution of frictionless contact problems of aircraft tires, using a two-dimensional laminated anisotropic shell theory that incorporates the effects of variations in material and geometric parameters, transverse shear deformation, and geometric nonlinearities; the procedure was applied to model the nose-gear tire of the Space Shuttle. Numerical results are presented for the case when the nose-gear tire is subjected to inflation pressure and pressed against a rigid pavement. The results are compared with experimental results obtained at NASA Langley, demonstrating the high accuracy of the model and the effectiveness of the computational procedure.
Computationally modeling interpersonal trust.
Lee, Jin Joo; Knox, W Bradley; Wormwood, Jolie B; Breazeal, Cynthia; Desteno, David
2013-01-01
We present a computational model capable of predicting, above human accuracy, the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built without this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
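To show the flavor of the HMM comparison, here is a scaled forward-algorithm sketch that scores a sequence of coded nonverbal cues under two competing models; all matrices and the cue coding are invented placeholders, not the learned models from the study.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of an observation sequence under an HMM (scaled)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()          # rescale to avoid underflow
    return log_p

# Two hidden states x four cue symbols (e.g. lean-back, face-touch, ...).
pi = np.array([0.5, 0.5])
B = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.1, 0.2, 0.3, 0.4]])
A_hi = np.array([[0.8, 0.2], [0.3, 0.7]])   # "high trust" dynamics
A_lo = np.array([[0.5, 0.5], [0.5, 0.5]])   # "low trust" dynamics

cues = [0, 0, 1, 3, 3, 2]
scores = {name: log_forward(cues, pi, A, B)
          for name, A in (("high-trust", A_hi), ("low-trust", A_lo))}
print(max(scores, key=scores.get), scores)
```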
Ray, Sarah; Valdovinos, Katie
Pharmacy students should be exposed to and offered opportunities to practice the skill of incorporating a computer into a patient interview in the didactic setting. Faculty sought to improve retention of student ability to incorporate computers into their patient-pharmacist communication. Students were required to utilize a computer to document clinical information gathered during a simulated patient encounter (SPE). Students utilized electronic worksheets and were evaluated by instructors on their ability to effectively incorporate a computer into a SPE using a rubric. Students received specific instruction on effective computer use during patient encounters. Students were then re-evaluated by an instructor during subsequent SPEs of increasing complexity using standardized rubrics blinded from the students. Pre-instruction, 45% of students effectively incorporated a computer into a SPE. After receiving instruction, 67% of students were effective in their use of a computer during a SPE of performing a pharmaceutical care assessment for a patient with chronic obstructive pulmonary disease (COPD) (p < 0.05 compared to pre-instruction), and 58% of students were effective in their use of a computer during a SPE of retrieving a medication list and social history from a simulated alcohol-impaired patient (p = 0.087 compared to pre-instruction). Instruction can improve pharmacy students' ability to incorporate a computer into SPEs, a critical skill in building and maintaining rapport with patients and improving efficiency of patient visits. Complex encounters may affect students' ability to utilize a computer appropriately. Students may benefit from repeated practice with this skill, especially with SPEs of increasing complexity. Copyright © 2016 Elsevier Inc. All rights reserved.
Zhao, Lei; Gossmann, Toni I; Waxman, David
2016-03-21
The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus, apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne ≪ N. For complex problems, it may be hard or impossible to numerically analyse the most commonly used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different from the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two-allele problem, and since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have achieved good accuracy in all cases considered. In summary, the present work extends the realism and tractability of an important model of evolutionary biology and population genetics. Copyright © 2016 Elsevier Ltd. All rights reserved.
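A minimal sketch of the matrix construction follows, for a neutral biallelic locus with the matrix dimension set by the effective size; Ne, the iteration count, and neutrality are illustrative choices, not the paper's general formulation (which also treats selection and three segregating alleles).

```python
import numpy as np
from scipy.stats import binom

Ne = 50
n = 2 * Ne                          # allele copies in the model
freqs = np.arange(n + 1) / n        # current allele frequency per state
# T[i, j] = P(j copies next generation | i copies now), binomial sampling.
T = binom.pmf(np.arange(n + 1)[None, :], n, freqs[:, None])

# Fixation probability from a single copy, by iterating the chain
# (states 0 and n are absorbing).
p = np.zeros(n + 1)
p[1] = 1.0
for _ in range(20_000):
    p = p @ T
print(f"P(fixation) = {p[n]:.4f}   (neutral theory: 1/(2Ne) = {1 / n:.4f})")
```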
Modeling driver behavior in a cognitive architecture.
Salvucci, Dario D
2006-01-01
This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture, a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.
Knowledge-Based Environmental Context Modeling
NASA Astrophysics Data System (ADS)
Pukite, P. R.; Challou, D. J.
2017-12-01
As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient use of our renewable natural resources. [1] C2M2L (Component, Context, and Manufacturing Model Library) Final Report, https://doi.org/10.13140/RG.2.1.4956.3604
A math model for high velocity sensoring with a focal plane shuttered camera.
NASA Technical Reports Server (NTRS)
Morgan, P.
1971-01-01
A new mathematical model is presented which describes the image produced by a focal plane shutter-equipped camera. The model is based upon the well-known collinearity condition equations and incorporates both the translational and rotational motion of the camera during the exposure interval. The first differentials of the model with respect to the exposure interval, delta t, yield the general matrix expressions for image velocities, which may be simplified to known cases. The exposure interval, delta t, may be replaced under certain circumstances with a function incorporating blind velocity and image position if desired. The model is tested using simulated Lunar Orbiter data and found to be computationally stable and to provide excellent results, provided that some external information is available on the velocity parameters.
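For reference, a hedged sketch of the static collinearity condition that the model generalizes is given below in standard photogrammetric notation; the notation is assumed rather than taken from the paper, and in the shutter model the exposure station and rotation matrix become functions of the exposure time of each image line.

```latex
% Collinearity condition for an ideal frame camera: (x_0, y_0) and f are
% the principal point and focal length, (X_c, Y_c, Z_c) the exposure
% station, and r_ij the elements of the orientation matrix. In a
% focal-plane-shutter model these exterior orientation parameters are
% evaluated at the time t at which each image line is exposed.
x - x_0 = -f \, \frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
                     {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)},
\qquad
y - y_0 = -f \, \frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
                     {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
```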
OpenWorm: an open-science approach to modeling Caenorhabditis elegans.
Szigeti, Balázs; Gleeson, Padraig; Vella, Michael; Khayrulin, Sergey; Palyanov, Andrey; Hokanson, Jim; Currie, Michael; Cantarelli, Matteo; Idili, Giovanni; Larson, Stephen
2014-01-01
OpenWorm is an international collaboration with the aim of understanding how the behavior of Caenorhabditis elegans (C. elegans) emerges from its underlying physiological processes. The project has developed a modular simulation engine to create computational models of the worm. The modularity of the engine makes it possible to easily modify the model, incorporate new experimental data and test hypotheses. The modeling framework incorporates both biophysical neuronal simulations and a novel fluid-dynamics-based soft-tissue simulation for physical environment-body interactions. The project's open-science approach is aimed at overcoming the difficulties of integrative modeling within a traditional academic environment. In this article the rationale is presented for creating the OpenWorm collaboration, the tools and resources developed thus far are outlined and the unique challenges associated with the project are discussed.
The importance of structural anisotropy in computational models of traumatic brain injury.
Carlsen, Rika W; Daphalapurkar, Nitin P
2015-01-01
Understanding the mechanisms of injury might prove useful in assisting the development of methods for the management and mitigation of traumatic brain injury (TBI). Computational head models can provide valuable insight into the multi-length-scale complexity associated with the primary nature of diffuse axonal injury. It involves understanding how the trauma to the head (at the centimeter length scale) translates to the white-matter tissue (at the millimeter length scale), and even further down to the axonal-length scale, where physical injury to axons (e.g., axon separation) may occur. However, to accurately represent the development of TBI, the biofidelity of these computational models is of utmost importance. There has been a focused effort to improve the biofidelity of computational models by including more sophisticated material definitions and implementing physiologically relevant measures of injury. This paper summarizes recent computational studies that have incorporated structural anisotropy in both the material definition of the white matter and the injury criterion as a means to improve the predictive capabilities of computational models for TBI. We discuss the role of structural anisotropy on both the mechanical response of the brain tissue and on the development of injury. We also outline future directions in the computational modeling of TBI.
Multi-dimensional modelling of gas turbine combustion using a flame sheet model in KIVA II
NASA Technical Reports Server (NTRS)
Cheng, W. K.; Lai, M.-C.; Chue, T.-H.
1991-01-01
A flame sheet model for heat release is incorporated into a multi-dimensional fluid mechanical simulation for gas turbine application. The model assumes that the chemical reaction takes place in thin sheets compared to the length scale of mixing, which is valid for the primary combustion zone in a gas turbine combustor. In this paper, the details of the model are described and computational results are discussed.
Manufacturing Magic and Computational Creativity
Williams, Howard; McOwan, Peter W.
2016-01-01
This paper describes techniques in computational creativity, blending mathematical modeling and psychological insight, to generate new magic tricks. The details of an explicit computational framework capable of creating new magic tricks are summarized, and evaluated against a range of contemporary theories about what constitutes a creative system. To allow further development of the proposed system we situate this approach to the generation of magic in the wider context of other areas of application in computational creativity in performance arts. We show how approaches in these domains could be incorporated to enhance future magic generation systems, and critically review possible future applications of such magic generating computers. PMID:27375533
Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell
NASA Astrophysics Data System (ADS)
Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.
2017-04-01
This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Overall, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.
Howell, Bryan; McIntyre, Cameron C
2016-06-01
Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
NASA Astrophysics Data System (ADS)
Howell, Bryan; McIntyre, Cameron C.
2016-06-01
Objective. Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. Approach. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Main results. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. Significance. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
The American Dream: A Crossover of Community Imagery.
ERIC Educational Resources Information Center
Metelka, Charles J.
Even as conceptual models, distinctions between "rural" and "urban" have become blurred--by changes in transportation, telecommunications, computer technology, business expertise, formal education, health care, and citizenry expectations/knowledge. Two typologies describing future trends and incorporating changes in rural/urban…
Making Sense of the Data from Complex Assessments.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Steinberg, Linda S.; Breyer, F. Jay; Almond, Russell G.; Johnson, Lynn
2002-01-01
Presents a design framework that incorporates integrated structures for modeling knowledge and skills, designing tasks, and extracting and synthesizing evidence. Illustrates these ideas in the context of a project that assesses problem solving in dental hygiene through computer-based simulations. (SLD)
Space Environments and Effects: Trapped Proton Model
NASA Technical Reports Server (NTRS)
Huston, S. L.; Kauffman, W. (Technical Monitor)
2002-01-01
An improved model of the Earth's trapped proton environment has been developed. This model, designated Trapped Proton Model version 1 (TPM-1), determines the omnidirectional flux of protons with energy between 1 and 100 MeV throughout near-Earth space. The model also incorporates a true solar cycle dependence. The model consists of several data files and computer software to read them. There are three versions of the model: a FORTRAN-callable library, a stand-alone model, and a Web-based model.
Aeroelastic Analysis for Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, W.
1982-01-01
Aeroelastic-analysis computer program incorporates an analytical model of aeroelastic behavior of wide range of rotorcraft. Such an analytical model is desirable for both pretest predictions and posttest correlations. Program can be applied in investigations of isolated rotor aeroelasticity and helicopter-flight dynamics and could be employed as basis for more-extensive investigations of aeroelastic behavior, such as automatic control system design.
Faculty Flow in a Medical School: A Policy Simulator. AIR Forum 1979 Paper.
ERIC Educational Resources Information Center
Kutina, Kenneth L.; Bruss, Edward A.
A computer-based simulation model is described that can be used in an interactive mode to analyze the effects of alternative hiring, promotion, tenure granting, retirement, and salary policies on faculty size, distribution, and aggregate salary expense. The model was designed to be adequately flexible and comprehensive to incorporate the array of…
NASA Technical Reports Server (NTRS)
1976-01-01
Assumptions made and techniques used in modeling the power network to the 480 volt level are discussed. Basic computational techniques used in the short circuit program are described along with a flow diagram of the program and operational procedures. Procedures for incorporating network changes are included in this user's manual.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
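To make the measurement half of the joint model concrete, the DINA response function can be written in a few lines; the Q-matrix row, slip, and guess values are illustrative, and the response-time component of the joint model is omitted.

```python
import numpy as np

def p_correct(alpha, q, slip, guess):
    """DINA: P(correct) = guess + (1 - slip - guess) * eta."""
    eta = float(np.all(alpha >= q))   # 1 if all required attributes mastered
    return guess + (1.0 - slip - guess) * eta

q_row = np.array([1, 0, 1])           # item requires attributes 1 and 3
print(p_correct(np.array([1, 1, 1]), q_row, slip=0.1, guess=0.2))  # 0.9
print(p_correct(np.array([1, 1, 0]), q_row, slip=0.1, guess=0.2))  # 0.2
```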
Kao, Jonathan C; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V
2017-04-01
Communication neural prostheses aim to restore efficient communication to people with motor neurological injury or disease by decoding neural activity into control signals. These control signals are both analog (e.g., the velocity of a computer mouse) and discrete (e.g., clicking an icon with a computer mouse) in nature. Effective, high-performing, and intuitive-to-use communication prostheses should be capable of decoding both analog and discrete state variables seamlessly. However, to date, the highest-performing autonomous communication prostheses rely on precise analog decoding and typically do not incorporate high-performance discrete decoding. In this report, we incorporated a hidden Markov model (HMM) into an intracortical communication prosthesis to enable accurate and fast discrete state decoding in parallel with analog decoding. In closed-loop experiments with nonhuman primates implanted with multielectrode arrays, we demonstrate that incorporating an HMM into a neural prosthesis can increase the state-of-the-art achieved bitrate by 13.9% and 4.2% in two monkeys. We found that the transition model of the HMM is critical to achieving this performance increase. Further, we found that using an HMM resulted in the highest achieved peak performance we have ever observed for these monkeys, achieving peak bitrates of 6.5, 5.7, and 4.7 bps in Monkeys J, R, and L, respectively. Finally, we found that this neural prosthesis was robustly controllable for the duration of entire experimental sessions. These results demonstrate that high-performance discrete decoding can be beneficially combined with analog decoding to achieve new state-of-the-art levels of performance.
Generalized Advanced Propeller Analysis System (GAPAS). Volume 2: Computer program user manual
NASA Technical Reports Server (NTRS)
Glatt, L.; Crawford, D. R.; Kosmatka, J. B.; Swigart, R. J.; Wong, E. W.
1986-01-01
The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described as well as users requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.
Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization
2012-01-01
Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
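The selection rule described, five picks on confident activity predictions plus five on structural novelty, amounts to a ranked-batch loop; a schematic sketch in which `preds` and `novelty` stand for hypothetical model scores:

```python
import numpy as np

def select_batch(preds, novelty, picked, n_act=5, n_nov=5):
    """One round's picks: the top five unpicked compounds by predicted
    activity, then the top five of the remainder by 3-D structural
    novelty, mirroring the dual selection rule described above."""
    avail = [i for i in range(len(preds)) if i not in picked]
    by_act = sorted(avail, key=lambda i: -preds[i])[:n_act]
    rest = [i for i in avail if i not in set(by_act)]
    by_nov = sorted(rest, key=lambda i: -novelty[i])[:n_nov]
    return by_act + by_nov

rng = np.random.default_rng(0)
print(select_batch(rng.random(50), rng.random(50), picked=set()))
```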
Thermal Aspects of Lithium Ion Cells
NASA Technical Reports Server (NTRS)
Frank, H.; Shakkottai, P.; Bugga, R.; Smart, M.; Huang, C. K.; Timmerman, P.; Surampudi, S.
2000-01-01
This viewgraph presentation outlines the development of a thermal model of Li-ion cells in terms of heat generation, thermal mass, and thermal resistance, intended for incorporation into a battery model. The approach was to estimate heat generation with a semi-theoretical model and then to check its accuracy with efficiency measurements. Another objective was to compute the thermal mass from component weights and specific heats, and the thermal resistance from component dimensions and conductivities. Two lithium cell types are compared: the cylindrical lithium battery and the prismatic lithium cell. The presentation reviews the methodology for estimating the heat generation rate. Graphs of the open-circuit curves of the cells and the heat evolution during discharge are given.
Arenas, Miguel
2015-04-01
NGS technologies enable fast and inexpensive generation of genomic data. Nevertheless, ancestral genome inference is not straightforward, owing to complex evolutionary processes acting on this material such as inversions, translocations, and other genome rearrangements that, in addition to their implicit complexity, can co-occur and confound ancestral inferences. Recently, models of genome evolution that accommodate such complex genomic events are emerging. This letter explores these novel evolutionary models and proposes their incorporation into robust statistical approaches based on computer simulations, such as approximate Bayesian computation, that may produce a more realistic evolutionary analysis of genomic data. Advantages and pitfalls in using these analytical methods are discussed. Potential applications of these ancestral genomic inferences are also pointed out.
NGScloud: RNA-seq analysis of non-model species using cloud computing.
Mora-Márquez, Fernando; Vázquez-Poletti, José Luis; López de Heredia, Unai
2018-05-03
RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatic system developed to analyze RNA-seq data using Amazon's cloud computing services, which permit access to ad hoc computing infrastructure scaled to the complexity of the experiment, so that costs and run times can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources and to control a workflow of RNA-seq analysis oriented to non-model species, incorporating the cluster concept, which allows parallel runs of common RNA-seq analysis programs in several virtual machines for faster analysis. NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and how-to-use instructions is available with the distribution. unai.lopezdeheredia@upm.es.
Modeling the Impact of Motivation, Personality, and Emotion on Social Behavior
NASA Astrophysics Data System (ADS)
Miller, Lynn C.; Read, Stephen J.; Zachary, Wayne; Rosoff, Andrew
Models seeking to predict human social behavior must contend with multiple sources of individual and group variability that underlie social behavior. One set of interrelated factors that strongly contribute to that variability - motivations, personality, and emotions - has been only minimally incorporated in previous computational models of social behavior. The Personality, Affect, Culture (PAC) framework is a theory-based computational model that addresses this gap. PAC is used to simulate social agents whose social behavior varies according to their personalities and emotions, which, in turn, vary according to their motivations and underlying motive control parameters. Examples involving disease spread and counter-insurgency operations show how PAC can be used to study behavioral variability in different social contexts.
Mathematical and Computational Modeling for Tumor Virotherapy with Mediated Immunity.
Timalsina, Asim; Tian, Jianjun Paul; Wang, Jin
2017-08-01
We propose a new mathematical modeling framework based on partial differential equations to study tumor virotherapy with mediated immunity. The model incorporates both innate and adaptive immune responses and represents the complex interaction among tumor cells, oncolytic viruses, and immune systems on a domain with a moving boundary. Using carefully designed computational methods, we conduct extensive numerical simulation to the model. The results allow us to examine tumor development under a wide range of settings and provide insight into several important aspects of the virotherapy, including the dependence of the efficacy on a few key parameters and the delay in the adaptive immunity. Our findings also suggest possible ways to improve the virotherapy for tumor treatment.
An EMTP system level model of the PMAD DC test bed
NASA Technical Reports Server (NTRS)
Dravid, Narayan V.; Kacpura, Thomas J.; Tam, Kwa-Sur
1991-01-01
A power management and distribution direct current (PMAD DC) test bed was set up at the NASA Lewis Research Center to investigate Space Station Freedom Electric Power Systems issues. Efficiency of test bed operation significantly improves with a computer simulation model of the test bed as an adjunct tool of investigation. Such a model is developed using the Electromagnetic Transients Program (EMTP) and is available to the test bed developers and experimenters. The computer model is assembled on a modular basis. Device models of different types can be incorporated into the system model with only a few lines of code. A library of the various model types is created for this purpose. Simulation results and corresponding test bed results are presented to demonstrate model validity.
NASA Astrophysics Data System (ADS)
Joyce, C. J.; Schwadron, N. A.; Townsend, L. W.; deWet, W. C.; Wilson, J. K.; Spence, H. E.; Tobiska, W. K.; Shelton-Mur, K.; Yarborough, A.; Harvey, J.; Herbst, A.; Koske-Phillips, A.; Molina, F.; Omondi, S.; Reid, C.; Reid, D.; Shultz, J.; Stephenson, B.; McDevitt, M.; Phillips, T.
2016-09-01
We provide an analysis of the galactic cosmic ray radiation environment of Earth's atmosphere using measurements from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) aboard the Lunar Reconnaissance Orbiter (LRO) together with the Badhwar-O'Neil model and dose lookup tables generated by the Earth-Moon-Mars Radiation Environment Module (EMMREM). This study demonstrates an updated atmospheric radiation model that uses new dose tables to improve the accuracy of the modeled dose rates. Additionally, a method for computing geomagnetic cutoffs is incorporated into the model in order to account for location-dependent effects of the magnetosphere. Newly available measurements of atmospheric dose rates from instruments aboard commercial aircraft and high-altitude balloons enable us to evaluate the accuracy of the model in computing atmospheric dose rates. When compared to the available observations, the model seems to be reasonably accurate in modeling atmospheric radiation levels, overestimating airline dose rates by an average of 20%, which falls within the uncertainty limit recommended by the International Commission on Radiation Units and Measurements (ICRU). Additionally, measurements made aboard high-altitude balloons during simultaneous launches from New Hampshire and California provide an additional comparison to the model. We also find that the newly incorporated geomagnetic cutoff method enables the model to represent radiation variability as a function of location with sufficient accuracy.
MCAID--A Generalized Text Driver.
ERIC Educational Resources Information Center
Ahmed, K.; Dickinson, C. J.
MCAID is a relatively machine-independent technique for writing computer-aided instructional material consisting of descriptive text, multiple choice questions, and the ability to call compiled subroutines to perform extensive calculations. It was specially developed to incorporate test-authoring around complex mathematical models to explore a…
MOVES (MOTOR VEHICLE EMISSION SIMULATOR) MODEL ...
A computer model, intended eventually to replace the MOBILE model and to incorporate the NONROAD model, that provides the ability to estimate criteria and toxic air pollutant emission factors and emission inventories specific to the areas and time periods of interest, at scales ranging from local to national. The project involves development of a new emission factor and inventory model for mobile source emissions. The model will be used by air pollution modelers within EPA and at the State and local levels.
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Hamilton, H. H., II
1983-01-01
A computer code HALIS, designed to compute the three dimensional flow about shuttle like configurations at angles of attack greater than 25 deg, is described. Results from HALIS are compared where possible with an existing flow field code; such comparisons show excellent agreement. Also, HALIS results are compared with experimental pressure distributions on shuttle models over a wide range of angle of attack. These comparisons are excellent. It is demonstrated that the HALIS code can incorporate equilibrium air chemistry in flow field computations.
NASA Astrophysics Data System (ADS)
Orr, C. H.; Mcfadden, R. R.; Manduca, C. A.; Kempler, L. A.
2016-12-01
Teaching with data, simulations, and models in the geosciences can increase many facets of student success in the classroom and in the workforce. Teaching undergraduates about programming and improving students' quantitative and computational skills expands their perception of Geoscience beyond field-based studies. Processing data and developing quantitative models are critically important for Geoscience students. Students need to be able to perform calculations, analyze data, create numerical models and visualizations, and more deeply understand complex systems—all essential aspects of modern science. These skills require students to have comfort and skill with languages and tools such as MATLAB. To achieve comfort and skill, computational and quantitative thinking must build over a 4-year degree program across courses and disciplines. However, in courses focused on Geoscience content it can be challenging to get students comfortable with using computational methods to answer Geoscience questions. To help bridge this gap, we have partnered with MathWorks to develop two workshops focused on collecting and developing strategies and resources to help faculty teach students to incorporate data, simulations, and models into the curriculum at the course and program levels. We brought together faculty members from the sciences, including Geoscience and allied fields, who teach computation and quantitative thinking skills using MATLAB to build a resource collection for teaching. These materials and the outcomes of the workshops are freely available on our website. The workshop outcomes include a collection of teaching activities, essays, and course descriptions that can help faculty incorporate computational skills at the course or program level. The teaching activities include in-class assignments, problem sets, labs, projects, and toolboxes. These activities range from programming assignments to creating and using models. The outcomes also include workshop syntheses that highlight best practices, a set of webpages to support teaching with software such as MATLAB, and an interest group actively discussing aspects of these issues in Geoscience and allied fields. Learn more and view the resources at http://serc.carleton.edu/matlab_computation2016/index.html
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., Computational, and Systems Biology [External Review Draft]'' (EPA/600/R-13/214A). EPA is also announcing that... Advances in Molecular, Computational, and Systems Biology [External Review Draft]'' is available primarily...
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1989-01-01
The goal is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. Toward this end, a computational model of the separation bubble was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Thus far, the focus of the research was limited to the development of a model which can accurately predict situations in which the interaction between the bubble and the inviscid velocity distribution is weak, the so-called short bubble. A summary of the research performed in the past nine months is presented. The bubble model in its present form is then described. Lastly, the performance of this model in predicting bubble characteristics is shown for a few cases.
Incorporating Linguistic Knowledge for Learning Distributed Word Representations
Wang, Yan; Liu, Zhiyuan; Sun, Maosong
2015-01-01
Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, do not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate these prior knowledge for learning distributed word representations. Experiment results demonstrate that our estimated word representation achieves better performance in task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining. PMID:25874581
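The abstract does not spell out the KRWR objective itself; link-based regularization of word vectors generically adds a penalty of the following shape to the corpus-based training loss (the weight lam and the link list here are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def link_penalty(emb, links, lam=0.1):
    """Knowledge regularizer (generic sketch): squared distance between
    embedding vectors of word pairs linked in a knowledge base, added
    to the usual corpus-based training loss."""
    return lam * sum(float(np.sum((emb[i] - emb[j]) ** 2)) for i, j in links)

emb = np.random.randn(1000, 100)   # 1000-word vocabulary, 100-d vectors
links = [(3, 17), (42, 7)]         # e.g. synonym pairs from a lexicon
print(link_penalty(emb, links))
```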
NASA Technical Reports Server (NTRS)
Cushman, Paula P.
1993-01-01
Research will be undertaken in this contract in the area of Modeling Resource and Facilities Enhancement to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident work stations in ESAD; and to provide a centralized location for documentation, archival and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies to include implementation of the comparison of cloud and rain systems and convective-scale processes between the model simulations and what was observed; and to incorporate the findings of these and related research findings in at least two refereed journal articles.
Bockrath, Richard; Person, Stanley; Funk, Fred
1968-01-01
Transmutation of the radioisotope tritium occurs with the production of a low-energy electron, having a range in biological material similar to the dimensions of a bacterium. A computer program was written to determine the radiation dose distributions which may be expected within a bacterium as a result of tritium decay, when the isotope has been incorporated into specific regions of the bacterium. A nonspherical model bacterium was used, represented by a cylinder with hemispherical ends. The energy distributions resulting from a wide variety of simulated labeled regions were determined; the results suggested that the nuclear region of a bacterium receives, on average, significantly different per-decay doses if the labeled regions were those conceivably produced by the incorporation of thymidine-3H, uracil-3H, or 3H-amino acids. Energy distributions in the model bacterium were also calculated for the decay of incorporated carbon-14, sulfur-35, and phosphorus-32. PMID:5678319
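A toy Monte Carlo version of the geometry described, uniform decay sites in a cylinder with hemispherical caps scored against a central nuclear region, is sketched below; the original program tracked energy deposition along electron tracks, which this crude within-range tally does not attempt, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spherocylinder(n, R, L):
    """Uniform points inside a cylinder of radius R and length L
    capped by two hemispheres (the model bacterium shape)."""
    v_cyl = np.pi * R**2 * L
    v_caps = (4.0 / 3.0) * np.pi * R**3
    pts = np.empty((n, 3))
    for k in range(n):
        if rng.random() < v_cyl / (v_cyl + v_caps):
            r, th = R * np.sqrt(rng.random()), rng.uniform(0, 2 * np.pi)
            pts[k] = (r * np.cos(th), r * np.sin(th), rng.uniform(-L / 2, L / 2))
        else:
            p = rng.uniform(-R, R, 3)        # rejection sample in a sphere,
            while p @ p > R**2:              # then shift onto one of the caps
                p = rng.uniform(-R, R, 3)
            p[2] += L / 2 if p[2] >= 0 else -L / 2
            pts[k] = p
    return pts

def frac_decays_near_nucleus(n, R, L, r_nuc, e_range):
    """Fraction of decays within electron range of a central 'nucleus'."""
    d = np.linalg.norm(sample_spherocylinder(n, R, L), axis=1)
    return float(np.mean(d <= r_nuc + e_range))

# dimensions in micrometers (illustrative, not the paper's values)
print(frac_decays_near_nucleus(20000, R=0.5, L=2.0, r_nuc=0.3, e_range=0.5))
```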
Temperature dependent nonlinear metal matrix laminae behavior
NASA Technical Reports Server (NTRS)
Barrett, D. J.; Buesking, K. W.
1986-01-01
An analytical method is described for computing the nonlinear thermal and mechanical response of laminated plates. The material model focuses upon the behavior of metal matrix materials by relating the nonlinear composite response to plasticity effects in the matrix. The foundation of the analysis is the unidirectional material model, which is used to compute the instantaneous properties of the lamina based upon the properties of the fibers and matrix. The unidirectional model assumes that the fiber properties are constant with temperature and that the matrix can be modeled as a temperature-dependent, bilinear, kinematically hardening material. An incremental approach is used to compute average stresses in the fibers and matrix caused by arbitrary mechanical and thermal loads. The layer model is incorporated in an incremental laminated plate theory to compute the nonlinear response of laminated metal matrix composites of general orientation and stacking sequence. The report includes comparisons of the method with other analytical approaches and compares theoretical calculations with measured experimental material behavior. A section is included which describes the limitations of the material model.
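A one-dimensional sketch of the matrix constitutive update, a bilinear kinematically hardening material advanced by the standard elastic-predictor/plastic-corrector return mapping, is shown below; the temperature dependence of the moduli and yield stress that the report includes is omitted, and the numbers are illustrative:

```python
def bilinear_kinematic_update(deps, sig, alpha, E, Et, sig_y):
    """One strain increment deps for a bilinear kinematically
    hardening material: elastic predictor, then a plastic corrector
    that shifts the backstress alpha along the loading direction."""
    H = E * Et / (E - Et)              # plastic modulus from tangent modulus
    sig_tr = sig + E * deps            # elastic trial stress
    f = abs(sig_tr - alpha) - sig_y    # yield check against shifted surface
    if f <= 0.0:
        return sig_tr, alpha
    n = 1.0 if sig_tr > alpha else -1.0
    dgam = f / (E + H)                 # consistency condition
    return sig_tr - E * dgam * n, alpha + H * dgam * n

# cycle a strain history (MPa and dimensionless strain, illustrative)
sig, alpha = 0.0, 0.0
for deps in [0.002, 0.002, -0.004]:
    sig, alpha = bilinear_kinematic_update(deps, sig, alpha,
                                           E=70000.0, Et=7000.0, sig_y=250.0)
    print(sig, alpha)
```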
Formal Techniques for Synchronized Fault-Tolerant Systems
NASA Technical Reports Server (NTRS)
DiVito, Ben L.; Butler, Ricky W.
1992-01-01
We present the formal verification of synchronizing aspects of the Reliable Computing Platform (RCP), a fault-tolerant computing system for digital flight control applications. The RCP uses NMR-style redundancy to mask faults and internal majority voting to purge the effects of transient faults. The system design has been formally specified and verified using the EHDM verification system. Our formalization is based on an extended state machine model incorporating snapshots of local processors' clocks.
Comparison of liquid rocket engine base region heat flux computations using three turbulence models
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Griffith, Dwaine O., II; Prendergast, Maurice J.; Seaford, C. M.
1993-01-01
The flow in the base region of launch vehicles is characterized by flow separation, flow reversals, and reattachment. Computation of the convective heat flux in the base region and on the nozzle external surface of Space Shuttle Main Engine and Space Transportation Main Engine (STME) is an important part of defining base region thermal environments. Several turbulence models were incorporated in a CFD code and validated for flow and heat transfer computations in the separated and reattaching regions associated with subsonic and supersonic flows over backward facing steps. Heat flux computations in the base region of a single STME engine and a single S1C engine were performed using three different wall functions as well as a renormalization-group based k-epsilon model. With the very limited data available, the computed values are seen to be of the right order of magnitude. Based on the validation comparisons, it is concluded that all the turbulence models studied have predicted the reattachment location and the velocity profiles at various axial stations downstream of the step very well.
NASA Technical Reports Server (NTRS)
Richey, Edward, III
1995-01-01
This research aims to develop the methods and understanding needed to incorporate time- and loading-variable-dependent environmental effects on fatigue crack propagation (FCP) into computerized fatigue life prediction codes such as NASA FLAGRO (NASGRO). In particular, the effect of loading frequency on FCP rates in alpha + beta titanium alloys exposed to an aqueous chloride solution is investigated. The approach couples empirical modeling of environmental FCP with corrosion fatigue experiments. Three different computer models have been developed and incorporated in the DOS executable program UVAFAS. A multiple power law model is available and can fit a set of fatigue data to a multiple power law equation. A model has also been developed which implements the Wei and Landes linear superposition model, as well as an interpolative model which can be utilized to interpolate trends in fatigue behavior based on changes in loading characteristics (stress ratio, frequency, and hold times).
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; these show the output formats and typical plots comparing computer results to each set of input data.
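A compact modern equivalent of the described technique, per-variable powers up to degree P optionally augmented with linear cross products and fit by least squares, might look like the following NumPy sketch (the original programs are not reproduced here):

```python
import numpy as np
from itertools import combinations

def design_matrix(X, degree, cross_products=False):
    """Columns: intercept, each variable's powers 1..degree, and
    optionally the linear cross products x_i * x_j."""
    cols = [np.ones(len(X))]
    for j in range(X.shape[1]):
        for p in range(1, degree + 1):
            cols.append(X[:, j] ** p)
    if cross_products:
        for i, j in combinations(range(X.shape[1]), 2):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.random((50, 2))
y = 2.0 + 2.0 * X[:, 0] ** 2 - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
A = design_matrix(X, degree=2, cross_products=True)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
print(np.max(np.abs(resid / y)) * 100.0)  # maximum absolute percent error
```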
Spin wave Feynman diagram vertex computation package
NASA Astrophysics Data System (ADS)
Price, Alexander; Javernick, Philip; Datta, Trinanjan
Spin wave theory is a well-established theoretical technique that can correctly predict the physical behavior of ordered magnetic states. However, computing the effects of an interacting spin wave theory incorporating magnons involves a laborious by-hand derivation of Feynman diagram vertices. The process is tedious and time consuming. Hence, to improve productivity and to have another means of checking the analytical calculations, we have devised a Feynman diagram vertex computation package. In this talk, we will describe our research group's effort to implement a Mathematica-based symbolic Feynman diagram vertex computation package that computes spin wave vertices. Utilizing the non-commutative algebra package NCAlgebra as an add-on to Mathematica, symbolic expressions for the Feynman diagram vertices of a Heisenberg quantum antiferromagnet are obtained. Our existing code reproduces the well-known expressions for a nearest-neighbor square lattice Heisenberg model. We also discuss the case of a triangular lattice Heisenberg model, where noncollinear terms contribute to the vertex interactions.
ERIC Educational Resources Information Center
Rodriguez, Luis J.; Torres, M. Ines
2006-01-01
Previous works in English have revealed that disfluencies follow regular patterns and that incorporating them into the language model of a speech recognizer leads to lower perplexities and sometimes to a better performance. Although work on disfluency modeling has been applied outside the English community (e.g., in Japanese), as far as we know…
ERIC Educational Resources Information Center
Hati, Sanchita; Bhattacharyya, Sudeep
2016-01-01
A project-based biophysical chemistry laboratory course, which is offered to the biochemistry and molecular biology majors in their senior year, is described. In this course, the classroom study of the structure-function of biomolecules is integrated with the discovery-guided laboratory study of these molecules using computer modeling and…
NASA Astrophysics Data System (ADS)
Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang
2017-08-01
Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.
NASA Astrophysics Data System (ADS)
Son, Kwon Joong
2018-02-01
Hindering particle agglomeration and re-dispersion processes, gravitational sedimentation of suspended particles in magnetorheological (MR) fluids degrades the performance and controllability of MR fluids in response to a user-specified magnetic field. Thus, suspension stability is one of the principal factors to be considered in synthesizing MR fluids. However, only a few computational studies have been reported so far on the sedimentation characteristics of suspended particles under gravity. In this paper, the settling dynamics of paramagnetic particles suspended in MR fluids was investigated via discrete element method (DEM) simulations. This work focuses particularly on developing accurate fluid-particle and particle-particle interaction models which can account for the influence of stabilizing surfactants on MR fluid sedimentation. The effect of the stabilizing surfactants on interparticle interactions was incorporated into the derivation of a reliable contact-impact model for DEM computation. Also, the influence of the stabilizing additives on fluid-particle interactions was considered by incorporating Stokes drag, with shape and wall correction factors, into the DEM formulation. The results of simulations performed for model validation purposes showed good agreement with published sedimentation measurement data in terms of the initial sedimentation velocity and the final sedimentation ratio.
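At terminal velocity, the drag piece of such a fluid-particle model reduces to the classical Stokes settling speed scaled by the correction factors; a sketch with illustrative numbers (the specific correction-factor correlations used in the paper are not given in the abstract):

```python
def settling_velocity(a, rho_p, rho_f, mu, shape_corr=1.0, wall_corr=1.0):
    """Terminal Stokes settling speed of a sphere of radius a, with
    multiplicative shape and wall correction factors applied to the
    drag (the two corrections the DEM formulation incorporates)."""
    g = 9.81                # gravitational acceleration, m/s^2
    return 2.0 * (rho_p - rho_f) * g * a**2 / (9.0 * mu * shape_corr * wall_corr)

# a 1-micron iron particle in a carrier oil (illustrative values only)
print(settling_velocity(a=1e-6, rho_p=7860.0, rho_f=970.0, mu=0.1))
```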
Computational neurorehabilitation: modeling plasticity and learning to predict recovery.
Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas
2016-04-30
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
Comparative Petrographic Maturity of River and Beach Sand, and Origin of Quartz Arenites.
ERIC Educational Resources Information Center
Ferree, Rob A.; And Others
1988-01-01
Describes a deterministic computer model that incorporates: (1) initial framework composition; (2) abrasion factors for quartz, feldspar, and rock fragments; and (3) a fragmentation ratio for rock fragments to simulate the recycling of coastal sands by rivers and beaches. (TW)
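The abstract names the model's ingredients but not its equations; a toy recycling loop in the same spirit, with all rates hypothetical, could look like:

```python
def recycle_sand(q, f, r, abr_q=0.001, abr_f=0.02, abr_r=0.01,
                 frag_ratio=0.05, cycles=100):
    """Toy sand-recycling loop: each transport cycle abrades a fraction
    of each grain type (quartz least), and a fragmentation ratio splits
    rock fragments into finer grains, here credited to quartz as a
    simplifying assumption."""
    for _ in range(cycles):
        broken = r * frag_ratio
        q = q * (1 - abr_q) + broken
        f = f * (1 - abr_f)
        r = r * (1 - abr_r) - broken
        total = q + f + r
        q, f, r = q / total, f / total, r / total  # renormalize composition
    return q, f, r

# start from an immature framework: 50% quartz, 30% feldspar, 20% rock
print(recycle_sand(0.5, 0.3, 0.2))
```

With these placeholder rates the composition drifts toward nearly pure quartz, the quartz-arenite endpoint the title refers to.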
Combining computational models, semantic annotations and simulation experiments in a graph database
Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar
2015-01-01
Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models' structure, incorporates semantic annotations and simulation descriptions, and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves the access of computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
High performance TWT development for the microwave power module
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whaley, D.R.; Armstrong, C.M.; Groshart, G.
1996-12-31
Northrop Grumman's ongoing development of microwave power modules (MPM) provides microwave power at various power levels, frequencies, and bandwidths for a variety of applications. Present day requirements for the vacuum power booster traveling wave tubes of the microwave power module are becoming increasingly demanding, necessitating further enhancement of tube performance. The MPM development program at Northrop Grumman is designed specifically to meet this need by construction and test of a series of new tubes aimed at verifying computation and reaching high efficiency design goals. Tubes under test incorporate several different helix designs, as well as varying electron gun and magnetic confinement configurations. Current efforts also include further development of state-of-the-art TWT modeling and computational methods at Northrop Grumman, incorporating new, more accurate models into existing design tools and developing new tools to be used in all aspects of traveling wave tube design. The current status of the Northrop Grumman MPM TWT development program will be presented.
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Satellite data collected over Lake Ontario were processed to observed surface temperature values. This involved computing apparent radiance values, from averaged digital count values, for each point where surface temperatures were known. These radiance values were then converted using the LOWTRAN 5A atmospheric propagation model, which was modified by incorporating a spectral response function for the LANDSAT band 6 sensors. A downwelled radiance term derived from LOWTRAN was included to account for reflected sky radiance, and a blackbody equivalent source radiance was computed. Measured temperatures were plotted against the predicted temperatures. The RMS error between the data sets is 0.51 K.
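The blackbody-equivalent step inverts the band-integrated Planck function; in the common two-constant form it reads as below (K1 and K2 here are placeholders, not the report's LANDSAT 4 band 6 calibration constants):

```python
import numpy as np

def brightness_temperature(L, K1=670.0, K2=1280.0):
    """Blackbody-equivalent temperature (K) from band radiance L via the
    two-constant Planck inversion T = K2 / ln(K1/L + 1); K1 and K2 are
    placeholder band calibration constants."""
    return K2 / np.log(K1 / L + 1.0)

print(brightness_temperature(10.0))  # roughly 300 K for these placeholders
```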
Molecular modeling of biomolecules by paramagnetic NMR and computational hybrid methods.
Pilla, Kala Bharath; Gaalswyk, Kari; MacCallum, Justin L
2017-11-01
The 3D atomic structures of biomolecules and their complexes are key to our understanding of biomolecular function, recognition, and mechanism. However, it is often difficult to obtain structures, particularly for systems that are complex, dynamic, disordered, or exist in environments like cell membranes. In such cases sparse data from a variety of paramagnetic NMR experiments offers one possible source of structural information. These restraints can be incorporated in computer modeling algorithms that can accurately translate the sparse experimental data into full 3D atomic structures. In this review, we discuss various types of paramagnetic NMR/computational hybrid modeling techniques that can be applied to successful modeling of not only the atomic structure of proteins but also their interacting partners. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
A catchment scale water balance model for FIFE
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, E. F.; Sivapalan, M.; Thongs, D. J.
1992-01-01
A catchment scale water balance model is presented and used to predict evaporation from the King's Creek catchment at the First ISLSCP Field Experiment site on the Konza Prairie, Kansas. The model incorporates spatial variability in topography, soils, and precipitation to compute the land surface hydrologic fluxes. A network of 20 rain gages was employed to measure rainfall across the catchment in the summer of 1987. These data were spatially interpolated and used to drive the model during storm periods. During interstorm periods the model was driven by the estimated potential evaporation, which was calculated using net radiation data collected at site 2. Model-computed evaporation is compared to that observed, both at site 2 (grid location 1916-BRS) and the catchment scale, for the simulation period from June 1 to October 9, 1987.
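The paper does not state which spatial interpolator was applied to the 20-gage rainfall network; inverse-distance weighting is one plausible choice, sketched here with hypothetical coordinates and storm totals:

```python
import numpy as np

def idw_rainfall(xy_gages, rain, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation of gage rainfall onto
    model grid cells."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_gages[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # guard a grid point sitting on a gage
    w = 1.0 / d ** power
    return (w @ rain) / w.sum(axis=1)

rng = np.random.default_rng(0)
gages = rng.random((20, 2))          # the 20-gage network (unit catchment)
rain = rng.gamma(2.0, 5.0, size=20)  # storm totals, mm (illustrative)
grid = rng.random((100, 2))
print(idw_rainfall(gages, rain, grid)[:5])
```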
Collisional transport across the magnetic field in drift-fluid models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madsen, J., E-mail: jmad@fysik.dtu.dk; Naulin, V.; Nielsen, A. H.
2016-03-15
Drift ordered fluid models are widely applied in studies of low-frequency turbulence in the edge and scrape-off layer regions of magnetically confined plasmas. Here, we show how collisional transport across the magnetic field is self-consistently incorporated into drift-fluid models without altering the drift-fluid energy integral. We demonstrate that the inclusion of collisional transport in drift-fluid models gives rise to diffusion of particle density, momentum, and pressures in drift-fluid turbulence models and, thereby, obviates the customary use of artificial diffusion in turbulence simulations. We further derive a computationally efficient, two-dimensional model, which can be time integrated for several turbulence de-correlation times using only limited computational resources. The model describes interchange turbulence in a two-dimensional plane perpendicular to the magnetic field located at the outboard midplane of a tokamak. The model domain has two regions modeling open and closed field lines. The model employs a computationally expedient model for collisional transport. Numerical simulations show good agreement between the full and the simplified model for collisional transport.
Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena
2018-01-01
The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results. PMID:29765315
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
Incorporating the gas analyzer response time in gas exchange computations.
Mitchell, R R
1979-11-01
A simple method for including the gas analyzer response time in the breath-by-breath computation of gas exchange rates is described. The method uses a difference equation form of a model for the gas analyzer in the computation of oxygen uptake and carbon dioxide production and avoids a numerical differentiation required to correct the gas fraction wave forms. The effect of not accounting for analyzer response time is shown to be a 20% underestimation in gas exchange rate. The present method accurately measures gas exchange rate, is relatively insensitive to measurement errors in the analyzer time constant, and does not significantly increase the computation time.
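Modeling the analyzer as a first-order lag gives the difference equation the method builds on; inverting it recovers the true gas fraction sample by sample. The sketch below shows that relation only; the paper folds the model into the uptake computation itself rather than reconstructing the waveform:

```python
import numpy as np

def correct_analyzer(y, dt, tau):
    """Recover the true gas fraction x[n] from the measured analyzer
    output y[n], modeling the analyzer as first order:
        y[n] = y[n-1] + (dt / tau) * (x[n] - y[n-1])
    so that x[n] = y[n-1] + (tau / dt) * (y[n] - y[n-1])."""
    y = np.asarray(y, dtype=float)
    x = np.empty_like(y)
    x[0] = y[0]
    x[1:] = y[:-1] + (tau / dt) * (y[1:] - y[:-1])
    return x

# simulate a lagged analyzer reading a square-wave gas fraction, then invert
t = np.arange(0, 2, 0.01)
true = (t % 0.5 < 0.25).astype(float)
meas = np.empty_like(true)
meas[0] = true[0]
for n in range(1, len(t)):
    meas[n] = meas[n - 1] + (0.01 / 0.08) * (true[n] - meas[n - 1])
print(np.max(np.abs(correct_analyzer(meas, dt=0.01, tau=0.08) - true)))  # ~0
```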
NASA Astrophysics Data System (ADS)
Shi, X.; Utada, H.; Jiaying, W.
2009-12-01
The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled highly accurately using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem for the regularized misfit function. To avoid the huge memory requirement and very long time needed to compute the Jacobian sensitivity matrix in the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, which can be transformed into two pseudo-forward modeling runs. This avoids explicit calculation and storage of the full Jacobian matrix, leading to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm will be illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively employed in 3-D MT field data inversion.
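The matrix-free trick above pairs naturally with conjugate gradients on the normal equations, where each iteration costs exactly the two pseudo-forward modelings mentioned; a sketch (regularization omitted, operator names hypothetical, with a dense stand-in Jacobian for testing):

```python
import numpy as np

def cg_normal_equations(Jmul, JTmul, d, n, n_iter=50, tol=1e-12):
    """Conjugate gradients on J^T J m = J^T d without ever forming J:
    Jmul(x) and JTmul(y) are the two pseudo-forward modelings
    (products with the Jacobian and with its transpose)."""
    m = np.zeros(n)
    r = JTmul(d)                 # initial residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        if rs < tol:
            break
        Jp = Jmul(p)
        alpha = rs / (Jp @ Jp)   # since p . (J^T J p) = ||J p||^2
        m += alpha * p
        r -= alpha * JTmul(Jp)
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m

J = np.random.randn(30, 10)      # stand-in dense Jacobian for testing
d = np.random.randn(30)
m = cg_normal_equations(lambda x: J @ x, lambda y: J.T @ y, d, n=10)
print(np.linalg.norm(J.T @ (J @ m) - J.T @ d))   # near-zero residual
```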
NASA Astrophysics Data System (ADS)
Zhang, K.; Gasiewski, A. J.
2017-12-01
A horizontally inhomogeneous unified microwave radiative transfer (HI-UMRT) model based upon a nonspherical hydrometeor scattering model is being developed at the University of Colorado at Boulder to facilitate forward radiative simulations for 3-dimensionally inhomogeneous clouds in severe weather. The HI-UMRT 3-D analytical solution is based on incorporating a planar-stratified 1-D UMRT algorithm within a horizontally inhomogeneous iterative perturbation scheme. Single-scattering parameters are computed using the Discrete Dipole Scattering (DDSCAT v7.3) program for hundreds of carefully selected nonspherical complex frozen hydrometeors from the NASA/GSFC DDSCAT database. The required analytic factorization symmetry of transition matrix in a normalized RT equation was analytically proved and validated numerically using the DDSCAT-based full Stokes matrix of randomly oriented hydrometeors. The HI-UMRT model thus inherits the properties of unconditional numerical stability, efficiency, and accuracy from the UMRT algorithm and provides a practical 3-D two-Stokes parameter radiance solution with Jacobian to be used within microwave retrievals and data assimilation schemes. In addition, a fast forward radar reflectivity operator with Jacobian based on DDSCAT backscatter efficiency computed for large hydrometeors is incorporated into the HI-UMRT model to provide applicability to active radar sensors. The HI-UMRT will be validated strategically at two levels: 1) intercomparison of brightness temperature (Tb) results with those of several 1-D and 3-D RT models, including UMRT, CRTM and Monte Carlo models, 2) intercomparison of Tb with observed data from combined passive and active spaceborne sensors (e.g. GPM GMI and DPR). The precise expression for determining the required number of 3-D iterations to achieve an error bound on the perturbation solution will be developed to facilitate the numerical verification of the HI-UMRT code complexity and computation performance.
A computational procedure for multibody systems including flexible beam dynamics
NASA Technical Reports Server (NTRS)
Downer, J. D.; Park, K. C.; Chiou, J. C.
1990-01-01
A computational procedure suitable for the solution of equations of motion for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.
Experiences in Automated Calibration of a Nickel Equation of State
NASA Astrophysics Data System (ADS)
Carpenter, John H.
2017-06-01
Wide availability of large computers has led to increasing incorporation of computational data, such as from density functional theory molecular dynamics, in the development of equation of state (EOS) models. Once a grid of computational data is available, it is usually left to an expert modeler to model the EOS using traditional techniques. One can envision the possibility of using the increasing computing resources to perform black-box calibration of EOS models, with the goal of reducing the workload on the modeler or enabling non-experts to generate good EOSs with such a tool. Progress towards building such a black-box calibration tool will be explored in the context of developing a new, wide-range EOS for nickel. While some details of the model and data will be shared, the focus will be on what was learned by automatically calibrating the model in a black-box method. Model choices and ensuring physicality will also be discussed. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Impact of composite plates: Analysis of stresses and forces
NASA Technical Reports Server (NTRS)
Moon, F. C.; Kim, B. S.; Fang-Landau, S. R.
1976-01-01
The foreign object damage resistance of composite fan blades was studied. Edge impact stresses in an anisotropic plate were first calculated incorporating a constrained layer damping model. It is shown that a very thin damping layer can dramatically decrease the maximum normal impact stresses. A multilayer model of a composite plate is then presented which allows computation of the interlaminar normal and shear stresses. Results are presented for the stresses due to a line impact load normal to the plane of a composite plate. It is shown that significant interlaminar tensile stresses can develop during impact. A computer code was developed for this problem using the fast Fourier transform. A marker-and-cell computer code was also used to investigate the hydrodynamic impact of a fluid slug against a wall or turbine blade. The application of fluid modeling to bird impact is reviewed.
Computational experience with a three-dimensional rotary engine combustion model
NASA Astrophysics Data System (ADS)
Raju, M. S.; Willis, E. A.
1990-04-01
A new computer code was developed to analyze the chemically reactive flow and spray combustion processes occurring inside a stratified-charge rotary engine. Mathematical and numerical details of the new code were recently described by the present authors. The results are presented of limited, initial computational trials as a first step in a long-term assessment/validation process. The engine configuration studied was chosen to approximate existing rotary engine flow visualization and hot firing test rigs. Typical results include: (1) pressure and temperature histories, (2) torque generated by the nonuniform pressure distribution within the chamber, (3) energy release rates, and (4) various flow-related phenomena. These are discussed and compared with other predictions reported in the literature. The adequacy or need for improvement in the spray/combustion models and the need for incorporating an appropriate turbulence model are also discussed.
Computational experience with a three-dimensional rotary engine combustion model
NASA Technical Reports Server (NTRS)
Raju, M. S.; Willis, E. A.
1990-01-01
A new computer code was developed to analyze the chemically reactive flow and spray combustion processes occurring inside a stratified-charge rotary engine. Mathematical and numerical details of the new code were recently described by the present authors. Results of limited initial computational trials are presented as a first step in a long-term assessment/validation process. The engine configuration studied was chosen to approximate existing rotary engine flow visualization and hot firing test rigs. Typical results include: (1) pressure and temperature histories, (2) torque generated by the nonuniform pressure distribution within the chamber, (3) energy release rates, and (4) various flow-related phenomena. These are discussed and compared with other predictions reported in the literature. The adequacy or need for improvement in the spray/combustion models and the need for incorporating an appropriate turbulence model are also discussed.
NASA Technical Reports Server (NTRS)
Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)
1985-01-01
Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.
NASA Technical Reports Server (NTRS)
Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.
1993-01-01
A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors; and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PC computers. This enhancement allows PC computers to operate at speeds approaching those of supercomputers, making atomic-scale simulations possible on low-cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin film growth. A future stage of development will incorporate atomic scale modeling.
Model reduction for agent-based social simulation: coarse-graining a civil violence model.
Zou, Yu; Fonoberov, Vladimir A; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G
2012-06-01
Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).
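As a rough illustration of what a two-variable coarse-grained surrogate looks like, the sketch below evolves the counts of active and jailed citizens under hypothetical activation, arrest, and release rates with Poisson demographic noise. It mimics the structure, not the fitted dynamics, of the paper's reduced model.

import numpy as np

# Toy two-variable stochastic surrogate: active (A) and jailed (J) counts.
# All rates are illustrative, not calibrated to the ABM in the paper.
rng = np.random.default_rng(1)
N = 1000                                    # total citizens
A, J = 10, 0
k_spont, k_act, k_arrest, k_release = 0.01, 0.12, 0.25, 0.1
dt, steps = 0.1, 5000
history = []
for _ in range(steps):
    quiet = N - A - J
    act = rng.poisson((k_spont + k_act * A / N) * quiet * dt)  # activation
    arr = rng.poisson(k_arrest * A * dt)                       # arrests
    rel = rng.poisson(k_release * J * dt)                      # releases
    A = max(0, min(N - J, A + act - arr))
    J = max(0, min(N - A, J + arr - rel))
    history.append((A, J))
print("mean active, mean jailed:", np.mean(history, axis=0))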
Model reduction for agent-based social simulation: Coarse-graining a civil violence model
NASA Astrophysics Data System (ADS)
Zou, Yu; Fonoberov, Vladimir A.; Fonoberova, Maria; Mezic, Igor; Kevrekidis, Ioannis G.
2012-06-01
Agent-based modeling (ABM) constitutes a powerful computational tool for the exploration of phenomena involving emergent dynamic behavior in the social sciences. This paper demonstrates a computer-assisted approach that bridges the significant gap between the single-agent microscopic level and the macroscopic (coarse-grained population) level, where fundamental questions must be rationally answered and policies guiding the emergent dynamics devised. Our approach will be illustrated through an agent-based model of civil violence. This spatiotemporally varying ABM incorporates interactions between a heterogeneous population of citizens [active (insurgent), inactive, or jailed] and a population of police officers. Detailed simulations exhibit an equilibrium punctuated by periods of social upheavals. We show how to effectively reduce the agent-based dynamics to a stochastic model with only two coarse-grained degrees of freedom: the number of jailed citizens and the number of active ones. The coarse-grained model captures the ABM dynamics while drastically reducing the computation time (by a factor of approximately 20).
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.; Ashpis, D. E.; Volino, R. J.; Corke, T. C.; Thomas, F. O.; Huang, J.; Lake, J. P.; King, P. I.
2007-01-01
A transport equation for the intermittency factor is employed to predict the transitional flows in low-pressure turbines. The intermittent behavior of the transitional flows is taken into account and incorporated into computations by modifying the eddy viscosity, μ_t, with the intermittency factor, γ. Turbulent quantities are predicted using Menter's two-equation turbulence model (SST). The intermittency factor is obtained from a transport equation model which can produce both the experimentally observed streamwise variation of intermittency and a realistic profile in the cross-stream direction. The model had been previously validated against low-pressure turbine experiments with success. In this paper, the model is applied to predictions of three sets of recent low-pressure turbine experiments on the Pack B blade to further validate its predicting capabilities under various flow conditions. Comparisons of computational results with experimental data are provided. Overall, good agreement between the experimental data and computational results is obtained. The new model has been shown to have the capability of accurately predicting transitional flows under a wide range of low-pressure turbine conditions.
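At the level of the momentum equations, the coupling between intermittency and turbulence model reduces to scaling the eddy viscosity. The sketch below illustrates that coupling with a simple algebraic smoothstep ramp standing in for the full γ transport equation; station locations and the viscosity value are placeholders.

import numpy as np

# Sketch of how an intermittency factor gamma modulates the eddy viscosity
# in a transitional RANS computation: mu_eff = gamma * mu_t. Here gamma is
# a simple algebraic ramp in the streamwise coordinate, a stand-in for the
# transport-equation model described in the abstract.
def intermittency(x, x_transition=0.4, x_turbulent=0.7):
    # 0 upstream (laminar), ramping smoothly to 1 (fully turbulent).
    s = np.clip((x - x_transition) / (x_turbulent - x_transition), 0.0, 1.0)
    return 3.0 * s**2 - 2.0 * s**3        # smoothstep profile

x = np.linspace(0.0, 1.0, 11)             # normalized streamwise stations
mu_t = 1e-3 * np.ones_like(x)             # eddy viscosity from SST (placeholder)
mu_eff = intermittency(x) * mu_t          # viscosity actually used by the solver
print(np.round(mu_eff, 6))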
Airport-Noise Levels and Annoyance Model (ALAMO) user's guide
NASA Technical Reports Server (NTRS)
Deloach, R.; Donaldson, J. L.; Johnson, M. J.
1986-01-01
A guide for the use of the Airport-Noise Level and Annoyance MOdel (ALAMO) at the Langley Research Center computer complex is provided. This document is divided into five primary sections: the introduction, the purpose of the model, and in-depth descriptions of three subsystems: baseline, noise reduction simulation, and track analysis. For each subsystem, the user is provided with a description of architecture, an explanation of subsystem use, sample results, and a case runner's check list. It is assumed that the user is familiar with the operations at the Langley Research Center (LaRC) computer complex, the Network Operating System (NOS 1.4) and CYBER Control Language. Incorporated within the ALAMO model is a census database system called SITE II.
Zhang, Jing; Zhang, Rimei; Ren, Guanghui; Zhang, Xiaojie
2017-02-01
This article describes a method that integrates the solid modeling CAD software Solidworks with a dental milling machine to fabricate individual abutments in house. This process involves creating an implant library with 3-dimensional (3D) models and manufacturing a base, scan element, abutment, and crown anatomy. The 3D models can be imported into any dental computer-aided design and computer-aided manufacturing (CAD-CAM) system. This platform increases abutment design flexibility, as the base and scan elements can be designed to fit several shapes as needed to meet clinical requirements. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Computers in mathematics: teacher-inservice training at a distance
NASA Astrophysics Data System (ADS)
Friedman, Edward A.; Jurkat, M. P.
1993-01-01
While research and experience show many advantages for incorporation of computer technology into secondary school mathematics instruction, less than 5 percent of the nation's teachers are actively using computers in their classrooms. This is the case even though mathematics teachers in grades 7-12 are often familiar with computer technology and have computers available to them in their schools. The implementation bottleneck is in-service teacher training, and there are few models of effective implementation available for teachers to emulate. Stevens Institute of Technology has been active since 1988 in research and development efforts to incorporate computers into classroom use. We have found that teachers need to see examples of classroom experience with hardware and software, and they need to have assistance as they experiment with applications of software and the development of lesson plans. High-bandwidth technology can greatly facilitate teacher training in this area through transmission of video documentaries, software discussions, teleconferencing, peer interactions, classroom observations, etc. We discuss the experience that Stevens has had with face-to-face teacher training as well as with satellite-based teleconferencing using one-way video and two-way audio. Included are reviews of analyses of this project by researchers from Educational Testing Service, Princeton University, and Bank Street School of Education.
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Multiple altitude TM thermal infrared images were analyzed and the observed radiance values were computed. The data obtained represent an experimental relation between perceived radiance and altitude. A LOWTRAN approach was tested which incorporates a modification to the path radiance model. This modification assumes that the scattering out of the optical path is equal in magnitude and direction to the scattering into the path. The radiance observed at altitude by an aircraft sensor was used as input to the model. Expected radiance as a function of altitude was then computed down to the ground. The results were not very satisfactory because of somewhat large errors in temperature and because of the difference in the shape of the modeled and experimental curves.
Numerical Simulations of Plasma Based Flow Control Applications
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.; Jacob, J. D.; Ashpis, D. E.
2005-01-01
A mathematical model was developed to simulate flow control applications using plasma actuators. The effects of the plasma actuators on the external flow are incorporated into Navier Stokes computations as a body force vector. In order to compute this body force vector, the model solves two additional equations: one for the electric field due to the applied AC voltage at the electrodes and the other for the charge density representing the ionized air. The model is calibrated against an experiment having plasma-driven flow in a quiescent environment and is then applied to simulate a low pressure turbine flow with large flow separation. The effects of the plasma actuator on control of flow separation are demonstrated numerically.
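A minimal sketch of the body-force construction is given below: the electric potential is relaxed on a small grid with Dirichlet electrode patches, the field is formed as E = -grad(phi), and the force f = rho_c * E is assembled for the momentum equations. The geometry, applied voltage, and charge-density distribution are placeholders rather than the calibrated model of the paper.

import numpy as np

# Sketch of the plasma-actuator body-force idea: relax the potential with
# Jacobi iteration (Laplace equation, Dirichlet electrode patches), then
# form the body force f = rho_c * E that would be added to the
# Navier-Stokes momentum equations. All values are illustrative.
nx, ny, h = 64, 32, 1.0e-3
phi = np.zeros((ny, nx))
exposed = (slice(0, 1), slice(10, 20))     # exposed electrode at the wall
buried = (slice(0, 1), slice(25, 35))      # grounded, dielectric-covered strip
for _ in range(2000):                      # Jacobi sweeps for Laplace(phi)=0
    phi[1:-1, 1:-1] = 0.25 * (phi[1:-1, :-2] + phi[1:-1, 2:]
                              + phi[:-2, 1:-1] + phi[2:, 1:-1])
    phi[exposed] = 1000.0                  # applied voltage (V), re-imposed
    phi[buried] = 0.0

Ey, Ex = np.gradient(-phi, h)              # E = -grad(phi)
rho_c = np.zeros_like(phi)                 # ionized-air charge density (C/m^3)
rho_c[:5, 15:30] = 1.0e-4                  # hypothetical plasma region
fx, fy = rho_c * Ex, rho_c * Ey            # body force for the flow solver
print("peak |f|:", np.hypot(fx, fy).max(), "N/m^3")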
Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.
This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Altogether, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.
NASA Astrophysics Data System (ADS)
Sakaizawa, Ryosuke; Kawai, Takaya; Sato, Toru; Oyama, Hiroyuki; Tsumune, Daisuke; Tsubono, Takaki; Goto, Koichi
2018-03-01
The target seas of tidal-current models are usually semi-closed bays, minimally affected by ocean currents. For these models, tidal currents are simulated in computational domains with a spatial scale of a couple hundred kilometers or less, by setting tidal elevations at their open boundaries. However, when ocean currents cannot be ignored in the sea areas of interest, such as in open seas near coastlines, it is necessary to include ocean-current effects in these tidal-current models. In this study, we developed a numerical method to analyze tidal currents near coasts by incorporating pre-calculated ocean-current velocities. First, a large regional-scale simulation with a spatial scale of several thousand kilometers was conducted and temporal changes in the ocean-current velocity at each grid point were stored. Next, the spatially and temporally interpolated ocean-current velocity was incorporated as forcing into the cross terms of the convection term of a tidal-current model having computational domains with spatial scales of hundreds of kilometers or less. Then, we applied this method to the diffusion of dissolved CO2 in a sea area off Tomakomai, Japan, and compared the numerical results and measurements to validate the proposed method.
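The coupling step described above reduces to interpolating stored coarse-grid velocities to the fine tidal grid in space and time. The sketch below does this with a regular-grid interpolator over a synthetic (t, y, x) velocity field; grids, extents, and the field itself are placeholders.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Sketch of feeding a pre-computed large-scale ocean-current field into a
# finer tidal-current model: velocities stored on a coarse (t, y, x) grid
# are interpolated to fine-model grid points, where they would enter the
# cross terms of the convection term as forcing. Everything is synthetic.
t_c = np.linspace(0.0, 86400.0, 25)            # one day, hourly snapshots
y_c = np.linspace(0.0, 3.0e6, 31)              # coarse domain ~3000 km
x_c = np.linspace(0.0, 3.0e6, 31)
U = 0.3 * np.sin(2 * np.pi * t_c / 86400.0)[:, None, None] \
    * np.ones((t_c.size, y_c.size, x_c.size))  # stored u-velocity (m/s)

u_ocean = RegularGridInterpolator((t_c, y_c, x_c), U)

# Fine tidal-model grid points (a ~100 km coastal patch) at one time level.
t_now = 3725.0
yy, xx = np.meshgrid(np.linspace(1.0e6, 1.1e6, 50),
                     np.linspace(1.2e6, 1.3e6, 50), indexing="ij")
pts = np.column_stack([np.full(yy.size, t_now), yy.ravel(), xx.ravel()])
u_forcing = u_ocean(pts).reshape(yy.shape)     # forcing for the tidal model
print(u_forcing.mean())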
A model for the rapid assessment of the impact of aviation noise near airports.
Torija, Antonio J; Self, Rod H; Flindell, Ian H
2017-02-01
This paper introduces a simplified model [Rapid Aviation Noise Evaluator (RANE)] for the calculation of aviation noise within the context of multi-disciplinary strategic environmental assessment where input data are both limited and constrained by compatibility requirements against other disciplines. RANE relies upon the concept of noise cylinders around defined flight-tracks with the Noise Radius determined from publicly available Noise-Power-Distance curves rather than the computationally intensive multiple point-to-point grid calculation with subsequent ISO-contour interpolation methods adopted in the FAA's Integrated Noise Model (INM) and similar models. Preliminary results indicate that for simple single runway scenarios, changes in airport noise contour areas can be estimated with minimal uncertainty compared against grid-point calculation methods such as INM. In situations where such outputs are all that is required for preliminary strategic environmental assessment, there are considerable benefits in reduced input data and computation requirements. Further development of the noise-cylinder-based model (such as the incorporation of lateral attenuation, engine-installation-effects or horizontal track dispersion via the assumption of more complex noise surfaces formed around the flight-track) will allow for more complex assessment to be carried out. RANE is intended to be incorporated into technology evaluators for the noise impact assessment of novel aircraft concepts.
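The heart of the noise-cylinder calculation is inverting an NPD curve for the radius at which the level falls to a contour threshold. The sketch below interpolates a made-up NPD table and then approximates the contour area of a straight track as a stadium shape (rectangle plus end caps); both the table values and the footprint formula are illustrative assumptions, not RANE's exact procedure.

import numpy as np

# Hypothetical Noise-Power-Distance (NPD) table for one aircraft/power
# setting; real NPD data would be taken from published databases.
npd_distance = np.array([200.0, 400.0, 630.0, 1000.0, 2000.0, 4000.0])  # m
npd_level = np.array([94.0, 88.0, 83.0, 77.0, 68.0, 59.0])              # dB

def noise_radius(threshold_db):
    # Levels fall monotonically with distance, so interpolate on the
    # reversed arrays (np.interp needs increasing x).
    return np.interp(threshold_db, npd_level[::-1], npd_distance[::-1])

r = noise_radius(75.0)                        # cylinder radius, 75 dB contour
track_length = 5000.0                         # m of straight flight track
area = 2.0 * r * track_length + np.pi * r**2  # rectangle plus end caps
print(f"radius {r:.0f} m, contour area {area / 1e6:.2f} km^2")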
Full cell simulation and the evaluation of the buffer system on air-cathode microbial fuel cell
Ou, Shiqi; Kashima, Hiroyuki; Aaron, Douglas S.; ...
2017-02-23
This paper presents a computational model of a single chamber, air-cathode MFC. The model considers losses due to mass transport, as well as biological and electrochemical reactions, in both the anode and cathode half-cells. Computational fluid dynamics and Monod-Nernst analysis are incorporated into the reactions for the anode biofilm and cathode Pt catalyst and biofilm. The integrated model provides a macro-perspective of the interrelation between the anode and cathode during power production, while incorporating microscale contributions of mass transport within the anode and cathode layers. Model considerations include the effects of pH (H+/OH- transport) and electric field-driven migration on concentration overpotential, effects of various buffers and various amounts of buffer on the pH in the whole reactor, and overall impacts on the power output of the MFC. The simulation results fit the experimental polarization and power density curves well. Further, this model provides insight regarding mass transport at varying current density regimes and quantitative delineation of overpotentials at the anode and cathode. Altogether, this comprehensive simulation is designed to accurately predict MFC performance based on fundamental fluid and kinetic relations and guide optimization of the MFC system.
2005-02-01
literature to be used in modeling of the results (10). 2. Background The separation of the regions of highest particulate and aromatic concentrations... modeling calculations incorporating the well-characterized C2 combustion mechanism of Frenklach et al. (10). This mechanism was developed for...experimentally and modeled, and shown to occur via different pathways within the context of a detailed chemical mechanism. In particular, ethanol
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
Computational simulation of extravehicular activity dynamics during a satellite capture attempt.
Schaffner, G; Newman, D J; Robinson, S K
2000-01-01
A more quantitative approach to the analysis of astronaut extravehicular activity (EVA) tasks is needed because of their increasing complexity, particularly in preparation for the on-orbit assembly of the International Space Station. Existing useful EVA computer analyses produce either high-resolution three-dimensional computer images based on anthropometric representations or empirically derived predictions of astronaut strength based on lean body mass and the position and velocity of body joints but do not provide multibody dynamic analysis of EVA tasks. Our physics-based methodology helps fill the current gap in quantitative analysis of astronaut EVA by providing a multisegment human model and solving the equations of motion in a high-fidelity simulation of the system dynamics. The simulation work described here improves on the realism of previous efforts by including three-dimensional astronaut motion, incorporating joint stops to account for the physiological limits of range of motion, and incorporating use of constraint forces to model interaction with objects. To demonstrate the utility of this approach, the simulation is modeled on an actual EVA task, namely, the attempted capture of a spinning Intelsat VI satellite during STS-49 in May 1992. Repeated capture attempts by an EVA crewmember were unsuccessful because the capture bar could not be held in contact with the satellite long enough for the capture latches to fire and successfully retrieve the satellite.
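One ingredient mentioned above, joint stops enforcing physiological range-of-motion limits, is commonly modeled as a stiff penalty torque. The sketch below shows such a model; stiffness, damping, and the angle limits are illustrative values, not those of the STS-49 simulation.

# Sketch of a joint-stop model of the kind used to enforce physiological
# range-of-motion limits in multibody EVA simulation: outside the allowed
# range, a stiff penalty torque (spring plus damper) pushes the joint back.
def joint_stop_torque(theta, omega, lo=-0.5, hi=2.0, k=500.0, c=5.0):
    """Penalty torque (N*m) for joint angle theta (rad), rate omega (rad/s)."""
    if theta > hi:
        return -k * (theta - hi) - c * omega
    if theta < lo:
        return -k * (theta - lo) - c * omega
    return 0.0

print(joint_stop_torque(2.1, 0.3))   # past the upper stop: restoring torque
print(joint_stop_torque(1.0, 0.3))   # within range: no stop torque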
Warren, K M; Mpagazehe, J N; LeDuc, P R; Higgs, C F
2016-02-07
The response of individual cells at the micro-scale in cell mechanics is important in understanding how they are affected by changing environments. To control cell stresses, microfluidics can be implemented since there is tremendous control over the geometry of the devices. Designing microfluidic devices to induce and manipulate stress levels on biological cells can be aided by computational modeling approaches. Such approaches serve as an efficient precursor to fabricating various microfluidic geometries that induce predictable levels of stress on biological cells, based on their mechanical properties. Here, a three-dimensional, multiphase computational fluid dynamics (CFD) modeling approach was implemented for soft biological materials. The computational model incorporates the physics of the particle dynamics, fluid dynamics and solid mechanics, which allows us to study how stresses affect the cells. By using an Eulerian-Lagrangian approach to treat the fluid domain as a continuum in the microfluidics, we are conducting studies of the cells' movement and the stresses applied to the cell. As a result of our studies, we were able to determine that a channel with periodically alternating columns of obstacles was capable of stressing cells at the highest rate, and that microfluidic systems can be engineered to impose heterogeneous cell stresses through geometric configuring. We found that when using controlled geometries of the microfluidics channels with staggered obstructions, we could increase the maximum cell stress by nearly 200 times over cells flowing through microfluidic channels with no obstructions. Incorporating computational modeling in the design of microfluidic configurations for controllable cell stressing could help in the design of microfluidic devices for stressing cells such as cell homogenizers.
Single-channel autocorrelation functions: the effects of time interval omission.
Ball, F G; Sansom, M S
1988-01-01
We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553
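The paper's claim that zero correlations reflect the gating mechanism rather than omission can be checked numerically. The sketch below simulates a two-state channel with exponential dwell times, merges openings separated by gaps shorter than a dead time, and confirms that the lag-1 correlation of apparent open times stays near zero; rates and the dead time are illustrative.

import numpy as np

# Two-state (closed-open) gating: successive open dwell times remain
# uncorrelated even after time interval omission, in which closed gaps
# shorter than a dead time are missed and the flanking openings merge.
rng = np.random.default_rng(2)
n_dwell = 200000
closed = rng.exponential(1.0 / 500.0, n_dwell)   # mean 2 ms closed (s)
opened = rng.exponential(1.0 / 200.0, n_dwell)   # mean 5 ms open (s)
dead = 1.0e-3                                    # 1 ms dead time

# Apply omission: an unresolved short gap merges two openings (and the gap
# itself) into one apparent opening. Short open dwells are ignored here
# for simplicity; the full theory treats both directions.
apparent = []
cur = opened[0]
for c, o in zip(closed[1:], opened[1:]):
    if c < dead:
        cur += c + o          # gap missed: extend the apparent open time
    else:
        apparent.append(cur)
        cur = o
apparent.append(cur)
a = np.array(apparent)

lag1 = np.corrcoef(a[:-1], a[1:])[0, 1]
print(f"lag-1 correlation of apparent open times: {lag1:+.4f}")  # ~0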
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models to model fault-handling processes.
Shao, Xiongjun; Lynd, Lee; Wyman, Charles; Bakker, André
2009-01-01
The model of South et al. [South et al. (1995) Enzyme Microb Technol 17(9): 797-803] for simultaneous saccharification and fermentation of cellulosic biomass is extended and modified to accommodate intermittent feeding of substrate and enzyme, cascade reactor configurations, and to be more computationally efficient. A dynamic enzyme adsorption model is found to be much more computationally efficient than the equilibrium model used previously, thus increasing the feasibility of incorporating the kinetic model in a computational fluid dynamic framework in the future. For continuous or discretely fed reactors, it is necessary to use particle conversion in conversion-dependent hydrolysis rate laws rather than reactor conversion. Whereas reactor conversion decreases due to both reaction and exit of particles from the reactor, particle conversion decreases due to reaction only. Using the modified models, it is predicted that cellulose conversion increases with decreasing feeding frequency (feedings per residence time, f). A computationally efficient strategy for modeling cascade reactors involving a modified rate constant is shown to give equivalent results relative to an exhaustive approach considering the distribution of particles in each successive fermenter.
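As a small illustration of a conversion-dependent rate law of the kind discussed, the sketch below integrates particle conversion under r = k(1 - x)^n for a single batch particle; k and n are hypothetical values, not the fitted SSF parameters.

import numpy as np
from scipy.integrate import solve_ivp

# Conversion-dependent hydrolysis: the specific rate declines as particle
# conversion x rises. Distinguishing particle from reactor conversion
# matters in continuously fed reactors; a single batch particle is
# integrated here. k and n are hypothetical.
k, n = 0.08, 2.0            # 1/h, conversion-dependence exponent

def rhs(t, y):
    x = y[0]                # particle conversion (0..1)
    return [k * (1.0 - x) ** n]

sol = solve_ivp(rhs, (0.0, 72.0), [0.0], dense_output=True)
for t in (12, 24, 48, 72):
    print(f"t = {t:2d} h, conversion = {sol.sol(t)[0]:.3f}")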
1993-11-01
way is to develop a crude but working model of an entire system. The other is by developing a realistic model of the user interface, leaving out most...devices or by incorporating software for a more user-friendly interface. Automation introduces the possibility of making data entry errors. Multimode...across various human-computer interfaces. Memory: Minimize the amount of information that the user must maintain in short-term memory
The Collaborative Seismic Earth Model Project
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Herwaarden, D. P.; Afanasiev, M.
2017-12-01
We present the first generation of the Collaborative Seismic Earth Model (CSEM). This effort is intended to address grand challenges in tomography that currently inhibit imaging the Earth's interior across the seismically accessible scales: [1] For decades to come, computational resources will remain insufficient for the exploitation of the full observable seismic bandwidth. [2] With the manpower of individual research groups, only small fractions of available waveform data can be incorporated into seismic tomographies. [3] The limited incorporation of prior knowledge on 3D structure leads to slow progress and inefficient use of resources. The CSEM is a multi-scale model of global 3D Earth structure that evolves continuously through successive regional refinements. Taking the current state of the CSEM as initial model, these refinements are contributed by external collaborators, and used to advance the CSEM to the next state. This mode of operation allows the CSEM to [1] harness the distributed manpower and computing power of the community, [2] make consistent use of prior knowledge, and [3] combine different tomographic techniques, needed to cover the seismic data bandwidth. Furthermore, the CSEM has the potential to serve as a unified and accessible representation of tomographic Earth models. Generation 1 comprises around 15 regional tomographic refinements, computed with full-waveform inversion. These include continental-scale mantle models of North America, Australasia, Europe and the South Atlantic, as well as detailed regional models of the crust beneath the Iberian Peninsula and western Turkey. A global-scale full-waveform inversion ensures that regional refinements are consistent with whole-Earth structure. This first generation will serve as the basis for further automation and methodological improvements concerning validation and uncertainty quantification.
Modeling Images of Natural 3D Surfaces: Overview and Potential Applications
NASA Technical Reports Server (NTRS)
Jalobeanu, Andre; Kuehnel, Frank; Stutz, John
2004-01-01
Generative models of natural images have long been used in computer vision. However, since they only describe the appearance of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.; ...
2017-11-26
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
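The computational appeal of the integral projection formulation is that, once discretized, advancing the development-stage distribution is a single matrix-vector product. The sketch below uses a generic Gaussian development kernel as a stand-in for the beetle model's temperature-dependent rates.

import numpy as np

# Integral projection model sketch: the distribution n_t(x) over a
# development index x in [0, 1] advances by
# n_{t+1}(y) = integral of k(y, x) n_t(x) dx, discretized as K @ n.
m = 200
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]

def kernel(y, x, mean_advance=0.02, sd=0.01):
    # Density of moving from stage x to stage y in one day; the Gaussian
    # spread stands in for phenotypic rate variability.
    return np.exp(-0.5 * ((y - x - mean_advance) / sd) ** 2) \
           / (sd * np.sqrt(2 * np.pi))

K = kernel(x[:, None], x[None, :]) * dx      # discretized projection matrix
n = np.exp(-0.5 * ((x - 0.1) / 0.03) ** 2)   # initial cohort near x = 0.1
n /= n.sum() * dx
for _ in range(30):                          # project 30 days forward
    n = K @ n
print("mean development index after 30 days:", (x * n).sum() * dx)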
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei
2013-08-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
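For reference, the Tait closure used for the water region takes the form sketched below; the constants are commonly quoted values for water, and the lithotripsy model's exact parameters may differ.

# Tait-form equation of state for water: p = B * ((rho / rho0)**N - 1) + p0.
RHO0 = 998.0          # reference density, kg/m^3
P0 = 101325.0         # reference pressure, Pa
B = 3.31e8            # Tait stiffness, Pa
N = 7.15              # Tait exponent

def tait_pressure(rho):
    return B * ((rho / RHO0) ** N - 1.0) + P0

def tait_sound_speed(rho):
    # c^2 = dp/drho = B * N * (rho / rho0)**(N - 1) / rho0
    return (B * N * (rho / RHO0) ** (N - 1.0) / RHO0) ** 0.5

print(tait_pressure(1050.0) / 1e6, "MPa")   # modest compression, high pressure
print(tait_sound_speed(RHO0), "m/s")        # ~1540 m/s at reference density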
Fovargue, Daniel E.; Mitran, Sorin; Smith, Nathan B.; Sankin, Georgy N.; Simmons, Walter N.; Zhong, Pei
2013-01-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model. PMID:23927200
Bikson, Marom; Rahman, Asif; Datta, Abhishek; Fregni, Felipe; Merabet, Lotfi
2012-01-01
Objectives Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that delivers low-intensity currents facilitating or inhibiting spontaneous neuronal activity. tDCS is attractive since dose is readily adjustable by simply changing electrode number, position, size, shape, and current. In the recent past, computational models have been developed with increased precision with the goal of helping to customize tDCS dose. The aim of this review is to discuss the incorporation of high-resolution patient-specific computer modeling to guide and optimize tDCS. Methods In this review, we discuss the following topics: (i) the clinical motivation and rationale for models of transcranial stimulation, considered pivotal in order to leverage the flexibility of neuromodulation; (ii) the protocols and the workflow for developing high-resolution models; (iii) the technical challenges and limitations of interpreting modeling predictions, and (iv) real cases merging modeling and clinical data illustrating the impact of computational models on the rational design of rehabilitative electrotherapy. Conclusions Though modeling for non-invasive brain stimulation is still in its development phase, it is predicted that with increased validation, dissemination, simplification and democratization of modeling tools, computational forward models of neuromodulation will become useful tools to guide the optimization of clinical electrotherapy. PMID:22780230
On 3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Chen, P. C.; Dame, L. T.; Holt, R. V.; Huang, H.; Hartle, M.; Gellin, S.; Allen, D. H.; Haisler, W. E.
1986-01-01
Accomplishments are described for the 2-year program to develop advanced 3-D inelastic structural stress analysis methods and solution strategies for more accurate and cost effective analysis of combustors, turbine blades and vanes. The approach was to develop a matrix of formulation elements and constitutive models. Three constitutive models were developed in conjunction with optimized iterating techniques, accelerators, and convergence criteria within a framework of dynamic time incrementing. Three formulation models were developed: an eight-noded mid-surface shell element, a nine-noded mid-surface shell element, and a twenty-noded isoparametric solid element. A separate computer program was developed for each combination of constitutive model-formulation model. Each program provides a functional stand alone capability for performing cyclic nonlinear structural analysis. In addition, the analysis capabilities incorporated into each program can be abstracted in subroutine form for incorporation into other codes or to form new combinations.
The 3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.
1992-01-01
A two-year program to develop advanced 3D inelastic structural stress analysis methods and solution strategies for more accurate and cost effective analysis of combustors, turbine blades, and vanes is described. The approach was to develop a matrix of formulation elements and constitutive models. Three constitutive models were developed in conjunction with optimized iterating techniques, accelerators, and convergence criteria within a framework of dynamic time incrementing. Three formulation models were developed: an eight-noded midsurface shell element; a nine-noded midsurface shell element; and a twenty-noded isoparametric solid element. A separate computer program has been developed for each combination of constitutive model-formulation model. Each program provides a functional stand alone capability for performing cyclic nonlinear structural analysis. In addition, the analysis capabilities incorporated into each program can be abstracted in subroutine form for incorporation into other codes or to form new combinations.
Program Helps To Determine Chemical-Reaction Mechanisms
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Radhakrishnan, K.
1995-01-01
General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and provides for sensitivity analysis for variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.
An Ada Linear-Algebra Software Package Modeled After HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Lawson, Charles L.
1990-01-01
New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPACK for solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.
Development of a Real-Time Intelligent Network Environment.
ERIC Educational Resources Information Center
Gordonov, Anatoliy; Kress, Michael; Klibaner, Roberta
This paper presents a model of an intelligent computer network that provides real-time evaluation of students' performance by incorporating intelligence into the application layer protocol. Specially designed drills allow students to independently solve a number of problems based on current lecture material; students are switched to the most…
Student Satisfaction with Online Learning: Is It a Psychological Contract?
ERIC Educational Resources Information Center
Dziuban, Charles; Moskal, Patsy; Thompson, Jessica; Kramer, Lauren; DeCantis, Genevieve; Hermsdorfer, Andrea
2015-01-01
The authors explore the possible relationship between student satisfaction with online learning and the theory of psychological contracts. The study incorporates latent trait models using the image analysis procedure and computation of Anderson and Rubin factor scores with contrasts for students who are satisfied, ambivalent, or dissatisfied with…
This course will introduce students to the fundamental principles of water system adaptation to hydrological changes, with emphasis on data analysis and interpretation, technical planning, and computational modeling. Starting with real-world scenarios and adaptation needs, the co...
The super-Turing computational power of plastic recurrent neural networks.
Cabessa, Jérémie; Siegelmann, Hava T
2014-12-01
We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model so that the nature of the updates is assumed to be not constrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as the static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1993-01-01
Critical issues concerning the modeling of low density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high enthalpy flow facilities, such as shock tubes and ballistic ranges.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1992-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
Graded meshes in bio-thermal problems with transmission-line modeling method.
Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G
2014-10-01
In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques, including the application of graded meshes, which resulted in computations 9 times faster while using only a fraction (16%) of the computational resources required by regular meshes in analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, and thus results in a more realistic modeling of complex problems, is introduced. Also, a new way of calculating an error parameter is introduced. The calculated temperatures between nodes were compared against the results obtained from the literature and agreed within less than 1% difference. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential in heat transfer of biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
NASA Technical Reports Server (NTRS)
Knight, Doyle D.; Badekas, Dias
1991-01-01
The swept oblique shock-wave/turbulent-boundary-layer interaction generated by a 20-deg sharp fin at Mach 4 and Reynolds number 21,000 is investigated via a series of computations using both conical and three-dimensional Reynolds-averaged Navier-Stokes equations with turbulence incorporated through the algebraic turbulent eddy viscosity model of Baldwin-Lomax. Results are compared with known experimental data, and it is concluded that (1) the computed three-dimensional flowfield is quasi-conical, in agreement with the experimental data; (2) the computed three-dimensional and conical surface pressure and surface flow direction are in good agreement with the experiment; and (3) both the three-dimensional and conical computations significantly underpredict the peak experimental skin friction. It is pointed out that most of the features of the conical flowfield model inferred from the experiment are observed in the conical computation, which also describes the complete conical streamline pattern not included in the experimentally derived model.
Forced response of mistuned bladed disk assemblies
NASA Technical Reports Server (NTRS)
Watson, Brian C.; Kamat, Manohar P.; Murthy, Durbha V.
1993-01-01
A complete analytic model of mistuned bladed disk assemblies, designed to simulate the dynamical behavior of these systems, is analyzed. The model incorporates a generalized method for describing the mistuning of the assembly through the introduction of specific mistuning modes. The model is used to develop a computational bladed disk assembly model for a series of parametric studies. Results are presented demonstrating that the response amplitudes of bladed disk assemblies depend both on the excitation mode and on the mistune mode.
VARTM Model Development and Verification
NASA Technical Reports Server (NTRS)
Cano, Roberto J. (Technical Monitor); Dowling, Norman E.
2004-01-01
In this investigation, a comprehensive Vacuum Assisted Resin Transfer Molding (VARTM) process simulation model was developed and verified. The model incorporates resin flow through the preform, compaction and relaxation of the preform, and viscosity and cure kinetics of the resin. The computer model can be used to analyze the resin flow details, track the thickness change of the preform, predict the total infiltration time and final fiber volume fraction of the parts, and determine whether the resin could completely infiltrate and uniformly wet out the preform.
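The resin-flow ingredient of such a simulation is Darcy's law through the porous preform. Under a constant pressure difference the 1D flow front obeys L(t) = sqrt(2 K dP t / (phi mu)), inverted below for fill time; the property values are generic placeholders, not the verified model inputs.

# One-dimensional Darcy's-law sketch of resin infiltration: the flow front
# position follows L(t) = sqrt(2 K dP t / (phi mu)) under constant dP.
K = 2.0e-10       # preform permeability, m^2
phi = 0.5         # preform porosity
mu = 0.2          # resin viscosity, Pa*s
dP = 9.0e4        # vacuum-driven pressure difference, Pa

def fill_time(length):
    # Invert L(t) for the time to infiltrate a part of given length (m).
    return phi * mu * length**2 / (2.0 * K * dP)

for L in (0.25, 0.5, 1.0):
    print(f"L = {L:.2f} m -> fill time = {fill_time(L) / 60.0:.1f} min")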
Gao, Jiali; Major, Dan T; Fan, Yao; Lin, Yen-Lin; Ma, Shuhua; Wong, Kin-Yiu
2008-01-01
A method for incorporating quantum mechanics into enzyme kinetics modeling is presented. Three aspects are emphasized: 1) combined quantum mechanical and molecular mechanical methods are used to represent the potential energy surface for modeling bond forming and breaking processes, 2) instantaneous normal mode analyses are used to incorporate quantum vibrational free energies to the classical potential of mean force, and 3) multidimensional tunneling methods are used to estimate quantum effects on the reaction coordinate motion. Centroid path integral simulations are described to make quantum corrections to the classical potential of mean force. In this method, the nuclear quantum vibrational and tunneling contributions are not separable. An integrated centroid path integral-free energy perturbation and umbrella sampling (PI-FEP/UM) method along with a bisection sampling procedure was summarized, which provides an accurate, easily convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. In the ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT), these three aspects of quantum mechanical effects can be individually treated, providing useful insights into the mechanism of enzymatic reactions. These methods are illustrated by applications to a model process in the gas phase, the decarboxylation reaction of N-methyl picolinate in water, and the proton abstraction and reprotonation process catalyzed by alanine racemase. These examples show that the incorporation of quantum mechanical effects is essential for enzyme kinetics simulations.
High performance MRI simulations of motion on multi-GPU systems.
Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H
2014-07-04
MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echoes formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. The incorporation of realistic motion models, such as cardiac motion, respiratory motion and flow may benefit the design and optimization of existing or new MR pulse sequences, protocols and algorithms, which examine motion related MR applications.
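The per-timestep kernel that such a simulator parallelizes over isochromats amounts to accruing phase from the gradient evaluated at each spin's current, moving position. The sketch below vectorizes that loop over a 1D set of isochromats, with a sinusoidal displacement standing in for the cardiac/respiratory motion models; all parameters are illustrative, and this is not MRISIMUL's GPU code.

import numpy as np

# Each isochromat accrues phase from the gradient field at its *current*
# (moving) position every timestep; motion here is a simple 1D sinusoid.
GAMMA = 2 * np.pi * 42.58e6      # rad/s/T for protons
dt, n_steps = 1.0e-5, 2000       # 10 us timestep, 20 ms readout
t = np.arange(n_steps) * dt
G = 10e-3 * np.ones(n_steps)     # 10 mT/m constant gradient (placeholder)
x0 = np.linspace(-0.01, 0.01, 512)            # isochromat rest positions (m)
amp, f = 1.0e-3, 1.0                          # 1 mm motion at 1 Hz
phase = np.zeros_like(x0)
for k in range(n_steps):                      # per-timestep kernel, vectorized
    x = x0 + amp * np.sin(2 * np.pi * f * t[k])   # displaced positions
    phase += GAMMA * G[k] * x * dt
signal = np.exp(1j * phase).mean()            # net transverse magnetization
print(abs(signal))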
O/S analysis of conceptual space vehicles. Part 1
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1995-01-01
The application of recently developed computer models in determining operational capabilities and support requirements during the conceptual design of proposed space systems is discussed. The models used are the reliability and maintainability (R&M) model, the maintenance simulation model, and the operations and support (O&S) cost model. In the process of applying these models, the R&M and O&S cost models were updated. The more significant enhancements include (1) improved R&M equations for the tank subsystems, (2) the ability to allocate schedule maintenance by subsystem, (3) redefined spares calculations, (4) computing a weighted average of the working days and mission days per month, (5) the use of a position manning factor, and (6) the incorporation into the O&S model of new formulas for computing depot and organizational recurring and nonrecurring training costs and documentation costs, and depot support equipment costs. The case study used is based upon a winged, single-stage, vertical-takeoff vehicle (SSV) designed to deliver to the Space Station Freedom (SSF) a 25,000 lb payload including passengers without a crew.
Modelling milk production from feed intake in dairy cattle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, D.L.
1985-05-01
Predictive models were developed for both Holstein and Jersey cows. Since Holsteins comprised eighty-five percent of the data, the predictive models developed for Holsteins were used for the development of a user-friendly computer model. Predictive models included: milk production (squared multiple correlation .73), natural log (ln) of milk production (.73), four percent fat-corrected milk (.67), ln four percent fat-corrected milk (.68), fat-free milk (.73), ln fat-free milk (.73), dry matter intake (.61), ln dry matter intake (.60), milk fat (.52), and ln milk fat (.56). The predictive models for ln milk production, ln fat-free milk and ln dry matter intake were incorporated into a computer model. The model was written in standard Fortran for use on mainframe or micro-computers. Daily milk production, fat-free milk production, and dry matter intake were predicted on a daily basis with the previous day's dry matter intake serving as an independent variable in the prediction of the daily milk and fat-free milk production. 21 refs.
Computational models for the analysis of three-dimensional internal and exhaust plume flowfields
NASA Technical Reports Server (NTRS)
Dash, S. M.; Delguidice, P. D.
1977-01-01
This paper describes computational procedures developed for the analysis of three-dimensional supersonic ducted flows and multinozzle exhaust plume flowfields. The models/codes embodying these procedures cater to a broad spectrum of geometric situations via the use of multiple reference plane grid networks in several coordinate systems. Shock capturing techniques are employed to trace the propagation and interaction of multiple shock surfaces while the plume interface, separating the exhaust and external flows, and the plume external shock are discretely analyzed. The computational grid within the reference planes follows the trace of streamlines to facilitate the incorporation of finite-rate chemistry and viscous computational capabilities. Exhaust gas properties consist of combustion products in chemical equilibrium. The computational accuracy of the models/codes is assessed via comparisons with exact solutions, results of other codes and experimental data. Results are presented for the flows in two-dimensional convergent and divergent ducts, expansive and compressive corner flows, flow in a rectangular nozzle and the plume flowfields for exhausts issuing out of single and multiple rectangular nozzles.
Rosenfeld, Alan L; Mandelaris, George A; Tardieu, Philippe B
2006-08-01
The purpose of this paper is to expand on part 1 of this series (published in the previous issue) regarding the emerging future of computer-guided implant dentistry. This article will introduce the concept of rapid-prototype medical modeling as well as describe the utilization and fabrication of computer-generated surgical drilling guides used during implant surgery. The placement of dental implants has traditionally been an intuitive process, whereby the surgeon relies on mental navigation to achieve optimal implant positioning. Through rapid-prototype medical modeling and the stereolithographic process, surgical drilling guides (e.g., SurgiGuide) can be created. These guides are generated from a surgical implant plan created with a computer software system that incorporates all relevant prosthetic information from which the surgical plan is developed. The utilization of computer-generated planning and stereolithographically generated surgical drilling guides embraces the concept of collaborative accountability and supersedes traditional mental navigation on all levels of implant therapy.
NASA Astrophysics Data System (ADS)
Zuhrie, M. S.; Basuki, I.; Asto B, I. G. P.; Anifah, L.
2018-01-01
The focus of the research is a teaching module which incorporates manufacturing, planning, mechanical design, control through microprocessor technology, and maneuverability of the robot. Computer-interactive and computer-assisted learning are strategies that emphasize the use of computers and learning aids in teaching and learning activity. This research applied the 4-D research and development model suggested by Thiagarajan et al. (1974), which consists of four stages: Define, Design, Develop, and Disseminate. The research was conducted by applying this development design with the objective of producing a learning tool in the form of intelligent robot modules and kits based on computer-interactive and computer-assisted learning. Data from the Indonesia Robot Contest during the period 2009-2015 show that the modules that have been developed confirm the fourth stage of the development method, dissemination. The modules guide students to produce an intelligent robot tool for teaching based on computer-interactive and computer-assisted learning. Student responses also showed positive feedback on the robotics module and computer-based interactive learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
The November 1, 2017 issue of Cancer Research is dedicated to a collection of computational resource papers in genomics, proteomics, animal models, imaging, and clinical subjects for non-bioinformaticists looking to incorporate computing tools into their work. Scientists at Pacific Northwest National Laboratory have developed P-MartCancer, an open, web-based interactive software tool that enables statistical analyses of peptide or protein data generated from mass-spectrometry (MS)-based global proteomics experiments.
Kostal, Jakub; Voutchkova-Kostal, Adelina
2016-01-19
Using computer models to accurately predict toxicity outcomes is considered to be a major challenge. However, state-of-the-art computational chemistry techniques can now be incorporated in predictive models, supported by advances in mechanistic toxicology and the exponential growth of computing resources witnessed over the past decade. The CADRE (Computer-Aided Discovery and REdesign) platform relies on quantum-mechanical modeling of molecular interactions that represent key biochemical triggers in toxicity pathways. Here, we present an external validation exercise for CADRE-SS, a variant developed to predict the skin sensitization potential of commercial chemicals. CADRE-SS is a hybrid model that evaluates skin permeability using Monte Carlo simulations, assigns reactive centers in a molecule and possible biotransformations via expert rules, and determines reactivity with skin proteins via quantum-mechanical modeling. The results were promising, with an overall concordance of 93% between experimental and predicted values. Comparison to performance metrics yielded by other tools available for this endpoint suggests that CADRE-SS offers distinct advantages for first-round screenings of chemicals and could be used as an in silico alternative to animal tests where permissible by legislative programs.
Boer, S.; Elias, E.; Aarninkhof, S.; Roelvink, D.; Vellinga, T.
2007-01-01
Morphological model computations based on uniform (non-graded) sediment revealed an unrealistically strong scour of the sea floor in the immediate vicinity to the west of Maasvlakte 2. By means of a state-of-the-art graded sediment transport model the effect of natural armouring and sorting of bed material on the scour process has been examined. Sensitivity computations confirm that the development of the scour hole is strongly reduced due to the incorporation of armouring processes, suggesting an approximately 30% decrease in terms of erosion area below the -20 m depth contour. © 2007 ASCE.
NASA Technical Reports Server (NTRS)
Sozen, Mehmet
2003-01-01
In what follows, the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics, will be described. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as the preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
Development of a thermal storage module using modified anhydrous sodium hydroxide
NASA Technical Reports Server (NTRS)
Rice, R. E.; Rowny, P. E.
1980-01-01
The laboratory scale testing of a modified anhydrous NaOH latent heat storage concept for small solar thermal power systems, such as total energy systems utilizing organic Rankine systems, is discussed. A diagnostic test on the thermal energy storage module and an investigation of alternative heat transfer fluids and heat exchange concepts are specifically addressed. A previously developed computer simulation model is modified to predict the performance of the module in a solar total energy system environment. In addition, the computer model is expanded to investigate parametrically the incorporation of a second heat exchanger inside the module which will vaporize and superheat the Rankine cycle power fluid.
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Gates, R. M.; Straayer, J. W.
1975-01-01
The effect of localized structural damping on the excitability of higher-order large space telescope spacecraft modes is investigated. A preprocessor computer program is developed to incorporate Voigt structural joint damping models in a finite-element dynamic model. A postprocessor computer program is developed to select critical modes for low-frequency attitude control problems and for higher-frequency fine-stabilization problems. The selection is accomplished by ranking the flexible modes based on coefficients for rate gyro, position gyro, and optical sensor, and on image-plane motions due to sinusoidal or random PSD force and torque inputs.
Modeling flow at the nozzle of a solid rocket motor
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1991-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which can be solved numerically. The accuracy and the convergence of the solution of the system of equations depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
NASA Astrophysics Data System (ADS)
Payton, Jamie; Barnes, Tiffany; Buch, Kim; Rorrer, Audrey; Zuo, Huifang
2015-07-01
This study is a follow-up to one published in Computer Science Education in 2010 that reported preliminary results showing a positive impact of service learning on student attitudes associated with success and retention in computer science. That paper described how service learning was incorporated into a computer science course in the context of the Students & Technology in Academia, Research, and Service (STARS) Alliance, an NSF-supported broadening participation in computing initiative that aims to diversify the computer science pipeline through innovative pedagogy and inter-institutional partnerships. The current paper describes how the STARS Alliance has expanded to diverse institutions, all using service learning as a vehicle for broadening participation in computing and enhancing attitudes and behaviors associated with student success. Results supported the STARS model of service learning for enhancing computing efficacy and computing commitment and for providing diverse students with many personal and professional development benefits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.G.; Mioduszewski, R.J.
The Chemical Computer Man: Chemical Agent Response Simulation (CARS) is a computer model and simulation program for estimating the dynamic changes in human physiological dysfunction resulting from exposures to chemical-threat nerve agents. The newly developed CARS methodology simulates agent exposure effects on the following five indices of human physiological function: mental, vision, cardio-respiratory, visceral, and limbs. Mathematical models and the application of basic pharmacokinetic principles were incorporated into the simulation so that for each chemical exposure, the relationship between exposure dosage, absorbed dosage (agent blood plasma concentration), and level of physiological response is computed as a function of time. CARS, as a simulation tool, is designed for users with little or no computer-related experience. The model combines maximum flexibility with a comprehensive user-friendly interactive menu-driven system. Users define an exposure problem and obtain immediate results displayed in tabular, graphical, and image formats. CARS has broad scientific and engineering applications, not only in technology for the soldier in the area of Chemical Defense, but also in minimizing animal testing in biomedical and toxicological research and the development of a modeling system for human exposure to hazardous-waste chemicals.
Atomistic calculations of interface elastic properties in noncoherent metallic bilayers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mi Changwen; Jun, Sukky; Kouris, Demitris A.
2008-02-15
The paper describes theoretical and computational studies associated with the interface elastic properties of noncoherent metallic bicrystals. Analytical forms of interface energy, interface stresses, and interface elastic constants are derived in terms of interatomic potential functions. Embedded-atom method potentials are then incorporated into the model to compute these excess thermodynamic variables, using energy minimization in a parallel computing environment. The proposed model is validated by calculating surface thermodynamic variables and comparing them with preexisting data. Next, the interface elastic properties of several fcc-fcc bicrystals are computed. The excess energies and stresses of interfaces are smaller than those on free surfaces of the same crystal orientations. In addition, no negative values of interface stresses are observed. Current results can be applied to various heterogeneous materials where interfaces assume a prominent role in the systems' mechanical behavior.
Computational fluid dynamic modelling of cavitation
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.
1993-01-01
Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.
Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S
2013-01-01
Tinnitus is the perception of sound in the ears or in the head where no external source is present. Sound therapy is one of the most effective techniques for tinnitus treatment that have been proposed. In order to investigate mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input, suggesting that the present framework is promising for modeling tinnitus generation and the effects of sound therapy.
Byron, O
1997-01-01
Computer software such as HYDRO, based upon a comprehensive body of theoretical work, permits the hydrodynamic modeling of macromolecules in solution, which are represented to the computer interface as an assembly of spheres. The uniqueness of any satisfactory resultant model is optimized by incorporating into the modeling procedure the maximal possible number of criteria to which the bead model must conform. An algorithm (AtoB, for atoms to beads) that permits the direct construction of bead models from high resolution x-ray crystallographic or nuclear magnetic resonance data has now been formulated and tested. Models so generated then act as informed starting estimates for the subsequent iterative modeling procedure, thereby hastening the convergence to reasonable representations of solution conformation. Successful application of this algorithm to several proteins shows that predictions of hydrodynamic parameters, including those concerning solvation, can be confirmed. PMID:8994627
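A minimal sketch of the kind of atoms-to-beads coarse-graining an algorithm like AtoB performs, assuming a simple cubic-grid binning with one mass-weighted bead per occupied cell; the published algorithm's handling of bead radii and overlaps may well differ:

    import numpy as np

    def atoms_to_beads(coords, masses, cell=5.0):
        # coords: (N, 3) atomic coordinates (e.g., Angstroms); masses: (N,)
        # One bead per occupied cubic grid cell, placed at the cell's centre of mass.
        idx = np.floor(coords / cell).astype(int)
        cells, inv = np.unique(idx, axis=0, return_inverse=True)
        m = np.zeros(len(cells))
        np.add.at(m, inv, masses)
        centres = np.zeros((len(cells), 3))
        for d in range(3):
            np.add.at(centres[:, d], inv, masses * coords[:, d])
        centres /= m[:, None]
        # one simple choice: bead sphere of equal volume to a grid cell
        radius = cell * (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0)
        return centres, radius, m

The bead coordinates and radius would then seed a hydrodynamic calculation (e.g., in HYDRO) as the informed starting estimate described above.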
NASA Technical Reports Server (NTRS)
Nesbitt, James A.
2001-01-01
A finite-difference computer program (COSIM) has been written which models the one-dimensional, diffusional transport associated with high-temperature oxidation and interdiffusion of overlay-coated substrates. The program predicts concentration profiles for up to three elements in the coating and substrate after various oxidation exposures. Surface recession due to solute loss is also predicted. Ternary cross terms and concentration-dependent diffusion coefficients are taken into account. The program also incorporates a previously-developed oxide growth and spalling model to simulate either isothermal or cyclic oxidation exposures. In addition to predicting concentration profiles after various oxidation exposures, the program can also be used to predict coating life based on a concentration dependent failure criterion (e.g., surface solute content drops to 2%). The computer code is written in FORTRAN and employs numerous subroutines to make the program flexible and easily modifiable to other coating oxidation problems.
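The diffusional core of such a program can be illustrated with a short explicit finite-difference sketch for a single solute with concentration-dependent diffusivity; COSIM itself additionally treats ternary cross terms, surface recession and oxide spalling, so the function below is an illustrative simplification, not the COSIM code:

    import numpy as np

    def diffuse_1d(c, d_of_c, dx, dt, nsteps):
        # Explicit finite differences for dc/dt = d/dx( D(c) * dc/dx ) with
        # zero-flux ends; stability requires dt <= dx**2 / (2 * max(D)).
        c = np.asarray(c, float).copy()
        for _ in range(nsteps):
            D = d_of_c(c)
            Dmid = 0.5 * (D[1:] + D[:-1])   # diffusivity at cell interfaces
            flux = -Dmid * np.diff(c) / dx  # Fickian flux between nodes
            c[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
            c[0] -= dt / dx * flux[0]
            c[-1] += dt / dx * flux[-1]
        return c

    # e.g., a linearly concentration-dependent diffusivity (values invented):
    # profile = diffuse_1d(c0, lambda c: 1e-14 * (1.0 + c), dx=1e-6, dt=0.01, nsteps=1000)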
Incorporation of the TIP4P water model into a continuum solvent for computing solvation free energy
NASA Astrophysics Data System (ADS)
Yang, Pei-Kun
2014-10-01
The continuum solvent model is one of the commonly used strategies to compute solvation free energy, especially for large-scale conformational transitions such as protein folding, or to calculate the binding affinity of protein-protein/ligand interactions. However, the dielectric polarization for computing solvation free energy from the continuum solvent is different than that obtained from molecular dynamics simulations. To mimic the dielectric polarization surrounding a solute in molecular dynamics simulations, the first-shell water molecules were modeled using a charge distribution of TIP4P in a hard sphere; the time-averaged charge distribution from the first-shell water molecules was estimated based on the coordination number of the solute, and the orientation distributions of the first-shell waters and the intermediate water molecules were treated as those of a bulk solvent. Based on this strategy, an equation describing the solvation free energy of ions was derived.
NASA Technical Reports Server (NTRS)
Streett, C. L.
1981-01-01
A viscous-inviscid interaction method has been developed by using a three-dimensional integral boundary-layer method which produces results in good agreement with a finite-difference method in a fraction of the computer time. The integral method is stable and robust and incorporates a model for computation in a small region of streamwise separation. A locally two-dimensional wake model, accounting for thickness and curvature effects, is also included in the interaction procedure. Computation time spent in converging an interacted result is often only slightly greater than that required to converge an inviscid calculation. Results are shown from the interaction method, run at experimental angle of attack, Reynolds number, and Mach number, on a wing-body test case for which viscous effects are large. Agreement with experiment is good; in particular, the present wake model improves prediction of the spanwise lift distribution and lower surface cove pressure.
NASA Technical Reports Server (NTRS)
Armstrong, Richard; Hardman, Molly
1991-01-01
A snow model that supports the daily, operational analysis of global snow depth and age has been developed. It provides improved spatial interpolation of surface reports by incorporating digital elevation data, and by the application of regionalized variables (kriging) through the use of a global snow depth climatology. Where surface observations are inadequate, the model applies satellite remote sensing. Techniques for extrapolation into data-void mountain areas and a procedure to compute snow melt are also contained in the model.
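A hedged sketch of the interpolation idea, an elevation trend plus spatial spreading of the residuals, is shown below; the operational model applies kriging against a global snow-depth climatology, whereas this simplification substitutes inverse-distance weighting for the kriging step and invents all parameter values:

    import numpy as np

    def snow_grid(obs_xy, obs_depth, obs_elev, grid_xy, grid_elev, power=2.0):
        # 1) Elevation trend (a stand-in for the snow-depth climatology).
        a, b = np.polyfit(obs_elev, obs_depth, 1)
        resid = obs_depth - (a * obs_elev + b)
        # 2) Spread residuals spatially; ordinary kriging would replace this
        #    inverse-distance step in the operational model.
        d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        w /= w.sum(axis=1, keepdims=True)
        return np.maximum(a * grid_elev + b + w @ resid, 0.0)  # clamp: no negative depth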
A visiting scientist program in atmospheric sciences for the Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Davis, M. H.
1989-01-01
A visiting scientist program was conducted in the atmospheric sciences and related areas at the Goddard Laboratory for Atmospheres. Research was performed in mathematical analysis as applied to computer modeling of the atmospheres; development of atmospheric modeling programs; analysis of remotely sensed atmospheric, surface, and oceanic data and its incorporation into atmospheric models; development of advanced remote sensing instrumentation; and related research areas. The specific research efforts are detailed by tasks.
Turbulence simulation mechanization for Space Shuttle Orbiter dynamics and control studies
NASA Technical Reports Server (NTRS)
Tatom, F. B.; King, R. L.
1977-01-01
The current version of the NASA turbulence simulation model in the form of a digital computer program, TBMOD, is described. The logic of the program is discussed and all inputs and outputs are defined. An alternate method of shear simulation suitable for incorporation into the model is presented. The simulation is based on a von Karman spectrum and the assumption of isotropy. The resulting spectral density functions for the shear model are included.
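One way to realize such a simulation, synthesizing a gust record whose power spectral density follows a common one-sided form of the von Karman longitudinal spectrum, is sketched below; the spectral form, the normalization and the parameter values are illustrative assumptions rather than TBMOD's actual mechanization:

    import numpy as np

    def von_karman_gust(n, dt, sigma=1.0, length=200.0, airspeed=50.0, seed=0):
        # n: record length (use an even n); sigma: turbulence intensity (m/s)
        # length: turbulence length scale (m); airspeed: vehicle speed (m/s)
        rng = np.random.default_rng(seed)
        f = np.fft.rfftfreq(n, dt)
        s = (4.0 * sigma**2 * length / airspeed) / \
            (1.0 + (1.339 * 2.0 * np.pi * f * length / airspeed) ** 2) ** (5.0 / 6.0)
        df = 1.0 / (n * dt)
        z = np.zeros(len(f), dtype=complex)
        g = rng.standard_normal((2, len(f) - 1))
        # scale random spectral amplitudes so the sample variance ~ sigma**2
        z[1:] = (g[0] + 1j * g[1]) * (n / 2.0) * np.sqrt(s[1:] * df)
        return np.fft.irfft(z, n)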
Toward A Simulation-Based Tool for the Treatment of Vocal Fold Paralysis
Mittal, Rajat; Zheng, Xudong; Bhardwaj, Rajneesh; Seo, Jung Hee; Xue, Qian; Bielamowicz, Steven
2011-01-01
Advances in high-performance computing are enabling a new generation of software tools that employ computational modeling for surgical planning. Surgical management of laryngeal paralysis is one area where such computational tools could have a significant impact. The current paper describes a comprehensive effort to develop a software tool for planning medialization laryngoplasty where a prosthetic implant is inserted into the larynx in order to medialize the paralyzed vocal fold (VF). While this is one of the most common procedures used to restore voice in patients with VF paralysis, it has a relatively high revision rate, and the tool being developed is expected to improve surgical outcomes. This software tool models the biomechanics of airflow-induced vibration in the human larynx and incorporates sophisticated approaches for modeling the turbulent laryngeal flow, the complex dynamics of the VFs, as well as the production of voiced sound. The current paper describes the key elements of the modeling approach, presents computational results that demonstrate the utility of the approach and also describes some of the limitations and challenges. PMID:21556320
Modeling and Validation of Microwave Ablations with Internal Vaporization
Chiang, Jason; Birla, Sohan; Bedoya, Mariajose; Jones, David; Subbiah, Jeyam; Brace, Christopher L.
2014-01-01
Numerical simulation is increasingly being utilized for computer-aided design of treatment devices, analysis of ablation growth, and clinical treatment planning. Simulation models to date have incorporated electromagnetic wave propagation and heat conduction, but not other relevant physics such as water vaporization and mass transfer. Such physical changes are particularly noteworthy during the intense heat generation associated with microwave heating. In this work, a numerical model was created that integrates microwave heating with water vapor generation and transport by using porous media assumptions in the tissue domain. The heating physics of the water vapor model was validated through temperature measurements taken at locations 5, 10 and 20 mm away from the heating zone of the microwave antenna in a homogenized ex vivo bovine liver setup. The cross-sectional area of water vapor transport was validated through intra-procedural computed tomography (CT) during microwave ablations in homogenized ex vivo bovine liver. Iso-density contours from CT images were compared to vapor concentration contours from the numerical model at intermittent time points using the Jaccard Index. In general, there was an improving correlation in ablation size dimensions as the ablation procedure proceeded, with a Jaccard Index of 0.27, 0.49, 0.61, 0.67 and 0.69 at 1, 2, 3, 4, and 5 minutes. This study demonstrates the feasibility and validity of incorporating water vapor concentration into thermal ablation simulations and validating such models experimentally. PMID:25330481
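The overlap metric used for this validation is straightforward to reproduce; a small sketch follows, with placeholder threshold names rather than the study's values:

    import numpy as np

    def jaccard(mask_a, mask_b):
        # Jaccard index |A intersect B| / |A union B| for two boolean images.
        a = np.asarray(mask_a, bool)
        b = np.asarray(mask_b, bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    # e.g., model vapor-concentration contour vs. CT iso-density contour
    # (c_thresh and rho_thresh are hypothetical placeholders):
    # score = jaccard(vapor_conc >= c_thresh, ct_density <= rho_thresh)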
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
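The Noisy-OR combination itself is compact; a sketch under the usual independence assumption, with an illustrative leak probability, is:

    import numpy as np

    def noisy_or(source_probs, leak=0.01):
        # P(edge) = 1 - (1 - leak) * prod_k (1 - p_k): support from any one
        # information source is enough to raise the prior on the edge.
        p = np.asarray(source_probs, float)
        return 1.0 - (1.0 - leak) * np.prod(1.0 - p)

    # Consensus structure prior over all candidate edges, combining K source
    # matrices (pathway databases, GO terms, protein domain data, ...):
    # prior[i, j] = noisy_or([src[i, j] for src in sources])

The resulting prior matrix then biases Bayesian network structure inference toward biologically plausible edges, as described above.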
NASA Technical Reports Server (NTRS)
Pao, J. L.; Mehrotra, S. C.; Lan, C. E.
1982-01-01
A computer code based on an improved vortex filament/vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separations is developed. The code is applicable to cambered wings, straked wings, or wings with leading-edge vortex flaps at subsonic speeds. The prediction of lifting pressure distribution and the computer time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) arbitrary camber shape may be defined and an option for exactly defining leading-edge flap geometry is also provided; (2) the side-edge vortex system is incorporated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zhenhua; Rose, Adam Z.; Prager, Fynnwin
The state of the art approach to economic consequence analysis (ECA) is computable general equilibrium (CGE) modeling. However, such models contain thousands of equations and cannot readily be incorporated into computerized systems used by policy analysts to yield estimates of economic impacts of various types of transportation system failures due to natural hazards, human-related attacks or technological accidents. This paper presents a reduced-form approach to simplify the analytical content of CGE models to make them more transparent and enhance their utilization potential. The reduced-form CGE analysis is conducted by first running simulations one hundred times, varying key parameters, such as magnitude of the initial shock, duration, location, remediation, and resilience, according to a Latin Hypercube sampling procedure. Statistical analysis is then applied to the "synthetic data" results in the form of both ordinary least squares and quantile regression. The analysis yields linear equations that are incorporated into a computerized system and utilized along with Monte Carlo simulation methods for propagating uncertainties in economic consequences. Although our demonstration and discussion focuses on aviation system disruptions caused by terrorist attacks, the approach can be applied to a broad range of threat scenarios.
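The two building blocks, Latin Hypercube sampling of the key parameters and a least-squares fit to the synthetic runs, can be sketched as follows; run_cge is a hypothetical placeholder for the expensive CGE evaluation, and the parameter bounds are invented for illustration:

    import numpy as np

    def latin_hypercube(n, bounds, seed=0):
        # n stratified samples over len(bounds) parameters (shock size,
        # duration, resilience, ...), one stratum per sample per dimension.
        rng = np.random.default_rng(seed)
        d = len(bounds)
        strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
        u = (strata + rng.random((n, d))) / n
        lo, hi = np.asarray(bounds, float).T
        return lo + u * (hi - lo)

    x = latin_hypercube(100, [(0.1, 1.0), (1.0, 30.0), (0.0, 0.9)])
    # y = np.array([run_cge(*row) for row in x])  # run_cge: placeholder CGE call
    # a = np.column_stack([np.ones(len(x)), x])   # reduced form: y ~ a @ beta
    # beta, *_ = np.linalg.lstsq(a, y, rcond=None)

The fitted linear equations (here ordinary least squares; the paper also uses quantile regression) then stand in for the full CGE model inside the Monte Carlo uncertainty propagation.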
NASA Astrophysics Data System (ADS)
Wenger, Cornelia; Salvador, Ricardo; Basser, Peter J.; Miranda, Pedro C.
2015-09-01
Tumor treating fields (TTFields) are a non-invasive, anti-mitotic and approved treatment for recurrent glioblastoma multiforme (GBM) patients. In vitro studies have shown that inhibition of cell division in glioma is achieved when the applied alternating electric field has a frequency in the range of 200 kHz and an amplitude of 1-3 V cm⁻¹. Our aim is to calculate the electric field distribution in the brain during TTFields therapy and to investigate the dependence of these predictions on the heterogeneous, anisotropic dielectric properties used in the computational model. A realistic head model was developed by segmenting MR images and by incorporating anisotropic conductivity values for the brain tissues. The finite element method (FEM) was used to solve for the electric potential within a volume mesh that consisted of the head tissues, a virtual lesion with an active tumour shell surrounding a necrotic core, and the transducer arrays. The induced electric field distribution is highly non-uniform. Average field strength values are slightly higher in the tumour when incorporating anisotropy, by about 10% or less. A sensitivity analysis with respect to the conductivity and permittivity of head tissues shows a variation in field strength of less than 42% in brain parenchyma and in the tumour, for values within the ranges reported in the literature. Comparing results to a previously developed head model suggests significant inter-subject variability. This modelling study predicts that during treatment with TTFields the electric field in the tumour exceeds 1 V cm⁻¹, independent of modelling assumptions. In the future, computational models may be useful to optimize delivery of TTFields.
Wenger, Cornelia; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C
2015-09-21
Tumor treating fields (TTFields) are a non-invasive, anti-mitotic and approved treatment for recurrent glioblastoma multiforme (GBM) patients. In vitro studies have shown that inhibition of cell division in glioma is achieved when the applied alternating electric field has a frequency in the range of 200 kHz and an amplitude of 1-3 V cm(-1). Our aim is to calculate the electric field distribution in the brain during TTFields therapy and to investigate the dependence of these predictions on the heterogeneous, anisotropic dielectric properties used in the computational model. A realistic head model was developed by segmenting MR images and by incorporating anisotropic conductivity values for the brain tissues. The finite element method (FEM) was used to solve for the electric potential within a volume mesh that consisted of the head tissues, a virtual lesion with an active tumour shell surrounding a necrotic core, and the transducer arrays. The induced electric field distribution is highly non-uniform. Average field strength values are slightly higher in the tumour when incorporating anisotropy, by about 10% or less. A sensitivity analysis with respect to the conductivity and permittivity of head tissues shows a variation in field strength of less than 42% in brain parenchyma and in the tumour, for values within the ranges reported in the literature. Comparing results to a previously developed head model suggests significant inter-subject variability. This modelling study predicts that during treatment with TTFields the electric field in the tumour exceeds 1 V cm(-1), independent of modelling assumptions. In the future, computational models may be useful to optimize delivery of TTFields.
Wenger, Cornelia; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C
2015-01-01
Tumor Treating Fields (TTFields) are a non-invasive, anti-mitotic and approved treatment for recurrent glioblastoma multiforme (GBM) patients. In vitro studies have shown that inhibition of cell division in glioma is achieved when the applied alternating electric field has a frequency in the range of 200 kHz and an amplitude of 1 - 3 V/cm. Our aim is to calculate the electric field distribution in the brain during TTFields therapy and to investigate the dependence of these predictions on the heterogeneous, anisotropic dielectric properties used in the computational model. A realistic head model was developed by segmenting MR images and by incorporating anisotropic conductivity values for the brain tissues. The finite element method (FEM) was used to solve for the electric potential within a volume mesh that consisted of the head tissues, a virtual lesion with an active tumour shell surrounding a necrotic core, and the transducer arrays. The induced electric field distribution is highly non-uniform. Average field strength values are slightly higher in the tumour when incorporating anisotropy, by about 10% or less. A sensitivity analysis with respect to the conductivity and permittivity of head tissues shows a variation in field strength of less than 42% in brain parenchyma and in the tumour, for values within the ranges reported in the literature. Comparing results to a previously developed head model suggests significant inter-subject variability. This modelling study predicts that during treatment with TTFields the electric field in the tumour exceeds 1 V/cm, independent of modelling assumptions. In the future, computational models may be useful to optimize delivery of TTFields. PMID:26350296
CatSim: a new computer assisted tomography simulation environment
NASA Astrophysics Data System (ADS)
De Man, Bruno; Basu, Samit; Chandra, Naveen; Dunham, Bruce; Edic, Peter; Iatrou, Maria; McOlash, Scott; Sainath, Paavana; Shaughnessy, Charlie; Tower, Brendon; Williams, Eugene
2007-03-01
We present a new simulation environment for X-ray computed tomography, called CatSim. CatSim provides a research platform for GE researchers and collaborators to explore new reconstruction algorithms, CT architectures, and X-ray source or detector technologies. The main requirements for this simulator are accurate physics modeling, low computation times, and geometrical flexibility. CatSim allows simulating complex analytic phantoms, such as the FORBILD phantoms, including boxes, ellipsoids, elliptical cylinders, cones, and cut planes. CatSim incorporates polychromaticity, realistic quantum and electronic noise models, finite focal spot size and shape, finite detector cell size, detector cross-talk, detector lag or afterglow, bowtie filtration, finite detector efficiency, non-linear partial volume, scatter (variance-reduced Monte Carlo), and absorbed dose. We present an overview of CatSim along with a number of validation experiments.
Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc
2016-10-01
Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
NASA Technical Reports Server (NTRS)
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednacyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining plastic behavior in two-phased CMSX-4 Ni-based superalloys is developed on a finite element analysis (FEA) framework employing a crystal plasticity constitutive model that can capture the microstructural scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, stand-alone GMC is validated by analyzing a repeating unit cell (RUC) as a two-phased sample with 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase and comparing the results with those predicted by FEA models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. High computational savings, at the expense of some accuracy in the components of local tensor field quantities, were obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural scale details of the field quantities.
Extreme learning machine for reduced order modeling of turbulent geophysical flows.
San, Omer; Maulik, Romit
2018-04-01
We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
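The extreme learning machine itself reduces to a random hidden layer with a least-squares readout; the sketch below is a generic ELM, with the mapping from resolved modal coefficients to an eddy-viscosity target assumed for illustration rather than taken from the paper:

    import numpy as np

    def elm_fit(x, y, n_hidden=50, seed=0):
        # Random, untrained hidden layer; only the linear readout is solved
        # for, by least squares, which is what makes ELM training cheap.
        rng = np.random.default_rng(seed)
        w = rng.standard_normal((x.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        h = np.tanh(x @ w + b)
        beta, *_ = np.linalg.lstsq(h, y, rcond=None)
        return w, b, beta

    def elm_predict(x, w, b, beta):
        return np.tanh(x @ w + b) @ beta

    # e.g., x = resolved POD modal coefficients, y = eddy-viscosity closure
    # target (a stand-in for the closure quantity used in the paper).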
Multiscale Modeling of Damage Processes in fcc Aluminum: From Atoms to Grains
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Saether, E.; Yamakov, V.
2008-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, current analysis is limited to small domains and increasing the size of the MD domain quickly presents intractable computational demands. A preferred approach to surmount this computational limitation has been to combine continuum mechanics-based modeling procedures, such as the finite element method (FEM), with MD analyses thereby reducing the region of atomic scale refinement. Such multiscale modeling strategies can be divided into two broad classifications: concurrent multiscale methods that directly incorporate an atomistic domain within a continuum domain and sequential multiscale methods that extract an averaged response from the atomistic simulation for later use as a constitutive model in a continuum analysis.
Extreme learning machine for reduced order modeling of turbulent geophysical flows
NASA Astrophysics Data System (ADS)
San, Omer; Maulik, Romit
2018-04-01
We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
2014-01-01
Background: Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. Methods: We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. Results: In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. Conclusions: The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes. PMID:24888356
Sadatsafavi, Mohsen; Marra, Carlo; Aaron, Shawn; Bryan, Stirling
2014-06-03
Cost-effectiveness analyses (CEAs) that use patient-specific data from a randomized controlled trial (RCT) are popular, yet such CEAs are criticized because they neglect to incorporate evidence external to the trial. A popular method for quantifying uncertainty in a RCT-based CEA is the bootstrap. The objective of the present study was to further expand the bootstrap method of RCT-based CEA for the incorporation of external evidence. We utilize the Bayesian interpretation of the bootstrap and derive the distribution for the cost and effectiveness outcomes after observing the current RCT data and the external evidence. We propose simple modifications of the bootstrap for sampling from such posterior distributions. In a proof-of-concept case study, we use data from a clinical trial and incorporate external evidence on the effect size of treatments to illustrate the method in action. Compared to the parametric models of evidence synthesis, the proposed approach requires fewer distributional assumptions, does not require explicit modeling of the relation between external evidence and outcomes of interest, and is generally easier to implement. A drawback of this approach is potential computational inefficiency compared to the parametric Bayesian methods. The bootstrap method of RCT-based CEA can be extended to incorporate external evidence, while preserving its appealing features such as no requirement for parametric modeling of cost and effectiveness outcomes.
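One simple way to realize such a modification, Dirichlet-weighted (Bayesian) bootstrap replicates importance-weighted by an external normal opinion on the effect difference, is sketched below; the authors' exact resampling scheme differs in detail, and all argument names are illustrative:

    import numpy as np

    def bb_cea(c1, e1, c0, e0, ext_mean, ext_sd, n_rep=5000, seed=0):
        # c1/e1: per-patient cost/effect in the treatment arm; c0/e0: control.
        # ext_mean/ext_sd: external evidence on the effect difference.
        rng = np.random.default_rng(seed)
        reps = np.empty((n_rep, 2))
        for r in range(n_rep):
            w1 = rng.dirichlet(np.ones(len(e1)))   # Bayesian bootstrap weights
            w0 = rng.dirichlet(np.ones(len(e0)))
            reps[r] = (w1 @ c1 - w0 @ c0, w1 @ e1 - w0 @ e0)
        # Tilt replicates toward the external evidence (importance weighting).
        lik = np.exp(-0.5 * ((reps[:, 1] - ext_mean) / ext_sd) ** 2)
        idx = rng.choice(n_rep, n_rep, p=lik / lik.sum())
        return reps[idx]  # draws from the evidence-adjusted (cost, effect) posterior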
Abstracts of ARI Research Publications, FY 1974 and 1975
1979-10-01
may obtain these documents from the National Technical Information Service (NTIS), Department of Commerce, Springfield, Va., 22151. The six-digit AD...Siegel, A. I., Wolf, J. J., & Leahy, W. R. (Applied Psychological Services, Inc.). A digital simulation model of message handling in the Tactical...inherent in the mission of interest, (b) incorporate these into a logic for a digital simulation model, and (c) develop a computer program reflecting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves about 200 times of speed-up compared to our previous work using two-stage MCMC.
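The key ingredient, folding the GP's own predictive variance into the likelihood, can be illustrated with a small zero-mean GP sketch; the kernel choice and hyperparameters are assumptions for illustration, not the paper's settings:

    import numpy as np

    def gp_predict(x, y, xs, ell=1.0, sf=1.0, jitter=1e-8):
        # Zero-mean GP with a squared-exponential kernel; returns the
        # predictive mean and variance at test inputs xs.
        def k(a, b):
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)
        kxx = k(x, x) + jitter * np.eye(len(x))
        kxs = k(xs, x)
        mu = kxs @ np.linalg.solve(kxx, y)
        var = sf ** 2 - np.einsum('ij,ji->i', kxs, np.linalg.solve(kxx, kxs.T))
        return mu, np.maximum(var, 0.0)

    # Inside the MCMC likelihood, inflate the error variance with the
    # surrogate's own uncertainty so the posterior is not over-confident:
    # sigma_eff**2 = sigma_obs**2 + var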
An underwater light attenuation scheme for marine ecosystem models.
Penta, Bradley; Lee, Zhongping; Kudela, Raphael M; Palacios, Sherry L; Gray, Deric J; Jolliff, Jason K; Shulman, Igor G
2008-10-13
Simulation of underwater light is essential for modeling marine ecosystems. A new model of underwater light attenuation is presented and compared with previous models. In situ data collected in Monterey Bay, CA during September 2006 are used for validation. It is demonstrated that while the new light model is computationally simple and efficient, it maintains accuracy and flexibility. When this light model is incorporated into an ecosystem model, the correlation between modeled and observed coastal chlorophyll is improved over an eight-year time period, while the simulation of a deep chlorophyll maximum demonstrates the effect of the new model at depth.
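A minimal attenuation scheme of this general family, water plus chlorophyll-dependent extinction integrated layer by layer, might look as follows; the coefficients are textbook-style placeholders, not the calibrated values of the published scheme:

    import numpy as np

    def par_profile(par0, chl, dz, kw=0.04, kc=0.025):
        # par0: surface PAR; chl: per-layer chlorophyll (mg/m^3); dz: layer
        # thickness (m). kw: clear-water attenuation (1/m); kc: chlorophyll-
        # specific attenuation (m^2 per mg Chl). Values are illustrative.
        k = kw + kc * np.asarray(chl, float)
        return par0 * np.exp(-np.cumsum(k * dz))

Feeding the resulting profile to the ecosystem model's growth terms couples light to simulated chlorophyll, which is what allows the scheme to shape features like the deep chlorophyll maximum noted above.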
Ali, A F; Taha, M M Reda; Thornton, G M; Shrive, N G; Frank, C B
2005-06-01
In normal daily activities, ligaments are subjected to repeated loads, and respond to this environment with creep and fatigue. While progressive recruitment of the collagen fibers is responsible for the toe region of the ligament stress-strain curve, recruitment also represents an elegant feature to help ligaments resist creep. The use of artificial intelligence techniques in computational modeling allows a large number of parameters and their interactions to be incorporated beyond the capacity of classical mathematical models. The objective of the work described here is to demonstrate a tool for modeling creep of the rabbit medial collateral ligament that can incorporate the different parameters while quantifying the effect of collagen fiber recruitment during creep. An intelligent algorithm was developed to predict ligament creep. The modeling is performed in two steps: first, the ill-defined fiber recruitment is quantified using fuzzy logic; second, this fiber recruitment is incorporated along with creep stress and creep time to model creep using an adaptive neurofuzzy inference system. The model was trained and tested using an experimental database including creep tests and crimp image analysis. The model confirms that quantification of fiber recruitment is important for accurate prediction of ligament creep behavior at physiological loads.
Levels of detail analysis of microwave scattering from human head models for brain stroke detection
2017-01-01
In this paper, we have presented a microwave scattering analysis from multiple human head models. This study incorporates different levels of detail in the human head models and their effect on the microwave scattering phenomenon. Two levels of detail are taken into account: (i) a simplified ellipse-shaped head model and (ii) an anatomically realistic head model, both implemented using 2-D geometry. In addition, the heterogeneous and frequency-dispersive behavior of the brain tissues has also been incorporated in our head models. It is identified during this study that the microwave scattering phenomenon changes significantly once the complexity of the head model is increased by incorporating more details from a magnetic resonance imaging database. It is also found that the microwave scattering results match in both types of head model (i.e., geometrically simple and anatomically realistic) once the measurements are made in the structurally simplified regions. However, the results diverge considerably in the complex areas of the brain due to the arbitrarily shaped interfaces of tissue layers in the anatomically realistic head model. After incorporating the various levels of detail, the solution of the subject microwave scattering problem and the measurement of transmitted and backscattered signals were obtained using the finite element method. Mesh convergence analysis was also performed to achieve error-free results with a minimum number of mesh elements and fewer degrees of freedom, in a fast computational time. The results were promising and the E-field values converged for both simple and complex geometrical models. However, the E-field difference between the two types of head model at the same reference point differed considerably in magnitude: at a complex location, a difference of 0.04236 V/m was measured, compared to 0.00197 V/m at a simple location. This study also contributes a comparison between direct and iterative solvers for the subject microwave scattering problem with respect to computational time and memory requirements. It is seen from this study that microwave imaging may effectively be utilized for the detection, localization and differentiation of different types of brain stroke. The simulation results verified that microwave imaging can be efficiently exploited to study the significant contrast between electric field values of normal and abnormal brain tissues for the investigation of brain anomalies. In the end, a specific absorption rate analysis was carried out to compare the effects of microwave signals on different types of head model using a factor of safety for brain tissues. After careful study of various inversion methods in practice for microwave head imaging, it is also suggested that the contrast source inversion method may be more suitable and computationally efficient for such problems. PMID:29177115
NASA Technical Reports Server (NTRS)
Middleton, Troy F.; Balla, Robert J.; Baurle, Robert A.; Wilson, Lloyd G.
2008-01-01
Under the Propulsion Discipline of NASA's Fundamental Aeronautics Program's Hypersonics Project, a test apparatus for testing a scramjet isolator model is being constructed at NASA's Langley Research Center. The test apparatus will incorporate a 1-inch by 2-inch by 15-inch-long scramjet isolator model supplied with 2.1 lbm/sec of unheated dry air through a Mach 2.5 converging-diverging nozzle. The planned research will incorporate progressively more challenging measurement techniques to characterize the flow field within the isolator, concluding with the application of the Laser-Induced Thermal Acoustic (LITA) measurement technique. The primary goal of this research is to use the data acquired to validate Computational Fluid Dynamics (CFD) models employed to characterize the complex flow field of a scramjet isolator. This paper describes the test apparatus being constructed, pre-test CFD simulations, and the LITA measurement technique.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
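A stripped-down version of such a bootstrap-based probabilistic sensitivity analysis, here reduced to paired patient-level incremental cost and effect data and a net-monetary-benefit decision rule, could read as follows; the willingness-to-pay threshold and argument names are illustrative assumptions:

    import numpy as np

    def psa_bootstrap(d_cost, d_effect, wtp=50000.0, n_draws=10000, seed=0):
        # d_cost, d_effect: per-patient incremental cost and effect arrays.
        # Returns the probability that the strategy is cost-effective at wtp.
        rng = np.random.default_rng(seed)
        n = len(d_cost)
        hits = 0
        for _ in range(n_draws):
            i = rng.integers(0, n, n)  # resample patients with replacement
            hits += wtp * d_effect[i].mean() - d_cost[i].mean() > 0
        return hits / n_draws

Sweeping wtp over a range of thresholds turns the same draws into a cost-effectiveness acceptability curve, one common way such probabilistic results are reported.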
Oxygen diffusion model of the mixed (U,Pu)O2 ± x: Assessment and application
NASA Astrophysics Data System (ADS)
Moore, Emily; Guéneau, Christine; Crocombette, Jean-Paul
2017-03-01
The uranium-plutonium (U,Pu)O2 ± x mixed oxide (MOX) is used as a nuclear fuel in some light water reactors and considered for future reactor generations. To gain insight into fuel restructuring, which occurs during the fuel lifetime, as well as into possible accident scenarios, understanding of the thermodynamic and kinetic behavior is crucial. A comprehensive evaluation of thermo-kinetic properties is incorporated in a computational CALPHAD-type model. The present DICTRA-based model describes oxygen diffusion across the whole range of plutonium, uranium and oxygen compositions and temperatures by incorporating vacancy and interstitial migration pathways for oxygen. The self- and chemical-diffusion coefficients are assessed for the binary UO2 ± x and PuO2 - x systems and the description is extended to the ternary mixed oxide (U,Pu)O2 ± x by extrapolation. A simulation to validate the applicability of this model is considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fellinger, Michael R.; Hector, Jr., Louis G.; Trinkle, Dallas R.
In this study, we compute changes in the lattice parameters and elastic stiffness coefficients Cij of body-centered tetragonal (bct) Fe due to Al, B, C, Cu, Mn, Si, and N solutes. Solute strain misfit tensors determine changes in the lattice parameters as well as strain contributions to the changes in the Cij. We also compute chemical contributions to the changes in the Cij, and show that the sum of the strain and chemical contributions agrees with more computationally expensive direct calculations that simultaneously incorporate both contributions. Octahedral interstitial solutes, with C being the most important addition in steels, must be present to stabilize the bct phase over the body-centered cubic phase. We therefore compute the effects of interactions between interstitial C solutes and substitutional solutes on the bct lattice parameters and Cij for all possible solute configurations in the dilute limit, and thermally average the results to obtain effective changes in properties due to each solute. Finally, the computed data can be used to estimate solute-induced changes in mechanical properties such as strength and ductility, and can be directly incorporated into mesoscale simulations of multiphase steels to model solute effects on the bct martensite phase.
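The thermal averaging over solute configurations mentioned above is a Boltzmann-weighted sum. A minimal sketch follows, with made-up configuration energies and property changes standing in for the computed values.

    import numpy as np

    kB = 8.617e-5  # Boltzmann constant, eV/K

    def thermal_average(energies, values, T):
        """Boltzmann-weighted average <v> = sum_i w_i v_i, w_i ~ exp(-E_i / kB T)."""
        e = np.asarray(energies) - np.min(energies)  # shift for numerical stability
        w = np.exp(-e / (kB * T))
        w /= w.sum()
        return np.dot(w, values)

    # Illustrative: relative energy (eV) and delta-C11 (GPa) for three configurations.
    energies = [0.00, 0.05, 0.12]
    delta_c11 = [-8.0, -5.5, -3.0]
    print(f"<dC11> at 300 K: {thermal_average(energies, delta_c11, 300):.2f} GPa")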
NASA Astrophysics Data System (ADS)
Spak, S.; Pooley, M.
2012-12-01
The next generation of coupled human and earth systems models promises immense potential and grand challenges as they transition toward new roles as core tools for defining and living within planetary boundaries. New frontiers in community model development include not only computational, organizational, and geophysical process questions, but also the twin objectives of more meaningfully integrating the human dimension and extending applicability to informing policy decisions on a range of new and interconnected issues. We approach these challenges by posing key policy questions that require more comprehensive coupled human and geophysical models, identify necessary model and organizational processes and outputs, and work backwards to determine design criteria in response to these needs. We find that modular community earth system model design must:
* seamlessly scale in space (global to urban) and time (nowcasting to paleo-studies), fully coupled across all component systems
* automatically differentiate to provide complete coupled forward and adjoint models for sensitivity studies, optimization applications, and 4DVAR assimilation across Earth and human observing systems
* incorporate diagnostic tools to quantify uncertainty in couplings, and in how human activity affects them
* integrate accessible community development and application with JIT-compilation, cloud computing, game-oriented interfaces, and crowd-sourced problem-solving
We outline accessible near-term objectives toward these goals, and describe attempts to incorporate these design objectives in recent pilot activities using atmosphere-land-ocean-biosphere-human models (WRF-Chem, IBIS, UrbanSim) at urban and regional scales for policy applications in climate, energy, and air quality.
Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose
2010-01-01
The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.
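The generic model template described above amounts to a fixed interface contract between the experiment engine and any BMI model. Below is a minimal sketch of such a contract; the method names and the linear decoder are hypothetical, not the actual CW interface.

    from abc import ABC, abstractmethod
    import numpy as np

    class BMIModelTemplate(ABC):
        """Common interface so the experiment engine can swap models freely."""

        @abstractmethod
        def configure(self, params: dict) -> None:
            """Set model parameters before an experiment run."""

        @abstractmethod
        def process(self, neural_sample: np.ndarray) -> np.ndarray:
            """Map a window of brain signals to a motor-control command."""

    class LinearDecoder(BMIModelTemplate):
        def configure(self, params):
            self.w = np.asarray(params["weights"])

        def process(self, neural_sample):
            return self.w @ neural_sample

    decoder = LinearDecoder()
    decoder.configure({"weights": np.eye(2)})
    print(decoder.process(np.array([0.5, -0.2])))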
Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1993-01-01
A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
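A reduced mechanism of the kind described is ultimately a small ODE system for species concentrations. As a minimal sketch, the code below integrates a single global one-step reaction with an Arrhenius rate; the rate parameters are illustrative, standing in for a real reduced mechanism.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, a=1.0e6, ea_over_r=8000.0, temp=1500.0):
        """One-step fuel + oxidizer -> products with an Arrhenius rate (illustrative)."""
        fuel, ox = y
        rate = a * np.exp(-ea_over_r / temp) * fuel * ox
        return [-rate, -rate]

    # Integrate normalized fuel and oxidizer concentrations over 1 ms.
    sol = solve_ivp(rhs, (0.0, 1e-3), [1.0, 2.0], rtol=1e-8)
    print(f"fuel remaining after 1 ms: {sol.y[0, -1]:.4f}")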
Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing
NASA Astrophysics Data System (ADS)
Kumar, Suhas; Strachan, John Paul; Williams, R. Stanley
2017-08-01
At present, machine learning systems use simplified neuron models that lack the rich nonlinear phenomena observed in biological systems, which display spatio-temporal cooperative dynamics. There is evidence that neurons operate in a regime called the edge of chaos that may be central to complexity, learning efficiency, adaptability and analogue (non-Boolean) computation in brains. Neural networks have exhibited enhanced computational complexity when operated at the edge of chaos, and networks of chaotic elements have been proposed for solving combinatorial or global optimization problems. Thus, a source of controllable chaotic behaviour that can be incorporated into a neural-inspired circuit may be an essential component of future computational systems. Such chaotic elements have been simulated using elaborate transistor circuits that simulate known equations of chaos, but an experimental realization of chaotic dynamics from a single scalable electronic device has been lacking. Here we describe niobium dioxide (NbO2) Mott memristors each less than 100 nanometres across that exhibit both a nonlinear-transport-driven current-controlled negative differential resistance and a Mott-transition-driven temperature-controlled negative differential resistance. Mott materials have a temperature-dependent metal-insulator transition that acts as an electronic switch, which introduces a history-dependent resistance into the device. We incorporate these memristors into a relaxation oscillator and observe a tunable range of periodic and chaotic self-oscillations. We show that the nonlinear current transport coupled with thermal fluctuations at the nanoscale generates chaotic oscillations. Such memristors could be useful in certain types of neural-inspired computation by introducing a pseudo-random signal that prevents global synchronization and could also assist in finding a global minimum during a constrained search. We specifically demonstrate that incorporating such memristors into the hardware of a Hopfield computing network can greatly improve the efficiency and accuracy of converging to a solution for computationally difficult problems.
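The role the authors propose for chaotic memristors — injecting a pseudo-random signal that keeps a Hopfield network from locking into a poor local minimum — can be illustrated with a standard stochastic Hopfield update. This sketch uses ordinary pseudo-random noise as a stand-in for the memristor signal; the weights and schedule are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    def hopfield_minimize(W, n_steps=2000, noise=0.5, anneal=0.999):
        """Asynchronous Hopfield updates with decaying injected noise (illustrative)."""
        n = W.shape[0]
        s = rng.choice([-1, 1], size=n)
        for _ in range(n_steps):
            i = rng.integers(n)
            h = W[i] @ s + noise * rng.standard_normal()  # noise ~ memristor signal
            s[i] = 1 if h >= 0 else -1
            noise *= anneal  # gradually remove the perturbation
        return s

    # Illustrative symmetric weight matrix with zero diagonal.
    W = rng.standard_normal((8, 8))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)
    s = hopfield_minimize(W)
    print("state:", s, "energy:", -0.5 * s @ W @ s)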
Building Cognition: The Construction of Computational Representations for Scientific Discovery.
Chandrasekharan, Sanjay; Nersessian, Nancy J
2015-11-01
Novel computational representations, such as simulation models of complex systems and video games for scientific discovery (Foldit, EteRNA etc.), are dramatically changing the way discoveries emerge in science and engineering. The cognitive roles played by such computational representations in discovery are not well understood. We present a theoretical analysis of the cognitive roles such representations play, based on an ethnographic study of the building of computational models in a systems biology laboratory. Specifically, we focus on a case of model-building by an engineer that led to a remarkable discovery in basic bioscience. Accounting for such discoveries requires a distributed cognition (DC) analysis, as DC focuses on the roles played by external representations in cognitive processes. However, DC analyses by and large have not examined scientific discovery, and they mostly focus on memory offloading, particularly how the use of existing external representations changes the nature of cognitive tasks. In contrast, we study discovery processes and argue that discoveries emerge from the processes of building the computational representation. The building process integrates manipulations in imagination and in the representation, creating a coupled cognitive system of model and modeler, where the model is incorporated into the modeler's imagination. This account extends DC significantly, and we present some of the theoretical and application implications of this extended account. Copyright © 2014 Cognitive Science Society, Inc.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
... Change Relating to Market-Maker Continuous Quoting Obligations February 15, 2013. Pursuant to Section 19... relating to Market-Maker continuous quoting obligations. The text of the proposed rule change is available... Trading System (the ``System'').\\14\\ Their system computations also factor in their market risk models...
The SMAP level 4 carbon product for monitoring ecosystem land-atmosphere CO2 exchange
USDA-ARS?s Scientific Manuscript database
The NASA Soil Moisture Active Passive (SMAP) mission Level 4 Carbon (L4C) product provides model estimates of Net Ecosystem CO2 exchange (NEE) incorporating SMAP soil moisture information. The L4C product includes NEE, computed as total ecosystem respiration less gross photosynthesis, at a daily ti...
Cognitive Process as a Basis for Intelligent Retrieval Systems Design.
ERIC Educational Resources Information Center
Chen, Hsinchun; Dhar, Vasant
1991-01-01
Two studies of the cognitive processes involved in online document-based information retrieval were conducted. These studies led to the development of five computational models of online document retrieval which were incorporated into the design of an "intelligent" document-based retrieval system. Both the system and the broader implications of…
The path for incorporating new alternative methods and technologies into quantitative chemical risk assessment poses a diverse set of scientific challenges. Some of these challenges include development of relevant and predictive test systems and computational models to integrate...
Transitioning to Blended Learning: Understanding Student and Faculty Perceptions
ERIC Educational Resources Information Center
Napier, Nannette P.; Dekhane, Sonal; Smith, Stella
2011-01-01
This paper describes the conversion of an introductory computing course to the blended learning model at a small, public liberal arts college. Blended learning significantly reduces face-to-face instruction by incorporating rich, online learning experiences. To assess the impact of blended learning on students, survey data was collected at the…
Active behavior of abdominal wall muscles: Experimental results and numerical model formulation.
Grasa, J; Sierra, M; Lauzeral, N; Muñoz, M J; Miana-Mena, F J; Calvo, B
2016-08-01
In the present study a computational finite element technique is proposed to simulate the mechanical response of muscles in the abdominal wall. This technique considers the active behavior of the tissue, taking into account both collagen and muscle fiber directions. In an attempt to obtain a computational response as close as possible to that of real muscles, the parameters needed to adjust the mathematical formulation were determined from in vitro experimental tests. Experiments were conducted on male New Zealand White rabbits (2047 ± 34 g), and the active properties of three different muscle preparations were characterized: Rectus Abdominis, External Oblique, and multi-layered samples formed by three muscles (External Oblique, Internal Oblique, and Transversus Abdominis). The parameters obtained for each muscle were incorporated into a finite strain formulation to simulate the active behavior of the muscles, incorporating the anisotropy of the tissue. The results show the potential of the model to predict the anisotropic behavior of the tissue associated with the fibers, and how this influences the strain, stress, and force generated during an isometric contraction. Copyright © 2016 Elsevier Ltd. All rights reserved.
Development of an Efficient CFD Model for Nuclear Thermal Thrust Chamber Assembly Design
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed thermo-fluid environments and the global characteristics of the internal ballistics for a hypothetical solid-core nuclear thermal thrust chamber assembly (NTTCA). Several numerical and multi-physics thermo-fluid models, such as real fluid, chemically reacting, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver as the underlying computational methodology. The numerical simulations of the detailed thermo-fluid environment of a single flow element provide a mechanism to estimate the thermal stress and the possible occurrence of mid-section corrosion of the solid core. In addition, the numerical results of the detailed simulation were employed to fine-tune the porosity model to mimic the pressure drop and thermal load of the coolant flow through a single flow element. The use of the tuned porosity model enables an efficient simulation of the entire NTTCA system and an evaluation of its performance during the design cycle.
Kee, Kerk F; Sparks, Lisa; Struppa, Daniele C; Mannucci, Mirco A; Damiano, Alberto
2016-01-01
By integrating the simplicial model of social aggregation with existing research on opinion leadership and diffusion networks, this article introduces the constructs of simplicial diffusers (mathematically defined as nodes embedded in simplexes; a simplex is a socially bonded cluster) and simplicial diffusing sets (mathematically defined as minimal covers of a simplicial complex; a simplicial complex is a social aggregation in which socially bonded clusters are embedded) to propose a strategic approach for information diffusion of cancer screenings as a health intervention on Facebook for community cancer prevention and control. This approach is novel in its incorporation of interpersonally bonded clusters, culturally distinct subgroups, and different united social entities that coexist within a larger community into a computational simulation to select sets of simplicial diffusers with the highest degree of information diffusion for health intervention dissemination. The unique contributions of the article also include seven propositions and five algorithmic steps for computationally modeling the simplicial model with Facebook data.
Validation of the thermal challenge problem using Bayesian Belief Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFarland, John; Swiler, Laura Painton
The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the model itself. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
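The validation metric described — a Bayes factor computed from prior and posterior distributions of model output — can be approximated from samples. A minimal sketch using kernel density estimates follows; the synthetic samples stand in for BBN output, and the density-ratio form is a simplification of the report's method, not its exact procedure.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    # Stand-ins for model-output samples drawn under the prior and the posterior.
    prior_out = rng.normal(100.0, 15.0, 10000)
    post_out = rng.normal(103.0, 5.0, 10000)
    y_obs = 102.0  # measured value (illustrative)

    # Bayes factor approximated as the ratio of predictive densities at the
    # observation, estimated by KDE over the two output samples.
    bf = gaussian_kde(post_out)(y_obs)[0] / gaussian_kde(prior_out)(y_obs)[0]
    print(f"Bayes factor at y_obs: {bf:.2f}")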
Khan, Taimoor; De, Asok
2014-01-01
In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural network model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving the desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge into the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporating prior knowledge can be used for predicting the response in the extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616
Plis, Sergey M; Sarwate, Anand D; Wood, Dylan; Dieringer, Christopher; Landis, Drew; Reed, Cory; Panta, Sandeep R; Turner, Jessica A; Shoemaker, Jody M; Carter, Kim W; Thompson, Paul; Hutchison, Kent; Calhoun, Vince D
2016-01-01
The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements). There are also significant data transfer, organization and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and "closed" repositories by developing privacy-preserving versions of widely-used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system which we demonstrate on two multi-site data sets, without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to "pooled-data" solutions (i.e., as if the entire data were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user friendly privacy enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions. PMID:27594820
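The decentralized analysis model behind COINSTAC — sites compute local statistics and share only aggregates, iterating toward the pooled-data solution — can be illustrated with a toy distributed gradient descent for linear regression. The sketch below uses synthetic site data and is not the COINSTAC implementation; note that no raw data leave a "site", only gradients.

    import numpy as np

    rng = np.random.default_rng(3)
    true_w = np.array([2.0, -1.0])

    def make_site(n):
        """Generate one site's private data (illustrative)."""
        X = rng.standard_normal((n, 2))
        y = X @ true_w + 0.1 * rng.standard_normal(n)
        return X, y

    sites = [make_site(n) for n in (50, 80, 120)]

    w = np.zeros(2)
    lr = 0.05
    for _ in range(200):
        # Each site computes a local gradient on its own data...
        grads = [X.T @ (X @ w - y) / len(y) for X, y in sites]
        # ...and only the aggregated gradient reaches the coordinator.
        w -= lr * np.mean(grads, axis=0)

    print("decentralized estimate:", np.round(w, 3))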
Design and characterisation of a phased antenna array for intact breast hyperthermia.
Curto, Sergio; Garcia-Miquel, Aleix; Suh, Minyoung; Vidal, Neus; Lopez-Villegas, Jose M; Prakash, Punit
2018-05-01
Currently available hyperthermia technology is not well suited to treating cancer malignancies in the intact breast. This study investigates a microwave applicator incorporating multiple patch antennas, with the goal of facilitating controllable power deposition profiles for treating lesions at diverse locations within the intact breast. A 3D computational model was implemented to assess power deposition profiles with 915 MHz applicators incorporating a hemispheric groundplane and configurations of 2, 4, 8, 12, 16 and 20 antennas. Hemispheric breast models of 90 mm and 150 mm diameter were considered, where cuboid target volumes of 10 mm edge length (1 cm³) and 30 mm edge length (27 cm³) were positioned at the centre of the breast, and also located 15 mm from the chest wall. The average power absorption (αPA) ratio, expressed as the ratio of the power absorbed in the target volume to that in the full breast, was evaluated. A 4-antenna proof-of-concept array was fabricated and experimentally evaluated. Computational models identified an optimal inter-antenna spacing of 22.5° along the applicator circumference. Applicators with 8 and 12 antennas excited with constant phase presented the highest αPA at centrally located and deep-seated targets, respectively. Experimental measurements with the 4-antenna proof-of-concept array illustrated the potential for electrically steering power deposition profiles by adjusting the relative phase of the signal at the antenna inputs. Computational models and experimental results suggest that the proposed applicator may have potential for delivering conformal thermal therapy in the intact breast.
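Phase steering of the kind measured here rests on coherent superposition: the absorbed power in a region scales with the squared magnitude of the phased sum of the per-antenna fields. A minimal sketch, with random complex field maps standing in for simulated antenna fields and a naive random phase search standing in for a real optimizer:

    import numpy as np

    def absorbed_power(phases, field_maps):
        """|sum_k E_k * exp(i*phase_k)|^2 summed over a region (illustrative)."""
        total = sum(E * np.exp(1j * p) for E, p in zip(field_maps, phases))
        return np.sum(np.abs(total) ** 2)

    rng = np.random.default_rng(4)
    # Stand-in per-antenna complex field maps over 100 voxels of a target volume.
    field_maps = [rng.standard_normal(100) + 1j * rng.standard_normal(100)
                  for _ in range(4)]

    uniform = absorbed_power([0.0] * 4, field_maps)
    candidates = [rng.uniform(0, 2 * np.pi, 4) for _ in range(1000)]
    best = max(candidates, key=lambda p: absorbed_power(p, field_maps))
    print(f"uniform-phase PA: {uniform:.1f}, "
          f"steered PA: {absorbed_power(best, field_maps):.1f}")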
Kuhn, Gerhard; Krammes, Gary S.; Beal, Vivian J.
2007-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs Utilities, the Colorado Water Conservation Board, and the El Paso County Water Authority, began a study in 2004 with the following objectives: (1) Apply a stream-aquifer model to Monument Creek, (2) use the results of the modeling to develop a transit-loss accounting program for Monument Creek, (3) revise an existing accounting program for Fountain Creek to easily incorporate ongoing and future changes in management of return flows of reusable water, and (4) integrate the two accounting programs into a single program and develop a Web-based interface to the integrated program that incorporates simple and reliable data entry that is automated to the fullest extent possible. This report describes the results of completing objectives (2), (3), and (4) of that study. The accounting program for Monument Creek was developed first by (1) using the existing accounting program for Fountain Creek as a prototype, (2) incorporating the transit-loss results from a stream-aquifer modeling analysis of Monument Creek, and (3) developing new output reports. The capabilities of the existing accounting program for Fountain Creek then were incorporated into the program for Monument Creek, and the output reports were expanded to include Fountain Creek. A Web-based interface to the new transit-loss accounting program then was developed that provided automated data entry. An integrated system of 34 nodes and 33 subreaches was created by combining the independent node and subreach systems used in the previously completed stream-aquifer modeling studies for the Monument and Fountain Creek reaches. Important operational criteria that were implemented in the new transit-loss accounting program for Monument and Fountain Creeks included the following: (1) Retain all the reusable water-management capabilities incorporated into the existing accounting program for Fountain Creek; (2) enable daily accounting and transit-loss computations for a variable number of reusable return flows discharged into Monument Creek at selected locations; (3) enable diversion of all or a part of a reusable return flow at any selected node for purposes of storage in off-stream reservoirs or other similar types of reusable water management; and (4) provide flexibility in the accounting program to change the number of return-flow entities, the locations at which the return flows discharge into Monument or Fountain Creeks, or the locations to which the return flows are delivered. The primary component of the Web-based interface is a data-entry form that displays data stored in the accounting program input file; the data-entry form allows for entry and modification of new data, which are then written to the input file. When the data-entry form is displayed, up-to-date discharge data for each station are automatically computed and entered on the data-entry form. Data for native return flows, reusable return flows, reusable return flow diversions, and native diversions also are entered automatically or manually, if needed. In computing the estimated quantities of reusable return flow and the associated transit losses, the accounting program uses two sets of computations. The first set of computations is made between any two adjacent streamflow-gaging stations (termed the 'stream-segment loop'); the primary purpose of the stream-segment loop is to estimate the loss or gain in native discharge between the two adjacent streamflow-gaging stations.
The second set of computations is made between any two adjacent nodes (termed the 'subreach loop'); the actual transit-loss computations are made in the subreach loop, using the result from the stream-segment loop. The stream-segment loop is completed for a stream segment, and then the subreach loop is completed for each subreach within the segment. When the subreach loop is completed for all subreaches within a stream segment, the stream-segment loop is initiated for the next segment.
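The two nested computation loops described above can be sketched as follows. The loss model and data structures here are hypothetical placeholders for the accounting program's actual logic, shown only to make the loop structure concrete.

    def native_gain_loss(segment):
        """Stream-segment loop: estimate native gain/loss between adjacent gages."""
        return segment["downstream_q"] - segment["upstream_q"] - segment["inflows"]

    def route_segment(segment, subreaches, return_flow):
        """Subreach loop: apply per-subreach transit losses using the segment result."""
        native = native_gain_loss(segment)
        for sub in subreaches:
            loss = return_flow * sub["loss_fraction"]  # hypothetical loss model
            return_flow -= loss
            sub["delivered"] = return_flow
        return return_flow, native

    # Illustrative discharges in cubic feet per second.
    segment = {"upstream_q": 40.0, "downstream_q": 38.5, "inflows": 1.0}
    subreaches = [{"loss_fraction": f} for f in (0.02, 0.03, 0.01)]
    print(route_segment(segment, subreaches, return_flow=5.0))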
Multiphysics modeling of non-linear laser-matter interactions for optically active semiconductors
NASA Astrophysics Data System (ADS)
Kraczek, Brent; Kanp, Jaroslaw
Development of photonic devices for sensing and communications has been significantly enhanced by computational modeling. We present a new computational method for modeling laser propagation in optically active semiconductors within the paraxial wave approximation (PWA). Light propagation is modeled using the streamline-upwind/Petrov-Galerkin finite element method (FEM). The material response enters through the nonlinear polarization, which serves as the right-hand side of the FEM calculation. Maxwell's equations for classical light propagation within the PWA can be written solely in terms of the electric field, producing a wave equation that is a form of the advection-diffusion-reaction equations (ADREs). This allows adaptation of the computational machinery developed for solving ADREs in fluid dynamics to light-propagation modeling. The nonlinear polarization is incorporated using a flexible framework that enables multiple methods for carrier-carrier interactions (e.g., relaxation-time-based or Monte Carlo) to enter through the nonlinear polarization, as appropriate to the material type. We demonstrate the approach using a simple carrier-carrier model approximating the response of GaN. Supported by ARL Materials Enterprise.
Plank, Gernot; Zhou, Lufang; Greenstein, Joseph L; Cortassa, Sonia; Winslow, Raimond L; O'Rourke, Brian; Trayanova, Natalia A
2008-01-01
Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications. PMID:18603526
Image-based models of cardiac structure in health and disease
Vadakkumpadan, Fijoy; Arevalo, Hermenegild; Prassl, Anton J.; Chen, Junjie; Kickinger, Ferdinand; Kohl, Peter; Plank, Gernot; Trayanova, Natalia
2010-01-01
Computational approaches to investigating the electromechanics of healthy and diseased hearts are becoming essential for the comprehensive understanding of cardiac function. In this article, we first present a brief review of existing image-based computational models of cardiac structure. We then provide a detailed explanation of a processing pipeline which we have recently developed for constructing realistic computational models of the heart from high resolution structural and diffusion tensor (DT) magnetic resonance (MR) images acquired ex vivo. The presentation of the pipeline incorporates a review of the methodologies that can be used to reconstruct models of cardiac structure. In this pipeline, the structural image is segmented to reconstruct the ventricles, normal myocardium, and infarct. A finite element mesh is generated from the segmented structural image, and fiber orientations are assigned to the elements based on DTMR data. The methods were applied to construct seven different models of healthy and diseased hearts. These models contain millions of elements, with spatial resolutions in the order of hundreds of microns, providing unprecedented detail in the representation of cardiac structure for simulation studies. PMID:20582162
Computational Modeling of 3D Tumor Growth and Angiogenesis for Chemotherapy Evaluation
Tang, Lei; van de Ven, Anne L.; Guo, Dongmin; Andasari, Vivi; Cristini, Vittorio; Li, King C.; Zhou, Xiaobo
2014-01-01
Solid tumors develop abnormally at spatial and temporal scales, giving rise to biophysical barriers that impact anti-tumor chemotherapy. This may increase the expenditure and time for conventional drug pharmacokinetic and pharmacodynamic studies. In order to facilitate drug discovery, we propose a mathematical model that couples three-dimensional tumor growth and angiogenesis to simulate tumor progression for chemotherapy evaluation. This application-oriented model incorporates complex dynamical processes including cell- and vascular-mediated interstitial pressure, mass transport, angiogenesis, cell proliferation, and vessel maturation to model tumor progression through multiple stages including tumor initiation, avascular growth, and transition from avascular to vascular growth. Compared to pure mechanistic models, the proposed empirical methods are not only easy to conduct but can provide realistic predictions and calculations. A series of computational simulations were conducted to demonstrate the advantages of the proposed comprehensive model. The computational simulation results suggest that solid tumor geometry is related to the interstitial pressure, such that tumors with high interstitial pressure are more likely to develop dendritic structures than those with low interstitial pressure. PMID:24404145
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
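A Monte Carlo randomization test of the sort described re-randomizes treatment labels according to the actual design and recomputes the test statistic each time. Below is a minimal sketch for complete randomization with a difference-in-means statistic on residuals; the paper's statistics are model-based residuals, and the data here are synthetic.

    import numpy as np

    rng = np.random.default_rng(5)

    def randomization_test(resid, treat, n_mc=10000):
        """Monte Carlo randomization test: two-sided p-value of the mean difference."""
        obs = resid[treat == 1].mean() - resid[treat == 0].mean()
        n_treated = int(treat.sum())
        count = 0
        for _ in range(n_mc):
            perm = np.zeros_like(treat)
            perm[rng.choice(len(treat), n_treated, replace=False)] = 1
            diff = resid[perm == 1].mean() - resid[perm == 0].mean()
            count += abs(diff) >= abs(obs)
        return count / n_mc

    resid = rng.standard_normal(60)
    treat = np.zeros(60, dtype=int)
    treat[:30] = 1
    resid[treat == 1] += 0.6  # injected treatment effect
    print(f"randomization p-value: {randomization_test(resid, treat):.4f}")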
ERIC Educational Resources Information Center
LESCARBEAU, ROLAND F.; AND OTHERS
A suggested post-secondary curriculum guide for electro-mechanical technology oriented specifically to the computer and business machine fields was developed by a group of cooperating institutions, now incorporated as Technical Education Consortium, Incorporated. Specific needs of the computer and business machine industry were determined from…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panaccione, Charles; Staab, Greg; Meuleman, Erik
ION has developed a mathematically driven model for a contacting device incorporating mass transfer, heat transfer, and computational fluid dynamics. This model is based upon a parametric structure for purposes of future commercialization. The most promising design from modeling was 3D printed and tested in a bench-scale CO2 capture unit and compared to commercially available structured packing tested in the same unit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johanna H Oxstrand; Katya L Le Blanc
The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures - field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups, sharing procedures between fellow coworkers, the use of multiple procedures at once, etc. were considered. The model describes which affordances associated with paper-based procedures should be transferred to computer-based procedures as well as what features should not be incorporated. The model also provides a means to identify what new features not present in paper-based procedures need to be added to the computer-based procedures to further enhance performance. The next step is to use the requirements and specifications to develop concepts and prototypes of computer-based procedures. User tests and other data collection efforts will be conducted to ensure that the real issues with field procedures and their usage are being addressed and solved in the best manner possible. This paper describes the baseline study, the construction of the model of procedure use, and the requirements and specifications for computer-based procedures that were developed based on the model. It also addresses how the model and the insights gained from it were used to develop concepts and prototypes for computer-based procedures.
Modeling near-road air quality using a computational fluid dynamics model, CFD-VIT-RIT.
Wang, Y Jason; Zhang, K Max
2009-10-15
It is well recognized that dilution is an important mechanism governing near-road air pollutant concentrations. In this paper, we aim to advance our understanding of turbulent mixing mechanisms on and near roadways using computational fluid dynamics. Turbulent mixing mechanisms can be classified into three categories according to their origins: vehicle-induced turbulence (VIT), road-induced turbulence (RIT), and atmospheric boundary layer turbulence. RIT includes the turbulence generated by road embankment, road surface thermal effects, and roadside structures. Both VIT and RIT are affected by roadway designs. We incorporate the detailed treatment of VIT and RIT into the CFD (namely CFD-VIT-RIT) and apply the model in simulating the spatial gradients of carbon monoxide near two major highways with different traffic mixes and roadway configurations. The modeling results are compared to field measurements and to those from CALINE4 and from CFD without considering VIT and RIT. We demonstrate that the incorporation of VIT and RIT considerably improves the modeling predictions, especially for vertical gradients and seasonal variations of carbon monoxide. Our study implies that roadway design can significantly influence near-road air pollution. We therefore recommend that mitigating near-road air pollution through roadway design be considered in air quality and transportation management. In addition, thanks to the rigorous representation of turbulent mixing mechanisms, CFD-VIT-RIT can become a valuable tool in the roadway design process.
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
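The exact calculation over multiple causal earthquakes and GMPMs is a weighted mixture of the per-scenario conditional spectra. A minimal sketch of the mixture mean and standard deviation in log-spectral-acceleration space follows; the weights, means, and sigmas are illustrative stand-ins for deaggregation and GMPM outputs, not values from the paper.

    import numpy as np

    # Per (scenario, GMPM) conditional mean and sigma of ln(Sa) at one period,
    # with deaggregation weights (illustrative numbers).
    w = np.array([0.5, 0.3, 0.2])
    mu = np.array([-1.10, -0.95, -1.30])
    sigma = np.array([0.35, 0.40, 0.30])

    # Mixture moments: mean is the weighted mean; variance adds the
    # between-scenario spread to the weighted within-scenario variance.
    mu_cs = np.sum(w * mu)
    var_cs = np.sum(w * (sigma**2 + (mu - mu_cs) ** 2))
    print(f"CS mean ln(Sa) = {mu_cs:.3f}, CS sigma = {np.sqrt(var_cs):.3f}")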
Development of a recursion RNG-based turbulence model
NASA Technical Reports Server (NTRS)
Zhou, YE; Vahala, George; Thangam, S.
1993-01-01
Reynolds stress closure models based on the recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher order dispersive effects. A formal analysis of the model is presented and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational standpoint. Turbulent flow past a backward-facing step is chosen as a test case, and the results obtained from detailed computations demonstrate that the proposed recursion-RNG model with finite cut-off wavenumber can yield very good predictions for the backstep problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
Jamjoom, Faris Z; Kim, Do-Gyoon; Lee, Damian J; McGlumphy, Edwin A; Yilmaz, Burak
2018-02-05
The effects of the length and location of the edentulous area on the accuracy of prosthetic treatment plan incorporation into cone-beam computed tomography (CBCT) scans have not been investigated. The aim was to evaluate the effect of the length and location of the edentulous area on the accuracy of prosthetic treatment plan incorporation into CBCT scans using different methods. Direct digital scans of a completely dentate master model with removable radiopaque teeth were made using an intraoral scanner, and digital scans of stone duplicates of the master model were made using a laboratory scanner. Specific teeth were removed to simulate different clinical situations, and CBCT scans of the models were made. Surface scans were registered onto the CBCT scans. Radiographic templates for each clinical situation were also fabricated and used during CBCT scans of the master models. Using metrology software, three-dimensional (3D) deviation was measured on standard tessellation language (STL) files created from the CBCT scans against an STL file of the master model created from a CBCT scan. Statistical analysis was done using the MIXED procedure in statistical software and the Tukey HSD test (α = .05). The interaction between location and method was significant (P = .009). Location had no significant effect on the registration methods (P > .05), but it did affect the radiographic templates (P = .011). The length of the edentulous area did not have any significant effect (P > .05). The accuracy of the digital image registration methods was similar and higher than that of the radiographic templates in all clinical situations. Tooth-bound radiographic templates were significantly more accurate than free-end templates. The results of this study suggest using image registration instead of radiographic templates when planning dental implants, particularly in free-end situations. © 2018 Wiley Periodicals, Inc.
Computational Fluid Dynamics and Experimental Characterization of the Pediatric Pump-Lung.
Wu, Zhongjun J; Gellman, Barry; Zhang, Tao; Taskin, M Ertan; Dasse, Kurt A; Griffith, Bartley P
2011-12-01
The pediatric pump-lung (PediPL) is a miniaturized integrated pediatric pump-oxygenator specifically designed for cardiac or cardiopulmonary support of patients weighing 5-20 kg, to allow mobility and extended use for 30 days. The PediPL incorporates a magnetically levitated impeller with uniquely configured hollow fiber membranes into a single unit capable of performing both pumping and gas exchange. A combined computational and experimental study was conducted to characterize the functional and hemocompatibility performance of this newly developed device. The three-dimensional flow features of the PediPL and its hemolytic characteristics were analyzed using computational fluid dynamics (CFD)-based modeling. The oxygen exchange was modeled as a convection-diffusion-reaction process. The hollow fiber membranes were modeled as a porous medium, which incorporates the flow resistance of the bundle through an added momentum sink term. The pumping function was evaluated over the required range of operating conditions (0.5-2.5 L/min and 1000-3000 rpm). The blood damage potential was further analyzed in terms of the flow and shear stress fields and calculation of the hemolysis index. In parallel, the hydraulic pump performance, oxygen transfer, and hemolysis level were quantified experimentally. Based on the computational and experimental results, the PediPL device is found to provide the necessary oxygen transfer and blood pumping for pediatric patients. Smooth blood flow characteristics and low blood damage potential were observed in the entire device. The in vitro tests further confirmed that the PediPL can provide adequate blood pumping and oxygen transfer over the range of intended operating conditions with acceptable hemolytic performance. The rated flow rate for oxygenation is 2.5 L/min. The normalized index of hemolysis is 0.065 g/100 L at 1.0 L/min and 3000 rpm.
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data, and only output a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
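The embarrassingly parallel structure described — thousands of independent model runs with small inputs and outputs — maps directly onto a process pool. A minimal sketch follows, with a stand-in objective function in place of the external model executables the authors actually run.

    import numpy as np
    from multiprocessing import Pool

    def run_model(params):
        """Stand-in for one model run: return a misfit statistic for a parameter set."""
        k, c = params
        predicted = k * np.arange(10) + c
        observed = 2.0 * np.arange(10) + 1.0  # synthetic calibration target
        return float(np.sum((predicted - observed) ** 2)), params

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        candidates = [(rng.uniform(0, 4), rng.uniform(0, 2)) for _ in range(10000)]
        with Pool() as pool:
            results = pool.map(run_model, candidates)
        best = min(results)  # tuples compare by misfit first
        print("best misfit:", round(best[0], 4), "params:", best[1])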
NASA Astrophysics Data System (ADS)
King, Thomas Steven
A hybrid gravity modeling method is developed to investigate the structure of sedimentary mass bodies. The method incorporates as constraints surficial basement/sediment contacts and the topography of a mass target with a quadratically varying density distribution. The inverse modeling utilizes a genetic algorithm (GA) to scan a wide range of the solution space to determine initial models, and the Marquardt-Levenberg (ML) nonlinear inversion to determine final models that meet pre-assigned misfit criteria, thus providing an estimate of model variability and uncertainty. The surface modeling technique modifies Delaunay triangulation by allowing individual facets to be manually constructed and non-convex boundaries to be incorporated into the triangulation scheme. The sedimentary body is represented by a set of uneven prisms and edge elements, composed of tetrahedrons, capped by polyhedrons. Each underlying prism and edge element's top surface is located by determining its point of tangency with the overlying terrain. The remaining overlying mass is gravitationally evaluated and subtracted from the observation points. Inversion then proceeds in the usual sense, but on an irregular tiered surface with each element's density defined relative to its top surface. Efficiency is particularly important due to the large number of facets evaluated for surface representations and the many repeated element evaluations of the stochastic GA. The gravitation of prisms, triangular faceted polygons, and tetrahedrons can be formulated in different ways, either mathematically or by physical approximations, each having distinct characteristics, such as evaluation time, accuracy over various spatial ranges, and computational singularities. A decision tree or switching routine is constructed for each element by combining these characteristics into a single cohesive package that optimizes the computation for accuracy and speed while avoiding singularities. The GA incorporates a subspace technique and parameter dependency to maintain model smoothness during development, thus minimizing the creation of nonphysical models. The stochastic GA explores the solution space, producing a broad range of unbiased initial models, while the ML inversion is deterministic and thus quickly converges to the final model. The combination allows many solution models to be determined from the same observed data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the inter-variable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
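One standard technique the report covers, simulating correlated variates for a given dependence model, might look like the following Gaussian-copula sketch; the lognormal and gamma marginals and the 0.7 dependence target are assumptions for illustration.

```python
# Correlated variates via a normal (Gaussian) copula.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rho = 0.7                       # target dependence parameter
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)           # uniform marginals preserving dependence
x = stats.lognorm(s=0.5).ppf(u[:, 0])   # first marginal: lognormal
y = stats.gamma(a=2.0).ppf(u[:, 1])     # second marginal: gamma
print("Spearman rho:", round(stats.spearmanr(x, y).correlation, 3))
```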
Tractable Experiment Design via Mathematical Surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
NASA Technical Reports Server (NTRS)
Shankar, V.; Rowell, C.; Hall, W. F.; Mohammadian, A. H.; Schuh, M.; Taylor, K.
1992-01-01
Accurate and rapid evaluation of radar signature for alternative aircraft/store configurations would be of substantial benefit in the evolution of integrated designs that meet radar cross-section (RCS) requirements across the threat spectrum. Finite-volume time domain methods offer the possibility of modeling the whole aircraft, including penetrable regions and stores, at longer wavelengths on today's gigaflop supercomputers and at typical airborne radar wavelengths on the teraflop computers of tomorrow. A structured-grid finite-volume time domain computational fluid dynamics (CFD)-based RCS code has been developed at the Rockwell Science Center, and this code incorporates modeling techniques for general radar absorbing materials and structures. Using this work as a base, the goal of the CFD-based CEM effort is to define, implement and evaluate various code development issues suitable for rapid prototype signature prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, David V.; Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas; Tucker, Susan L.
2014-11-15
Purpose: To determine whether pretreatment CT texture features can improve patient risk stratification beyond conventional prognostic factors (CPFs) in stage III non-small cell lung cancer (NSCLC). Methods and Materials: We retrospectively reviewed 91 cases with stage III NSCLC treated with definitive chemoradiation therapy. All patients underwent pretreatment diagnostic contrast-enhanced computed tomography (CE-CT) followed by 4-dimensional CT (4D-CT) for treatment simulation. We used the average-CT and expiratory (T50-CT) images from the 4D-CT along with the CE-CT for texture extraction. Histogram, gradient, co-occurrence, gray tone difference, and filtration-based techniques were used for texture feature extraction. Penalized Cox regression implementing cross-validation was used for covariate selection and modeling. Models incorporating texture features from the 3 image types and CPFs were compared to models incorporating CPFs alone for overall survival (OS), local-regional control (LRC), and freedom from distant metastases (FFDM). Predictive Kaplan-Meier curves were generated using leave-one-out cross-validation. Patients were stratified based on whether their predicted outcome was above or below the median. Reproducibility of texture features was evaluated using test-retest scans from independent patients and quantified using concordance correlation coefficients (CCC). We compared models incorporating the reproducibility seen on test-retest scans to our original models and determined the classification reproducibility. Results: Models incorporating both texture features and CPFs demonstrated a significant improvement in risk stratification compared to models using CPFs alone for OS (P=.046), LRC (P=.01), and FFDM (P=.005). The average CCCs were 0.89, 0.91, and 0.67 for texture features extracted from the average-CT, T50-CT, and CE-CT, respectively. Incorporating reproducibility within our models yielded 80.4% (±3.7% SD), 78.3% (±4.0% SD), and 78.8% (±3.9% SD) classification reproducibility in terms of OS, LRC, and FFDM, respectively. Conclusions: Pretreatment tumor texture may provide prognostic information beyond that obtained from CPFs. Models incorporating feature reproducibility achieved classification rates of ∼80%. External validation would be required to establish texture as a prognostic factor.
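For reference, the concordance correlation coefficient used above to quantify test-retest reproducibility (Lin's CCC) can be computed as in this sketch; the example feature values are made up.

```python
# Lin's concordance correlation coefficient for test-retest agreement.
import numpy as np

def ccc(x, y):
    """CCC between two feature measurements (e.g., test and retest
    values of one texture feature across patients)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population (biased) variances
    sxy = ((x - mx) * (y - my)).mean()   # covariance
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

test = [1.2, 3.4, 2.2, 5.1, 4.0]
retest = [1.0, 3.6, 2.5, 4.9, 4.2]
print(round(ccc(test, retest), 3))
```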
Automatic inference of multicellular regulatory networks using informative priors.
Sun, Xiaoyun; Hong, Pengyu
2009-01-01
To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.
Identity in agent-based models: modeling dynamic multiscale social processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozik, J.; Sallach, D. L.; Macal, C. M.
Identity-related issues play central roles in many current events, including those involving factional politics, sectarianism, and tribal conflicts. Two popular models from the computational-social-science (CSS) literature - the Threat Anticipation Program and SharedID models - incorporate notions of identity (individual and collective) and processes of identity formation. A multiscale conceptual framework that extends some ideas presented in these models and draws other capabilities from the broader CSS literature is useful in modeling the formation of political identities. The dynamic, multiscale processes that constitute and transform social identities can be mapped to expressive structures of the framework.
Exhibit D modular design attitude control system study
NASA Technical Reports Server (NTRS)
Chichester, F.
1984-01-01
A dynamically equivalent four-body approximation of the NASTRAN finite element model supplied for the hybrid deployable truss was investigated to support the digital computer simulation of the ten-body model of the flexible space platform that incorporates the four-body truss model. Coefficients for the sensitivity of the state variables of the linearized model of the three-axis rotational dynamics of the prototype flexible spacecraft were generated with respect to the model's parameters. Software changes required to accommodate the addition of another rigid body to the five-body model of the rotational dynamics of the prototype flexible spacecraft were evaluated.
Development of a realistic stress analysis for fatigue analysis of notched composite laminates
NASA Technical Reports Server (NTRS)
Humphreys, E. A.; Rosen, B. W.
1979-01-01
A finite element stress analysis, consisting of a membrane analysis coupled with an interlaminar shear spring analysis, was developed. This approach was utilized in order to model physically realistic failure mechanisms while maintaining a high degree of computational economy. The accuracy of the stress analysis predictions is verified through comparisons with other solutions to the composite laminate edge effect problem. The stress analysis model was incorporated into an existing fatigue analysis methodology and the entire procedure computerized. A fatigue analysis is performed upon a square laminated composite plate with a circular central hole. A complete description and user's guide for the computer code FLAC (Fatigue of Laminated Composites) is included as an appendix.
Incorporating time and spatial-temporal reasoning into situation management
NASA Astrophysics Data System (ADS)
Jakobson, Gabriel
2010-04-01
Spatio-temporal reasoning plays a significant role in situation management performed by intelligent agents (human or machine), affecting how situations are recognized, interpreted, acted upon, or predicted. Many definitions and formalisms for the notion of spatio-temporal reasoning have emerged in various research fields including psychology, economics and computer science (computational linguistics, data management, control theory, artificial intelligence and others). In this paper we examine the role of spatio-temporal reasoning in situation management, particularly how to resolve situations that are described using spatio-temporal relations among events and situations. We discuss a model for describing context-sensitive temporal relations and show how the model can be extended to spatial relations.
NASA Astrophysics Data System (ADS)
Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.
Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model was extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, adding the ability to predict light ion spectra with the coalescence model. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutrons and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron spectra and light ion energy for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions from the recently developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
NASA Astrophysics Data System (ADS)
Park, Harold
2016-04-01
Dielectric elastomers are a class of soft, active materials that have recently gained significant interest due to the fact that they can be electrostatically actuated into undergoing extremely large deformations. An ongoing challenge has been the development of robust and accurate computational models for elastomers, particularly those that can capture electromechanical instabilities that limit the performance of elastomers, such as creasing, wrinkling, and snap-through. I discuss in this work a recently developed finite element model for elastomers that is dynamic, nonlinear, and fully electromechanically coupled. The model also significantly alleviates the volumetric locking that arises due to the incompressible nature of the elastomers, and incorporates viscoelasticity within a finite deformation framework. Numerical examples are shown that demonstrate the performance of the proposed method in capturing electromechanical instabilities (snap-through, creasing, cratering, wrinkling) that have been observed experimentally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsman, Devin W.; Aukema, Brian H.; McDowell, Nate G.
Phenology models are becoming increasingly important tools to accurately predict how climate change will impact the life histories of organisms. We propose a class of integral projection phenology models derived from stochastic individual-based models of insect development and demography. Our derivation, which is based on the rate-summation concept, produces integral projection models that capture the effect of phenotypic rate variability on insect phenology, but which are typically more computationally frugal than equivalent individual-based phenology models. We demonstrate our approach using a temperature-dependent model of the demography of the mountain pine beetle (Dendroctonus ponderosae Hopkins), an insect that kills mature pine trees. This work illustrates how a wide range of stochastic phenology models can be reformulated as integral projection models. Due to their computational efficiency, these integral projection models are suitable for deployment in large-scale simulations, such as studies of altered pest distributions under climate change.
Mariner Mars 1971 science operational support equipment
NASA Technical Reports Server (NTRS)
1971-01-01
The Mariner Mars 1971 science operational support equipment (SOSE) was developed to support the checkout of the proof test model and flight spacecraft. The test objectives of the SOSE and how these objectives were implemented are discussed. Attention is focused on the computer portion of the SOSE, since incorporation of a computer in ground checkout equipment represents a major departure from the support equipment concepts previously used. A functional description of the major hardware elements contained in the SOSE is also included, along with the operational performance of the SOSE during spacecraft testing.
Thermodynamic equilibrium-air correlations for flowfield applications
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.
1981-01-01
Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. A comparison between the postshock results computed by the correlation equations and detailed chemistry calculations is very good. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Comparisons of heating rates computed by the approximate code and a viscous-shock-layer method are good. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.
Voxel inversion of airborne electromagnetic data
NASA Astrophysics Data System (ADS)
Auken, E.; Fiandaca, G.; Kirkegaard, C.; Vest Christiansen, A.
2013-12-01
Inversion of electromagnetic data usually refers to a model space linked to the actual observation points, and for airborne surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space. This means that incorporating the geophysical data into the geological and/or hydrological modelling grids involves a spatial relocation of the models, which in itself is a subtle process where valuable information is easily lost. The integration of prior information, e.g. from boreholes, is also difficult when the observation points do not coincide with the position of the prior information, as is the joint inversion of airborne and ground-based surveys. We developed a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows geological/hydrogeological models to be informed directly, prior information to be incorporated more easily, and different data types to be integrated straightforwardly in joint inversion. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the properties is computed everywhere by means of an interpolation function f (e.g. inverse distance or kriging). The position of the nodes is fixed during the inversion and is chosen to sample the soil taking into account topography and inversion resolution. Given this definition of the voxel model space, both 1D and 2D/3D forward responses can be computed. The 1D forward responses are computed as follows: A) a 1D model subdivision, in terms of model thicknesses and direction of the "virtual" horizontal stratification, is defined for each 1D data set; for EM soundings the "virtual" horizontal stratification is set up parallel to the topography at the sounding position. B) the "virtual" 1D models are constructed by interpolating the soil properties at the midpoint of the "virtual" layers. For 2D/3D forward responses the algorithm operates similarly, simply filling the 2D/3D meshes of the forward responses by computing the interpolation values at the centres of the mesh cells. This definition of the voxel model space allows the geophysical information to be incorporated straightforwardly into geological and/or hydrological models, simply by using a voxel (hydro)geological grid to define the geophysical model space. This also simplifies the propagation of the uncertainty of geophysical parameters into the (hydro)geological models. Furthermore, prior information from boreholes, like resistivity logs, can be applied directly to the voxel model space, even if the borehole positions do not coincide with the actual observation points. In fact, the prior information is constrained to the model parameters through the interpolation function at the borehole locations. The presented algorithm is a further development of the AarhusInv program package developed at Aarhus University (formerly em1dinv), which manages both large-scale AEM surveys and ground-based data. This work has been carried out as part of the HyGEM project, supported by the Danish Council of Strategic Research under grant number DSF 11-116763.
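A minimal sketch of the node-based voxel model space, assuming inverse-distance weighting as the interpolation function f: properties defined on fixed nodes are interpolated at the layer midpoints of a "virtual" 1D model under one sounding position. Node locations and values are synthetic.

```python
# Inverse-distance interpolation from fixed voxel nodes to a virtual 1D model.
import numpy as np

rng = np.random.default_rng(3)
nodes = rng.uniform(0.0, 1000.0, size=(50, 3))   # x, y, z of voxel nodes
log_rho = rng.uniform(1.0, 3.0, size=50)         # log10 resistivity at nodes

def idw(points, nodes=nodes, values=log_rho, power=2.0, eps=1e-9):
    """Inverse-distance interpolation of node values at query points."""
    d = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Build a "virtual" 1D model under one sounding: layer midpoints along z.
x0, y0 = 450.0, 520.0
mid_depths = np.array([2.0, 6.0, 15.0, 35.0, 80.0])
pts = np.column_stack([np.full(5, x0), np.full(5, y0), mid_depths])
print(np.round(10 ** idw(pts), 1))   # layer resistivities in ohm-m
```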
Modeling bioluminescent photon transport in tissue based on Radiosity-diffusion model
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Pu; Tian, Jie; Zhang, Bo; Han, Dong; Yang, Xin
2010-03-01
Bioluminescence tomography (BLT) is one of the most important non-invasive optical molecular imaging modalities. The model for bioluminescent photon propagation plays a significant role in bioluminescence tomography studies. Due to its high computational efficiency, the diffusion approximation (DA) is generally applied in bioluminescence tomography. But the diffusion equation is valid only in highly scattering and weakly absorbing regions and fails in non-scattering or low-scattering tissues, such as a cyst in the breast, the cerebrospinal fluid (CSF) layer of the brain and the synovial fluid layer in the joints. A hybrid Radiosity-diffusion model is proposed in this paper for dealing with non-scattering regions within diffusing domains. This hybrid method incorporates a priori information about the geometry of non-scattering regions, which can be acquired by magnetic resonance imaging (MRI) or x-ray computed tomography (CT). The model is then implemented using a finite element method (FEM) to ensure high computational efficiency. Finally, we demonstrate that the method is comparable with the Monte Carlo (MC) method, which is regarded as a 'gold standard' for photon transport simulation.
Perfusion kinetics in human brain tumor with DCE-MRI derived model and CFD analysis.
Bhandari, A; Bansal, A; Singh, A; Sinha, N
2017-07-05
Cancer is one of the leading causes of death all over the world. Among the strategies that are used for cancer treatment, the effectiveness of chemotherapy is often hindered by factors such as irregular and non-uniform uptake of drugs inside the tumor. Thus, accurate prediction of drug transport and deposition inside the tumor is crucial for increasing the effectiveness of chemotherapeutic treatment. In this study, a computational model of a human brain tumor is developed that incorporates dynamic contrast enhanced-magnetic resonance imaging (DCE-MRI) data into a voxelized porous media model. The model takes into account realistic transport and perfusion kinetics parameters together with realistic heterogeneous tumor vasculature and an accurate arterial input function (AIF), which makes it patient specific. The computational results for interstitial fluid pressure (IFP), interstitial fluid velocity (IFV) and tracer concentration show good agreement with the experimental results. The computational model can be extended further for predicting the deposition of chemotherapeutic drugs in the tumor environment as well as selection of the best chemotherapeutic drug for a specific patient.
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
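A toy version of the hybrid idea, firefly attraction moves combined with a differential-evolution mutation step, is sketched below on a standard test objective; all tuning constants and the omission of the usual random-walk term are simplifying assumptions, not the paper's configuration.

```python
# Firefly attraction moves plus a DE-style mutation step.
import numpy as np

def objective(p):                      # stand-in for model-data misfit
    return (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2

rng = np.random.default_rng(4)
pop = rng.uniform(-2, 2, size=(30, 2))
beta0, gamma, F = 1.0, 1.0, 0.5
for _ in range(200):
    for i in range(len(pop)):
        for j in range(len(pop)):
            if objective(pop[j]) < objective(pop[i]):
                # move firefly i toward brighter firefly j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                pop[i] += beta0 * np.exp(-gamma * r2) * (pop[j] - pop[i])
        # DE-style mutation using three distinct random members
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = a + F * (b - c)
        if objective(trial) < objective(pop[i]):
            pop[i] = trial
print(np.round(min(pop, key=objective), 3))
```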
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that for the analysis of single-trait and multiple-trait scenarios different computational algorithms are optimal. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
NASA Astrophysics Data System (ADS)
Pineda, M.; Stamatakis, M.
2017-07-01
Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.
Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui
2017-07-15
New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while a penalty could be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods, SKI-Cox and wLASSO-Cox, to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma.
Development of a three dimensional numerical water quality model for continental shelf applications
NASA Technical Reports Server (NTRS)
Spaulding, M.; Hunter, D.
1975-01-01
A model to predict the distribution of water quality parameters in three dimensions was developed. The mass transport equation was solved using a non-dimensional vertical axis and an alternating-direction-implicit finite difference technique. The reaction kinetics of the constituents were incorporated into a matrix method which permits computation of the interactions of multiple constituents. Methods for the computation of dispersion coefficients and coliform bacteria decay rates were determined. Numerical investigations of dispersive and dissipative effects showed that the three-dimensional model performs as predicted by analysis of simpler cases. The model was then applied to a two-dimensional vertically averaged tidal dynamics model for the Providence River. It was also extended to a steady-state application by replacing the time step with an iteration sequence. This modification was verified by comparison to analytical solutions and applied to a river confluence situation.
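The "matrix method" for multi-constituent reaction kinetics can be illustrated as coupled first-order interactions dC/dt = KC advanced with a matrix exponential per transport step; the three-constituent decay chain below is an assumed example, not the report's constituent set.

```python
# Coupled first-order kinetics dC/dt = K C via a precomputed matrix exponential.
import numpy as np
from scipy.linalg import expm

# K[i, j]: rate at which constituent j feeds (or removes) constituent i.
K = np.array([
    [-0.5,  0.0,  0.0],   # A decays
    [ 0.5, -0.2,  0.0],   # A -> B, B decays
    [ 0.0,  0.2,  0.0],   # B -> C (inert end product)
])
dt = 0.1                  # one transport time step (days)
step = expm(K * dt)       # constant matrix: precompute once per model
C = np.array([10.0, 0.0, 0.0])   # initial concentrations in one cell
for _ in range(100):
    C = step @ C          # kinetics update, applied cell by cell
print(np.round(C, 3), "total:", round(C.sum(), 3))  # columns of K sum to 0, so mass is conserved
```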
NASA Astrophysics Data System (ADS)
Wang, Yu; Liu, Qun
2013-01-01
Surplus-production models are widely used in fish stock assessment and fisheries management due to their simplicity and lower data demands than age-structured models such as Virtual Population Analysis. The CEDA (catch-effort data analysis) and ASPIC (a surplus-production model incorporating covariates) computer packages are data-fitting or parameter estimation tools that have been developed to analyze catch-and-effort data using non-equilibrium surplus production models. We applied CEDA and ASPIC to the hairtail (Trichiurus japonicus) fishery in the East China Sea. Both packages produced robust results and yielded similar estimates. In CEDA, the Schaefer surplus production model with a log-normal error assumption produced results close to those of ASPIC. CEDA is sensitive to the choice of initial proportion, while ASPIC is not. However, CEDA produced higher R² values than ASPIC.
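For illustration, the non-equilibrium Schaefer dynamics underlying such fits can be sketched as below: biomass is projected forward under an effort series and parameters are tuned to observed catch rates under a log-normal error assumption. The synthetic data and Nelder-Mead fit are stand-ins for CEDA/ASPIC's own estimation machinery.

```python
# Non-equilibrium Schaefer surplus-production fit to catch-per-unit-effort.
import numpy as np
from scipy.optimize import minimize

effort = np.linspace(50, 200, 20)            # fishing effort series

def predict_cpue(theta, effort=effort):
    r, K, q, B0 = theta
    B, cpue = B0, []
    for E in effort:
        C = q * E * B                        # catch this year
        cpue.append(C / E)                   # catch per unit effort = qB
        B = max(B + r * B * (1 - B / K) - C, 1e-6)  # Schaefer dynamics
    return np.array(cpue)

true = np.array([0.4, 1000.0, 0.002, 900.0])
rng = np.random.default_rng(5)
obs = predict_cpue(true) * np.exp(rng.normal(0, 0.05, effort.size))  # log-normal error

def nll(theta):
    if np.any(np.asarray(theta) <= 0):
        return 1e12
    return np.sum((np.log(obs) - np.log(predict_cpue(theta))) ** 2)

fit = minimize(nll, x0=[0.3, 800.0, 0.003, 700.0], method="Nelder-Mead",
               options={"maxiter": 5000})
r, K = fit.x[0], fit.x[1]
print("MSY ~", round(r * K / 4, 1))          # Schaefer MSY = rK/4
```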
An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2005-01-01
An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
Monitoring and Modeling Performance of Communications in Computational Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Le, Thuy T.
2003-01-01
Computational grids may include many machines located at a number of sites. For efficient use of the grid we need the ability to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know the exact parameters of the communication hardware and the current communication traffic, so we should rely on a model of the network performance to estimate the message delivery time. Our approach to the construction of such a model is based on observation of message delivery times at various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show the presence of multiple bands in the logarithm of the message delivery times. These multiple bands represent the multiple paths messages travel between the grid machines and are incorporated in our multiband model.
Training University Faculty To Integrate Hypermedia into the Teacher Training Curriculum.
ERIC Educational Resources Information Center
Tucker, S. A.; And Others
Funded under the Apple Model Program for the Integration of Computers in the Preparation of Educators, the University of South Alabama began a 3-year project in 1989 to train faculty in its College of Education to incorporate hypermedia into their curriculum. HyperCard was selected as a course presentation and development tool because of its…
ERIC Educational Resources Information Center
Baird, Irene C.; Towns, Kathryn
PROBE (Potential Reentry Opportunities in Business and Education), a program conducted in Harrisburg and Lebanon, Pennsylvania, incorporated technological training with effective communication skills preparation for single female welfare parents. Goals of the program were to provide 20 single-parent welfare women with marketable computer and…
1994-07-01
incorporate the Bell-La Padula rules for implementing the DoD security policy. The policy from which we begin here is the organization's operational security policy, which assumes the Bell-La Padula model and assigns the required security variables to elements of the system. A way to ensure a…
A computer model for the 30S ribosome subunit.
Kuntz, I D; Crippen, G M
1980-01-01
We describe a computer-generated model for the locations of the 21 proteins of the 30S subunit of the E. coli ribosome. The model uses a new method of incorporating experimental measurements based on a mathematical technique called distance geometry. In this paper, we use data from two sources: immunoelectron microscopy and neutron-scattering studies. The data are generally self-consistent and lead to a set of relatively well-defined structures in which individual protein coordinates differ by approximately 20 Å from one structure to another. Two important features of this calculation are the use of extended proteins rather than just the centers of mass, and the ability to confine the protein locations within an arbitrary boundary surface so that only solutions with an approximate 30S "shape" are permitted. PMID:7020786
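The distance-geometry idea can be illustrated with the classical multidimensional-scaling special case: recovering 3-D coordinates from a complete matrix of exact pairwise distances. The real calculation works with distance bounds, extended proteins, and a shape constraint; this sketch only shows the embedding step.

```python
# Classical MDS: embed points in 3-D from a pairwise distance matrix.
import numpy as np

rng = np.random.default_rng(6)
true = rng.uniform(0, 100, size=(21, 3))      # 21 "protein" positions
D = np.linalg.norm(true[:, None] - true[None, :], axis=2)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:3]                 # top three eigenpairs
coords = V[:, idx] * np.sqrt(w[idx])          # embedding in 3-D

# The embedding is unique only up to rotation/reflection/translation,
# so compare distance matrices rather than raw coordinates.
D2 = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
print("max distance error:", float(np.abs(D - D2).max()))
```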
NASA Technical Reports Server (NTRS)
Warner, James E.; Zubair, Mohammad; Ranjan, Desh
2017-01-01
This work investigates novel approaches to probabilistic damage diagnosis that utilize surrogate modeling and high performance computing (HPC) to achieve substantial computational speedup. Motivated by Digital Twin, a structural health management (SHM) paradigm that integrates vehicle-specific characteristics with continual in-situ damage diagnosis and prognosis, the methods studied herein yield near real-time damage assessments that could enable monitoring of a vehicle's health while it is operating (i.e. online SHM). High-fidelity modeling and uncertainty quantification (UQ), both critical to Digital Twin, are incorporated using finite element method simulations and Bayesian inference, respectively. The crux of the proposed Bayesian diagnosis methods, however, is the reformulation of the numerical sampling algorithms (e.g. Markov chain Monte Carlo) used to generate the resulting probabilistic damage estimates. To this end, three distinct methods are demonstrated for rapid sampling that utilize surrogate modeling and exploit various degrees of parallelism for leveraging HPC. The accuracy and computational efficiency of the methods are compared on the problem of strain-based crack identification in thin plates. While each approach has inherent problem-specific strengths and weaknesses, all approaches are shown to provide accurate probabilistic damage diagnoses and several orders of magnitude computational speedup relative to a baseline Bayesian diagnosis implementation.
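The core speedup mechanism, replacing the expensive forward solve inside the sampler with a cheap surrogate, can be sketched as below with a one-parameter Metropolis chain; the quadratic "strain response" surrogate, noise level, and prior bounds are illustrative assumptions.

```python
# Surrogate-accelerated Metropolis sampling for probabilistic diagnosis.
import numpy as np

rng = np.random.default_rng(7)

def surrogate(crack_size):
    """Cheap stand-in for an FEM strain response, e.g. a polynomial
    fit trained offline on high-fidelity simulations."""
    return 100.0 + 8.0 * crack_size + 0.5 * crack_size ** 2

obs = surrogate(3.0) + rng.normal(0, 2.0)     # one noisy strain measurement
sigma = 2.0

def log_post(a):
    if not 0.0 < a < 10.0:                    # uniform prior bounds
        return -np.inf
    return -0.5 * ((obs - surrogate(a)) / sigma) ** 2

chain, a = [], 5.0
for _ in range(20_000):                       # cheap steps: near real-time
    prop = a + rng.normal(0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop
    chain.append(a)
print("posterior crack size: %.2f +/- %.2f"
      % (np.mean(chain[2000:]), np.std(chain[2000:])))
```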
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
Incorporating Hydroepidemiology into the Epidemia Malaria Early Warning System
NASA Astrophysics Data System (ADS)
Wimberly, M. C.; Merkord, C. L.; Henebry, G. M.; Senay, G. B.
2014-12-01
Early warning of the timing and locations of malaria epidemics can facilitate the targeting of resources for prevention and emergency response. In response to this need, we are developing the Epidemic Prognosis Incorporating Disease and Environmental Monitoring for Integrated Assessment (EPIDEMIA) computer system. EPIDEMIA incorporates software for capturing, processing, and integrating environmental and epidemiological data from multiple sources; data assimilation techniques that continually update models and forecasts; and a web-based interface that makes the resulting information available to public health decision makers. The system will enable forecasts that incorporate lagged responses to environmental risk factors as well as information about recent trends in malaria cases. Because the egg, larval, and pupal stages of mosquito development occur in aquatic habitats, information about the spatial and temporal distributions of stagnant water bodies is critical for modeling malaria risk. Potential sources of hydrological data include satellite-derived rainfall estimates, evapotranspiration (ET) calculated using a simplified surface energy balance model, and estimates of soil moisture and fractional water cover from passive microwave radiometry. We used partial least squares regression to analyze and visualize seasonal patterns of these variables in relation to malaria cases using data from 49 districts in the Amhara region of Ethiopia. Seasonal patterns of rainfall were strongly associated with the incidence and seasonality of malaria across the region, and model fit was improved by the addition of remotely-sensed ET and soil moisture variables. The results highlight the importance of remotely-sensed hydrological data for modeling malaria risk in this region and emphasize the value of an ensemble approach that utilizes multiple sources of information about precipitation and land surface wetness. These variables will be incorporated into the forecasting models at the core of the EPIDEMIA system, and future model development will involve a cycle of continuous forecasting, accuracy assessment, and model refinement.
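A minimal sketch of the PLS analysis step mentioned above, assuming scikit-learn's PLSRegression and synthetic stand-ins for the district-level seasonal predictors and malaria incidence:

```python
# Partial least squares regression of incidence on seasonal predictors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
n_districts, n_features = 49, 36          # e.g. 3 variables x 12 months
X = rng.normal(size=(n_districts, n_features))
beta = rng.normal(size=n_features)
y = X @ beta + rng.normal(scale=0.5, size=n_districts)  # incidence proxy

pls = PLSRegression(n_components=3)
pls.fit(X, y)
print("R^2:", round(pls.score(X, y), 3))
# x_loadings_ shows which seasonal variables drive each latent component
print(pls.x_loadings_.shape)
```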
Neic, Aurel; Campos, Fernando O; Prassl, Anton J; Niederer, Steven A; Bishop, Martin J; Vigmond, Edward J; Plank, Gernot
2017-10-01
Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Abhishek
One of the essential requirements of biomolecular modeling is an accurate description of water as a solvent. The challenge is to make this description computationally facile - reasonably fast, simple, robust and easy to incorporate into existing software packages, yet accurate. The most rigorous procedure to model the effect of aqueous solvent is to explicitly model every water molecule in the system. For many practical applications, this approach is computationally too intense, as the number of required water atoms is on average at least one order of magnitude larger than the number of atoms of the molecule of interest. Implicit solvent models, in which solvent molecules are replaced by a continuous dielectric, have become a popular alternative to explicit solvent methods. However, implicit solvation models often lack various microscopic details which are crucial for accuracy. One such effect currently missing from popular implicit models is charge hydration asymmetry (CHA) - the asymmetric response of water to the sign of the solute charge - which manifests as a characteristic, strong dependence of solvation free energies on the sign of the solute charge. Here, we incorporate this missing effect into the continuum solvation framework via the conceptually simplest Born equation and also into the generalized Born model. We identify the key electric multipole moments of model water molecules critical for the various degrees of the CHA effect observed in studies based on molecular dynamics simulations using different rigid water models. We then use this insight to incorporate the effect first into the Born model and then into the generalized Born model. The proposed framework significantly improves the accuracy of hydration free energy estimates, tested on a comprehensive set of varied molecular solutes - monovalent and divalent ions, small drug-like molecules, charged and uncharged amino acid dipeptides, and small proteins. We finally develop a methodology to resolve the issue of the unacceptably large uncertainty that stems from a variety of fundamental and technical difficulties in the experimental quantification of CHA from charged solutes. Using the proposed corrections in the continuum framework, we untangle the charge-asymmetric response of water from its symmetric response, and further circumvent the difficulties by extracting an accurate estimate of the propensity of water to cause CHA from accurate experimental hydration free energies of neutral polar molecules. We show that the asymmetry in water's response is strong, about 50% of the symmetric response.
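As a concrete starting point, the Born model named above, with a hypothetical sign-dependent effective radius standing in for the CHA correction, can be sketched as follows; the radius shift and the example ions are illustrative, not the paper's functional form.

```python
# Born hydration energy with a toy sign-dependent (CHA-like) radius.
COUL = 332.06371   # kcal*angstrom/(mol*e^2), Coulomb constant in MD units

def born_energy(q, radius, eps=78.5):
    """Classic (symmetric) Born solvation free energy in kcal/mol."""
    return -COUL * q ** 2 / (2.0 * radius) * (1.0 - 1.0 / eps)

def born_cha(q, radius, delta=0.5, eps=78.5):
    """Asymmetric variant: water hydrates anions more strongly, modeled
    here by an assumed effective-radius shrink for negative charges."""
    r_eff = radius - delta if q < 0 else radius
    return born_energy(q, r_eff, eps)

# Same cavity radius, opposite charge signs: the CHA variant splits them.
for ion, q, r in [("cation", +1.0, 2.0), ("anion", -1.0, 2.0)]:
    print(ion, round(born_cha(q, r), 1), "kcal/mol")
```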
Galerkin finite element scheme for magnetostrictive structures and composites
NASA Astrophysics Data System (ADS)
Kannan, Kidambi Srinivasan
The ever-increasing role of magnetostrictives in actuation and sensing applications is an indication of their importance in the emerging field of smart structures technology. As newer, and more complex, applications are developed, there is a growing need for a reliable computational tool that can effectively address the magneto-mechanical interactions and other nonlinearities in these materials and in structures incorporating them. This thesis presents a continuum level quasi-static, three-dimensional finite element computational scheme for modeling the nonlinear behavior of bulk magnetostrictive materials and particulate magnetostrictive composites. Models for magnetostriction must deal with two sources of nonlinearities: nonlinear body forces/moments in equilibrium equations governing magneto-mechanical interactions in deformable and magnetized bodies; and nonlinear coupled magneto-mechanical constitutive models for the material of interest. In the present work, classical differential formulations for nonlinear magneto-mechanical interactions are recast in integral form using the weighted-residual method. A discretized finite element form is obtained by applying the Galerkin technique. The finite element formulation is based upon three-dimensional eight-noded (isoparametric) brick element interpolation functions and magnetostatic infinite elements at the boundary. Two alternative possibilities are explored for establishing the nonlinear incremental constitutive model: characterization in terms of magnetic field or in terms of magnetization. The former methodology is the one most commonly used in the literature. In this work, a detailed comparative study of both methodologies is carried out. The computational scheme is validated, qualitatively and quantitatively, against experimental measurements published in the literature on structures incorporating the magnetostrictive material Terfenol-D. The influence of nonlinear body forces and body moments of magnetic origin, on the response of magnetostrictive structures to complex mechanical and magnetic loading conditions, is carefully examined. While monolithic magnetostrictive materials have been commercially available since the late eighties, attention in the smart structures research community has recently focussed upon building and using magnetostrictive particulate composite structures for conventional actuation applications and novel sensing methodologies in structural health monitoring. A particulate magnetostrictive composite element has been developed in the present work to model such structures. This composite element incorporates interactions between magnetostrictive particles by combining a numerical micromechanical analysis based on magneto-mechanical Green's functions, with a homogenization scheme based upon the Mori-Tanaka approach. This element has been applied to the simulation of particulate actuators and sensors reported in the literature. Simulation results are compared to experimental data for validation purposes. The computational schemes developed, for bulk materials and for composites, are expected to be of great value to researchers and designers of novel applications based on magnetostrictives.
NASA Technical Reports Server (NTRS)
Ghaffari, Farhad; Biedron, Robert T.; Luckring, James M.
2002-01-01
Turbulent Navier-Stokes computational results are presented for an advanced diamond wing semispan model at low-speed, high-lift conditions. The numerical results are obtained in support of a wind-tunnel test that was conducted in the National Transonic Facility at the NASA Langley Research Center. The model incorporated a generic fuselage and was mounted on the tunnel sidewall using a constant-width non-metric standoff. The computations were performed at nominal approach and landing flow conditions. The computed high-lift flow characteristics for the model both in the tunnel and in a free-air environment are presented. The computed wing pressure distributions agreed well with the measured data, and both indicated a small effect due to the tunnel wall interference. However, the wall interference effects were found to be relatively more pronounced in the measured and computed lift, drag and pitching moment. Although the magnitudes of the computed forces and moment were slightly off compared to the measured data, the increments due to the wall interference effects were predicted reasonably well. The numerical results are also presented on the combined effects of the tunnel sidewall boundary layer and the standoff geometry on the fuselage forebody pressure distributions and the resulting impact on the configuration longitudinal aerodynamic characteristics.
Patient-Specific Simulation of Cardiac Blood Flow From High-Resolution Computed Tomography.
Lantz, Jonas; Henriksson, Lilian; Persson, Anders; Karlsson, Matts; Ebbers, Tino
2016-12-01
Cardiac hemodynamics can be computed from medical imaging data, and results could potentially aid in cardiac diagnosis and treatment optimization. However, simulations are often based on simplified geometries, ignoring features such as papillary muscles and trabeculae due to their complex shape, limitations in image acquisitions, and challenges in computational modeling. This severely hampers the use of computational fluid dynamics in clinical practice. The overall aim of this study was to develop a novel numerical framework that incorporated these geometrical features. The model included the left atrium, ventricle, ascending aorta, and heart valves. The framework used image registration to obtain patient-specific wall motion, automatic remeshing to handle topological changes due to the complex trabeculae motion, and a fast interpolation routine to obtain intermediate meshes during the simulations. Velocity fields and residence time were evaluated, and they indicated that papillary muscles and trabeculae strongly interacted with the blood, which could not be observed in a simplified model. The framework resulted in a model with outstanding geometrical detail, demonstrating the feasibility as well as the importance of a framework that is capable of simulating blood flow in physiologically realistic hearts.
The use of three-parameter rating table lookup programs, RDRAT and PARM3, in hydraulic flow models
Sanders, C.L.
1995-01-01
Subroutines RDRAT and PARM3 enable computer programs such as the BRANCH open-channel unsteady-flow model to route flows through or over combinations of critical-flow sections, culverts, bridges, road-overflow sections, fixed spillways, and/or dams. The subroutines can also obstruct upstream flow to simulate the operation of flapper-type tide gates. A multiplier can be applied by date and time to simulate varying numbers of tide gates being open, or alternative construction scenarios for multiple culverts. The subroutines use three-parameter (headwater, tailwater, and discharge) rating-table lookup methods. These tables may be prepared manually using other programs that do step-backwater computations or compute flow through bridges and culverts or over dams. The subroutines therefore preclude the need to incorporate considerable hydraulic computational code into the client program, and provide complete flexibility for users of the model in routing flow through almost any fixed structure or combination of structures. The subroutines are written in Fortran 77, and have minimal exchange of information with the BRANCH model or other possible client programs. The report documents the interpolation methodology, data input requirements, and software.
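For a fixed structure, a three-parameter rating table reduces to interpolating discharge on a (headwater, tailwater) grid. A hedged sketch of such a lookup follows, using SciPy's regular-grid interpolator; the table values are invented for illustration and are not from RDRAT or PARM3:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical rating table: discharge tabulated against headwater
    # and tailwater elevations; all values are illustrative only.
    headwater = np.array([10.0, 11.0, 12.0])          # ft
    tailwater = np.array([8.0, 9.0, 10.0])            # ft
    discharge = np.array([[120.0,  90.0,  40.0],      # cfs; row i goes
                          [180.0, 150.0, 100.0],      # with headwater[i]
                          [240.0, 210.0, 160.0]])

    lookup = RegularGridInterpolator((headwater, tailwater), discharge)
    print(lookup([[11.5, 8.25]]))   # bilinear interpolation between nodes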
Integrating interactive computational modeling in biology curricula.
Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A
2015-03-01
While the use of computer tools to simulate complex systems such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook-and-memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves, or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new class (In Silico Biology) was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class of students with limited mathematical backgrounds. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and to contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
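Cell Collective models are logical (Boolean) networks; the kind of simulation students run can be sketched in a few lines. The three-gene network below is invented for illustration and is not a curated Cell Collective model:

    # Synchronous Boolean-network update; the regulatory rules are toy
    # assumptions, not a real biological pathway.
    rules = {
        "A": lambda s: not s["C"],          # A is repressed by C
        "B": lambda s: s["A"],              # B is activated by A
        "C": lambda s: s["A"] and s["B"],   # C requires both A and B
    }

    state = {"A": True, "B": False, "C": False}
    for step in range(5):
        state = {gene: rule(state) for gene, rule in rules.items()}
        print(step, state)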
Development of a model to compute the extension of life supporting zones for Earth-like exoplanets.
Neubauer, David; Vrtala, Aron; Leitner, Johannes J; Firneis, Maria G; Hitzenberger, Regina
2011-12-01
A radiative-convective model to calculate the width and location of the life supporting zone (LSZ) for different alternative solvents (i.e., solvents other than water) is presented. This model can be applied to the atmospheres of the terrestrial planets in the solar system as well as to (hypothetical, Earth-like) terrestrial exoplanets. Cloud droplet formation and growth are investigated using a cloud parcel model, and clouds can be incorporated into the radiative transfer calculations. Test runs for Earth, Mars and Titan show good agreement between model results and observations.
Energy-economic policy modeling
NASA Astrophysics Data System (ADS)
Sanstad, Alan H.
2018-01-01
Computational models based on economic principles and methods are powerful tools for understanding and analyzing problems in energy and the environment and for designing policies to address them. Among their other features, some current models of this type incorporate information on sustainable energy technologies and can be used to examine their potential role in addressing the problem of global climate change. The underlying principles and the characteristics of the models are summarized, and examples of this class of model and their applications are presented. Modeling epistemology and related issues are discussed, as well as critiques of the models. The paper concludes with remarks on the evolution of the models and possibilities for their continued development.
The influence of placental metabolism on fatty acid transfer to the fetus
Perazzolo, Simone; Hirschmugl, Birgit; Wadsack, Christian; Desoye, Gernot; Lewis, Rohan M.; Sengers, Bram G.
2017-01-01
The factors determining fatty acid transfer across the placenta are not fully understood. This study used a combined experimental and computational modeling approach to explore placental transfer of nonesterified fatty acids and identify the rate-determining processes. Isolated perfused human placenta was used to study the uptake and transfer of 13C-fatty acids and the release of endogenous fatty acids. Only 6.2 ± 0.8% of the maternal 13C-fatty acids taken up by the placenta was delivered to the fetal circulation. Of the unlabeled fatty acids released from endogenous lipid pools, 78 ± 5% was recovered in the maternal circulation and 22 ± 5% in the fetal circulation. Computational modeling indicated that fatty acid metabolism was necessary to explain the discrepancy between uptake and delivery of 13C-fatty acids. Without metabolism, the model overpredicts the fetal delivery of 13C-fatty acids 15-fold. Metabolic rate was predicted to be the main determinant of uptake from the maternal circulation. The microvillous membrane had a greater fatty acid transport capacity than the basal membrane. This study suggests that incorporation of fatty acids into placental lipid pools may modulate their transfer to the fetus. Future work needs to focus on the factors regulating fatty acid incorporation into lipid pools. PMID:27913585
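Although the published model is not reproduced here, a toy compartment sketch shows why a metabolic sink depresses fetal delivery: transfer and metabolism compete for the placental pool, so the delivered fraction is roughly k_transfer / (k_transfer + k_metabolism). All rate constants below are assumptions chosen only to illustrate the mechanism:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy tracer model: maternal -> placental pool, which feeds both
    # fetal transfer and a metabolic sink (rate constants are assumed).
    k_up, k_tf, k_met = 1.0, 0.05, 0.7       # 1/h

    def rhs(t, y):
        maternal, placenta, fetus, metabolized = y
        return [-k_up * maternal,
                k_up * maternal - (k_tf + k_met) * placenta,
                k_tf * placenta,
                k_met * placenta]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
    m, p, f, met = sol.y[:, -1]
    print(f"fetal fraction of uptake: {f / (f + p + met):.1%}")  # ~6.7%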
Thermal Analysis of the PediaFlow pediatric ventricular assist device.
Gardiner, Jeffrey M; Wu, Jingchun; Noh, Myounggyu D; Antaki, James F; Snyder, Trevor A; Paden, David B; Paden, Brad E
2007-01-01
Accurate modeling of heat dissipation in pediatric intracorporeal devices is crucial in avoiding tissue and blood thermotrauma. Thermal models of new Maglev ventricular assist device (VAD) concepts for the PediaFlow VAD are developed by incorporating empirical heat transfer equations with thermal finite element analysis (FEA). The models assume three main sources of waste heat generation: copper motor windings, active magnetic thrust bearing windings, and eddy currents generated within the titanium housing due to the two-pole motor. Waste heat leaves the pump by convection into blood passing through the pump and conduction through surrounding tissue. Coefficients of convection are calculated and assigned locally along fluid path surfaces of the three-dimensional pump housing model. FEA thermal analysis yields a three-dimensional temperature distribution for each of the three candidate pump models. Thermal impedances from the motor and thrust bearing windings to tissue and blood contacting surfaces are estimated based on maximum temperature rise at respective surfaces. A new updated model for the chosen pump topology is created incorporating computational fluid dynamics with empirical fluid and heat transfer equations. This model represents the final geometry of the first generation prototype, incorporates eddy current heating, and has 60 discrete convection regions. Thermal analysis is performed at nominal and maximum flow rates, and temperature distributions are plotted. Results suggest that the pump will not exceed a temperature rise of 2 degrees C during normal operation.
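As a hedged back-of-envelope companion to such an analysis, the lumped relation "temperature rise = waste heat x thermal impedance" can be checked in a few lines; none of the numbers below are PediaFlow values:

    # Lumped thermal-impedance check (all values are assumptions).
    q_motor = 0.8      # W, waste heat in the motor windings
    r_blood = 1.5      # K/W, windings-to-blood-surface impedance
    q_eddy = 0.3       # W, eddy-current heating in the housing
    r_tissue = 4.0     # K/W, housing-to-tissue impedance

    print("blood-surface rise :", q_motor * r_blood, "K")
    print("tissue-surface rise:", q_eddy * r_tissue, "K")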
Stone, David E; Haswell, Elizabeth S; Sztul, Elizabeth
2017-01-01
In classical Cell Biology, fundamental cellular processes are revealed empirically, one experiment at a time. While this approach has been enormously fruitful, our understanding of cells is far from complete. In fact, the more we know, the more keenly we perceive our ignorance of the profoundly complex and dynamic molecular systems that underlie cell structure and function. Thus, it has become apparent to many cell biologists that experimentation alone is unlikely to yield major new paradigms, and that empiricism must be combined with theory and computational approaches to yield major new discoveries. To facilitate those discoveries, three workshops will convene annually for one day in three successive summers (2017-2019) to promote the use of computational modeling by cell biologists currently unconvinced of its utility or unsure how to apply it. The first of these workshops was held at the University of Illinois, Chicago in July 2017. Organized to facilitate interactions between traditional cell biologists and computational modelers, it provided a unique educational opportunity: a primer on how cell biologists with little or no relevant experience can incorporate computational modeling into their research. Here, we report on the workshop and describe how it addressed key issues that cell biologists face when considering modeling, including: (1) Is my project appropriate for modeling? (2) What kind of data do I need to model my process? (3) How do I find a modeler to help me integrate modeling approaches into my work? And, perhaps most importantly, (4) Why should I bother?
NASA Astrophysics Data System (ADS)
Stovern, Michael; Felix, Omar; Csavina, Janae; Rine, Kyle P.; Russell, MacKenzie R.; Jones, Robert M.; King, Matt; Betterton, Eric A.; Sáez, A. Eduardo
2014-09-01
Mining operations are potential sources of airborne particulate metal and metalloid contaminants through both direct smelter emissions and wind erosion of mine tailings. The warmer, drier conditions predicted for the Southwestern US by climate models may make contaminated atmospheric dust and aerosols increasingly important, due to potential deleterious effects on human health and ecology. Dust emissions and dispersion of dust and aerosol from the Iron King Mine tailings in Dewey-Humboldt, Arizona, a Superfund site, are currently being investigated through in situ field measurements and computational fluid dynamics modeling. These tailings are heavily contaminated with lead and arsenic. Using a computational fluid dynamics model, we model dust transport from the mine tailings to the surrounding region. The model includes gaseous plume dispersion to simulate the transport of the fine aerosols, while individual particle transport is used to track the trajectories of larger particles and to monitor their deposition locations. In order to improve the accuracy of the dust transport simulations, both regional topographical features and local weather patterns have been incorporated into the model simulations. Results show that local topography and wind velocity profiles are the major factors that control deposition.
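A hedged, minimal illustration of the individual-particle transport described above advects one particle in an assumed logarithmic wind profile with a constant settling velocity; none of the values are from the Iron King study:

    import numpy as np

    z0, u_star, kappa = 0.01, 0.4, 0.41   # roughness length (m), friction
                                          # velocity (m/s), von Karman const.
    def wind_speed(z):
        return (u_star / kappa) * np.log(max(z, z0) / z0)

    v_settle = 0.05                       # m/s, settling velocity (assumed)
    dt = 0.1                              # s
    x, z = 0.0, 10.0                      # released 10 m above the surface

    while z > 0.0:
        x += wind_speed(z) * dt           # horizontal advection by the wind
        z -= v_settle * dt                # gravitational settling

    print(f"particle deposits about {x:.0f} m downwind")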
A Research Roadmap for Computation-Based Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.
Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostadin, Damevski
A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.
Transonic Flow Field Analysis for Wing-Fuselage Configurations
NASA Technical Reports Server (NTRS)
Boppe, C. W.
1980-01-01
A computational method for simulating the aerodynamics of wing-fuselage configurations at transonic speeds is developed. The finite difference scheme is characterized by a multiple embedded mesh system coupled with a modified or extended small-disturbance flow equation. This approach permits a high degree of computational resolution in addition to coordinate system flexibility for treating complex, realistic aircraft shapes. To augment the analysis method and permit applications to a wide range of practical engineering design problems, an arbitrary fuselage geometry modeling system is incorporated, as well as a methodology for computing wing viscous effects. Configuration drag is broken down into its friction, wave, and lift-induced components. Typical computed results for isolated bodies, isolated wings, and wing-body combinations are presented, and the results are correlated with experimental data. A computer code which employs this methodology is described.
Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio
2010-01-01
In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search of new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases in order to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we will review the parallel evolution and complementarities of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories by using FBDD.
The development of computer networks: First results from a microeconomic model
NASA Astrophysics Data System (ADS)
Maier, Gunther; Kaufmann, Alexander
Computer networks like the Internet are gaining importance in social and economic life. The accelerating pace of the adoption of network technologies for business purposes is a rather recent phenomenon, and many applications are still in an early, sometimes even experimental, phase. Nevertheless, it seems certain that networks will change the socioeconomic structures we know today. This is the background for our special interest in the development of networks, in the role of spatial factors influencing their formation, in the consequences of networks for spatial structures, and in the role of externalities. This paper discusses a simple economic model - based on a microeconomic calculus - that incorporates the main factors generating the growth of computer networks. The paper provides analytic results about the generation of computer networks, discussing (1) under what conditions economic factors will initiate the process of network formation, (2) the relationship between individual and social evaluation, and (3) the efficiency of a network generated by economic mechanisms.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2015-08-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
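The published method is more refined than can be shown here, but its core ingredient, inexpensive one-at-a-time parameter perturbations that rank parameters by their effect on the output, can be sketched as follows (the toy model, repetition count, and selection threshold are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def model(p):
        # Toy model: strongly sensitive to p[0] and p[1], weakly to the rest.
        return 5.0 * p[0] + 3.0 * p[1] ** 2 + 0.01 * p[2:].sum()

    n_par, n_rep, delta = 10, 10, 0.1
    effects = np.zeros((n_rep, n_par))
    for r in range(n_rep):
        base = rng.uniform(0.0, 1.0 - delta, n_par)
        f0 = model(base)
        for i in range(n_par):            # one-at-a-time perturbations
            pert = base.copy()
            pert[i] += delta
            effects[r, i] = abs(model(pert) - f0) / delta

    mu = effects.mean(axis=0)             # mean absolute elementary effect
    print("informative parameters:", np.where(mu > 0.1 * mu.max())[0])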
[Virtual reality in ophthalmological education].
Wagner, C; Schill, M; Hennen, M; Männer, R; Jendritza, B; Knorz, M C; Bender, H J
2001-04-01
We present a computer-based medical training workstation for the simulation of intraocular eye surgery. The surgeon manipulates two original instruments inside a mechanical model of the eye. The instrument positions are tracked by CCD cameras and monitored by a PC, which renders the scenery using a computer-graphics model of the eye and the instruments. The simulator incorporates a model of the operating table, a mechanical eye, three CCD cameras for position tracking, the stereo display, and a computer. The three cameras are mounted under the operating table, from where they can observe the interior of the mechanical eye. Using small markers, the cameras recognize the instruments and the eye; their position and orientation in space are determined by stereoscopic back-projection. The simulation runs at more than 20 frames per second and provides a realistic impression of the surgery. It includes the cold-light source, which can be moved inside the eye, and the shadow of the instruments on the retina, which is important for navigation.
NASA Technical Reports Server (NTRS)
Nesbitt, James A.
2000-01-01
A finite-difference computer program (COSIM) has been written which models the one-dimensional diffusional transport associated with high-temperature oxidation and interdiffusion of overlay-coated substrates. The program predicts concentration profiles for up to three elements in the coating and substrate after various oxidation exposures. Surface recession due to solute loss is also predicted. Ternary cross terms and concentration-dependent diffusion coefficients are taken into account. The program also incorporates a previously developed oxide growth and spalling model to simulate either isothermal or cyclic oxidation exposures. In addition to predicting concentration profiles after various oxidation exposures, the program can also be used to predict coating life based on a concentration-dependent failure criterion (e.g., surface solute content drops to two percent). The computer code, written in an extension of FORTRAN 77, employs numerous subroutines to make the program flexible and easily modifiable for other coating oxidation problems.
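COSIM itself is FORTRAN; as a hedged illustration of the underlying numerics, a minimal explicit finite-difference step for one-dimensional diffusion with a concentration-dependent coefficient is sketched below. The grid, diffusivity law, and zero-flux boundaries are assumptions, and surface recession and oxide spalling are omitted:

    import numpy as np

    nx, dx = 101, 1.0e-6                  # 100 um domain, 1 um spacing (m)
    dt, nsteps = 1.0, 100_000             # ~28 h of simulated exposure (s)
    c = np.zeros(nx)
    c[:20] = 0.20                         # solute-rich coating, lean substrate

    def diffusivity(c):
        return 1.0e-16 * (1.0 + 4.0 * c)  # m^2/s, concentration-dependent

    for _ in range(nsteps):
        d = diffusivity(0.5 * (c[1:] + c[:-1]))   # midpoint diffusivities
        flux = -d * np.diff(c) / dx               # Fick's first law
        c[1:-1] -= dt * np.diff(flux) / dx        # conservative update
        c[0], c[-1] = c[1], c[-2]                 # zero-flux boundaries

    print(f"coating surface solute content: {c[0]:.3f}")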
The New Zealand gravimetric quasigeoid model 2017 that incorporates nationwide airborne gravimetry
NASA Astrophysics Data System (ADS)
McCubbine, J. C.; Amos, M. J.; Tontini, F. C.; Smith, E.; Winefied, R.; Stagpoole, V.; Featherstone, W. E.
2017-12-01
A one arc-minute resolution gravimetric quasigeoid model has been computed for New Zealand, covering the region 25°S–60°S, 160°E–170°W. It was calculated by Wong and Gore modified Stokes integration using the remove-compute-restore technique with the EIGEN-6C4 global gravity model as the reference field. The gridded gravity data used for the computation consisted of 40,677 land gravity observations, satellite altimetry-derived marine gravity anomalies, historical shipborne marine gravity observations and, importantly, approximately one million new airborne gravity observations. The airborne data were collected with the specific intention of remedying the shortcomings of the existing data in areas of rough topography inaccessible to land gravimetry and in coastal areas where shipborne gravimetry cannot be collected and altimeter-derived gravity anomalies are generally poor. The new quasigeoid has a nominal precision of ±48 mm in comparison with GPS-levelling data, approximately 14 mm smaller than that of its predecessor, NZGeoid09.
Variable-Complexity Multidisciplinary Optimization on Parallel Computers
NASA Technical Reports Server (NTRS)
Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.
1998-01-01
This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations, thereby demonstrating the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of a complex aircraft configuration.
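As a sketch of the multi-fidelity response-surface idea (not the grant's actual implementation), the example below corrects a cheap analysis with a quadratic surface fitted to a handful of expensive evaluations; both "analyses" are stand-in functions:

    import numpy as np

    rng = np.random.default_rng(1)

    def cheap(x):        # stand-in for a linear-theory / coarse analysis
        return x[..., 0] ** 2 + x[..., 1]

    def expensive(x):    # stand-in for an Euler / refined-FE analysis
        return cheap(x) + 0.3 * np.sin(3.0 * x[..., 0])

    # Fit a quadratic response surface to the cheap-to-expensive correction.
    X = rng.uniform(-1.0, 1.0, (15, 2))
    y = expensive(X) - cheap(X)
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def surrogate(x):
        a = np.array([1.0, x[0], x[1], x[0] ** 2, x[0] * x[1], x[1] ** 2])
        return cheap(np.asarray(x)) + a @ coef

    x_test = np.array([0.2, -0.4])
    print(surrogate(x_test), expensive(x_test))   # should agree closely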
Why is a computational framework for motivational and metacognitive control needed?
NASA Astrophysics Data System (ADS)
Sun, Ron
2018-01-01
This paper discusses, in the context of computational modelling and simulation of cognition, the relevance of deeper structures in the control of behaviour. Such deeper structures include motivational control of behaviour, which provides underlying causes for actions, and also metacognitive control, which provides higher-order processes for monitoring and regulation. It is argued that such deeper structures are important and thus cannot be ignored in computational cognitive architectures. A general framework based on the Clarion cognitive architecture is outlined that emphasises the interaction amongst action selection, motivation, and metacognition. The upshot is that it is necessary to incorporate all essential processes; short of that, the understanding of cognition can only be incomplete.
Biomaterial science meets computational biology.
Hutmacher, Dietmar W; Little, J Paige; Pettet, Graeme J; Loessner, Daniela
2015-05-01
There is a pressing need for a predictive tool capable of revealing a holistic understanding of fundamental elements in the normal and pathological cell physiology of organoids in order to decipher the mechanoresponse of cells. Therefore, the integration of a systems bioengineering approach into a validated mathematical model is necessary to develop a new simulation tool. This tool can only be innovative by combining biomaterials science with computational biology. Systems-level and multi-scale experimental data are incorporated into a single framework, thus representing both single cells and collective cell behaviour. Such a computational platform needs to be validated in order to discover key mechano-biological factors associated with cell-cell and cell-niche interactions.
High performance MRI simulations of motion on multi-GPU systems
2014-01-01
Background: MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high-performance multi-GPU environment. Methods: Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed, and myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times and avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results: Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were achieved through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. Conclusions: MRISIMUL is the first MR physics simulator to have implemented motion with a large 3D computational load on a single-computer multi-GPU configuration. The incorporation of realistic motion models, such as cardiac motion, respiratory motion and flow, may benefit the design and optimization of existing or new MR pulse sequences, protocols and algorithms that examine motion-related MR applications. PMID:24996972
Toward an in-situ analytics and diagnostics framework for earth system models
NASA Astrophysics Data System (ADS)
Anantharaj, Valentine; Wolf, Matthew; Rasch, Philip; Klasky, Scott; Williams, Dean; Jacob, Rob; Ma, Po-Lun; Kuo, Kwo-Sen
2017-04-01
The development roadmaps for many earth system models (ESMs) aim for a globally cloud-resolving model targeting the pre-exascale and exascale systems of the future. The ESMs will also incorporate more complex physics, chemistry and biology, thereby vastly increasing the fidelity of the information content simulated by the model. We will then be faced with an unprecedented volume of simulation output that would need to be processed and analyzed concurrently in order to derive valuable scientific results. We are already at this threshold with the current generation of ESMs at higher resolutions. Currently, the nominal I/O throughput in the Community Earth System Model (CESM) via the Parallel IO (PIO) library is around 100 MB/s. The high-frequency I/O requirements add roughly 1 GB per simulated hour, translating to roughly 4 minutes of wallclock time per simulated day, hence 24.33 wallclock hours per simulated model year and 1,752,000 core-hours of charge per simulated model year on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. There is also a pending need for 3X more simulation output volume. Meanwhile, many ESMs use instrument simulators to run forward models that compare model simulations against satellite and ground-based instruments, such as radars and radiometers. The CFMIP Observation Simulator Package (COSP) is used in CESM as well as in the Accelerated Climate Model for Energy (ACME), one of the ESMs specifically targeting current and emerging leadership-class computing platforms. These simulators can be computationally expensive, accounting for as much as 30% of the computational cost. Hence the data are often written to output files that are then used for offline calculations. Again, the I/O bottleneck becomes a limitation. Detection and attribution studies also use large volumes of data for pattern recognition and feature extraction to analyze weather and climate phenomena such as tropical cyclones, atmospheric rivers, and blizzards. It is evident that ESMs need an in-situ framework to decouple the diagnostics and analytics from the prognostics and physics computations of the models, so that the diagnostic computations can be performed concurrently without limiting model throughput. We are designing a science-driven online analytics framework for earth system models. Our approach is to adopt several data workflow technologies, such as the Adaptable IO System (ADIOS) being developed under the U.S. Exascale Computing Project (ECP), and to integrate these to allow for extreme-performance I/O, in situ workflow integration, and science-driven analytics and visualization, all in an easy-to-use computational framework. This will allow science teams to write data 100-1000 times faster and to move seamlessly from post-processing the output for validation and verification purposes to performing these calculations in situ. We can readily envision a near-term future where earth system models like ACME and CESM will have to address not only the volume of data but also the velocity of the data. The earth system models of the future, as they incorporate more complex physics at higher resolutions in the exascale era, will be able to analyze more simulation content without having to compromise targeted model throughput.
Extended MHD modeling of nonlinear instabilities in fusion and space plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Germaschewski, Kai
A number of different sub-projects were pursued within this DOE early career project. The primary focus was on fully nonlinear, curvilinear, extended MHD simulations of instabilities with applications to fusion and space plasmas. In particular, we performed comprehensive studies of the dynamics of the double tearing mode in different regimes and configurations, using Cartesian and cylindrical geometry and investigating both linear and nonlinear dynamics. In addition to traditional extended MHD involving the Hall term and the electron pressure gradient, we also employed a new multi-fluid moment model, which shows great promise for incorporating kinetic effects, in particular off-diagonal elements of the pressure tensor, in a fluid model that is naturally much cheaper computationally than fully kinetic particle or Vlasov simulations. We used our Vlasov code for detailed studies of how weak collisions affect plasma echoes. In addition, we played an important supporting role working with the PPPL theory group around Will Fox and Amitava Bhattacharjee, providing simulation support for HED plasma experiments performed at high-powered laser facilities like OMEGA-EP in Rochester, NY. This project has supported a number of computational advances in our fluid and kinetic plasma models, and has been crucial to winning multiple INCITE computer time awards that supported our computational modeling.
NASA Technical Reports Server (NTRS)
Neitzel, G. P.
1993-01-01
This project was concerned with the determination of conditions of guaranteed stability and instability for thermocapillary convection in a model of the float-zone crystal-growth process. This model, referred to as the half-zone, was studied extensively, both experimentally and theoretically. Our own earlier research determined, using energy-stability theory, sufficient conditions for stability to axisymmetric disturbances. Nearly all results computed were for the case of a liquid with Prandtl number Pr = 1. Attempts to compute cases for higher Prandtl numbers, to allow comparison with the experimental results of other researchers, were unsuccessful, but indicated that the condition guaranteeing stability against axisymmetric disturbances would be a value of the Marangoni number (Ma) significantly higher than that at which oscillatory convection was observed experimentally. Thus, additional results were needed to round out the stability picture for this model problem. The research performed under this grant consisted of the following: (1) computation of energy-stability limits for non-axisymmetric disturbances; (2) computation of linear-stability limits for axisymmetric and non-axisymmetric disturbances; (3) numerical simulation of the basic state for half- and full-zones with a deformable free surface; and (4) incorporation of radiation heat transfer into a model energy-stability problem. Each of these is summarized briefly below.
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
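For the two-body case the STM can be propagated alongside the state using the variational equations. The sketch below does so with SciPy's DOP853 integrator (an adaptive eighth-order Dormand-Prince method); it illustrates the technique only and is not the Goddard tool chain:

    import numpy as np
    from scipy.integrate import solve_ivp

    mu = 398600.4418   # km^3/s^2, Earth gravitational parameter

    def eom(t, y):
        # Two-body state (r, v) plus the 6x6 STM, flattened.
        r, v, phi = y[:3], y[3:6], y[6:].reshape(6, 6)
        rn = np.linalg.norm(r)
        G = mu * (3.0 * np.outer(r, r) / rn ** 5 - np.eye(3) / rn ** 3)
        A = np.zeros((6, 6))
        A[:3, 3:] = np.eye(3)              # d(rdot)/dv
        A[3:, :3] = G                      # d(vdot)/dr, gravity gradient
        return np.concatenate([v, -mu * r / rn ** 3, (A @ phi).ravel()])

    y0 = np.concatenate([[7000.0, 0.0, 0.0],       # km, near-circular LEO
                         [0.0, 7.546, 0.0],        # km/s
                         np.eye(6).ravel()])
    sol = solve_ivp(eom, (0.0, 3600.0), y0, method="DOP853",
                    rtol=1e-10, atol=1e-10)
    stm = sol.y[6:, -1].reshape(6, 6)
    print(stm[0, 3])   # sensitivity of final x to initial x-velocity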
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nascimento, Daniel R.; DePrince, A. Eugene, E-mail: deprince@chem.fsu.edu
2015-12-07
We present a combined cavity quantum electrodynamics/ab initio electronic structure approach for simulating plasmon-molecule interactions in the time domain. The simple Jaynes-Cummings-type model Hamiltonian typically utilized in such simulations is replaced with one in which the molecular component of the coupled system is treated in a fully ab initio way, resulting in a computationally efficient description of general plasmon-molecule interactions. Mutual polarization effects are easily incorporated within a standard ground-state Hartree-Fock computation, and time-dependent simulations carry the same formal computational scaling as real-time time-dependent Hartree-Fock theory. As a proof of principle, we apply this generalized method to the emergence of a Fano-like resonance in coupled molecule-plasmon systems; this feature is quite sensitive to the nanoparticle-molecule separation and the orientation of the molecule relative to the polarization of the external electric field.
Analysis of rotor vibratory loads using higher harmonic pitch control
NASA Technical Reports Server (NTRS)
Quackenbush, Todd R.; Bliss, Donald B.; Boschitsch, Alexander H.; Wachspress, Daniel A.
1992-01-01
Experimental studies of isolated rotors in forward flight have indicated that higher harmonic pitch control can reduce rotor noise. These tests also show that such pitch inputs can generate substantial vibratory loads. This paper summarizes the modification of the RotorCRAFT (Computation of Rotor Aerodynamics in Forward flighT) analysis of isolated rotors to study the vibratory loading generated by high-frequency pitch inputs. The original RotorCRAFT code was developed for use in the computation of such loading, and uses a highly refined rotor wake model to facilitate this task. The extended version of RotorCRAFT incorporates a variety of new features, including: arbitrary periodic root pitch control; computation of blade stresses and hub loads; improved modeling of near-wake unsteady effects; and a preliminary implementation of a coupled prediction of rotor airloads and noise. Correlation studies are carried out with existing blade stress and vibratory hub load data to assess the performance of the extended code.
Optimizing Integrated Terminal Airspace Operations Under Uncertainty
NASA Technical Reports Server (NTRS)
Bosson, Christabelle; Xue, Min; Zelinski, Shannon
2014-01-01
In the terminal airspace, integrated departures and arrivals have the potential to increase operational efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
A Machine Learning Approach to Student Modeling.
1984-05-01
This report views student modeling as a problem in machine learning and describes ACM, a student modeling system that incorporates this approach. This system begins with a set of overly general rules, which it uses to search a problem space until it arrives at the same answer as the student. The ACM computer program then uses the solution path it has discovered to determine positive and negative instances of its initial rules, and employs a discrimination learning mechanism to place additional conditions on these rules. The revised rules will reproduce the solution path without search, and constitute a cognitive model of the student.
Efficient calculation of atomic rate coefficients in dense plasmas
NASA Astrophysics Data System (ADS)
Aslanyan, Valentin; Tallents, Greg J.
2017-03-01
Modelling electron statistics in a cold, dense plasma by the Fermi-Dirac distribution leads to complications in the calculations of atomic rate coefficients. The Pauli exclusion principle slows down the rate of collisions as electrons must find unoccupied quantum states and adds a further computational cost. Methods to calculate these coefficients by direct numerical integration with a high degree of parallelism are presented. This degree of optimization allows the effects of degeneracy to be incorporated into a time-dependent collisional-radiative model. Example results from such a model are presented.
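A toy numerical version of such a rate integral, with the Pauli blocking factor applied to the final state, is sketched below; the constant cross section and all parameter values are assumptions, not a real atomic model:

    import numpy as np

    kT, mu = 10.0, 15.0            # eV; partially degenerate conditions
    dE = 20.0                      # eV, transition threshold energy

    def f_fd(E):                   # Fermi-Dirac occupation
        return 1.0 / (np.exp((E - mu) / kT) + 1.0)

    E = np.linspace(dE, dE + 300.0, 6000)   # incident energies above threshold
    w = E[1] - E[0]
    v = np.sqrt(E)                 # speed ~ sqrt(energy), arbitrary units
    sigma = 1.0                    # toy constant cross section

    blocking = 1.0 - f_fd(E - dE)  # scattered electron needs a free state
    rate_blocked = np.sum(sigma * v * f_fd(E) * blocking) * w
    rate_free = np.sum(sigma * v * f_fd(E)) * w
    print(f"Pauli blocking slows this rate by {1 - rate_blocked/rate_free:.1%}")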
Chartier, Sylvain; Proulx, Robert
2005-11-01
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model is able to develop fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. These performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.
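The paper's exact learning and transmission rules are not reproduced here; a generic Hebbian/anti-Hebbian sketch with a saturating nonlinear transmission rule conveys the flavor (the network size, learning rate, and nonlinearity are all assumptions):

    import numpy as np

    rng = np.random.default_rng(2)
    n, eta, delta = 16, 0.02, 0.4
    W = np.zeros((n, n))

    def transmit(x):
        # Saturating nonlinear transmission rule (an assumption).
        return np.clip((1.0 + delta) * x - delta * x ** 3, -1.0, 1.0)

    patterns = [rng.choice([-1.0, 1.0], n) for _ in range(3)]
    for _ in range(200):
        x = patterns[rng.integers(3)]
        y = transmit(W @ x)
        W += eta * (np.outer(x, x) - np.outer(y, y))  # Hebbian - anti-Hebbian

    cue = patterns[0] * np.where(rng.random(n) < 0.2, -1.0, 1.0)  # noisy probe
    for _ in range(10):
        cue = transmit(W @ cue)                       # iterate to an attractor
    print("recall overlap:", np.mean(np.sign(cue) == np.sign(patterns[0])))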
In Praise of Numerical Computation
NASA Astrophysics Data System (ADS)
Yap, Chee K.
Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well known to numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision and approximation. By various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithms’ design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.
CSciBox: An Intelligent Assistant for Dating Ice and Sediment Cores
NASA Astrophysics Data System (ADS)
Finlinson, K.; Bradley, E.; White, J. W. C.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; Jones, T. R.; Lindsay, C. M.; Israelsen, B.
2015-12-01
CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives. It incorporates a number of data-processing and visualization facilities, ranging from simple interpolation to reservoir-age correction and 14C calibration via the Calib algorithm, as well as a number of firn and ice-flow models. It employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form, and offers the user access to those data and computational elements via a modern graphical user interface (GUI). In the case of truly large data or computations, CSciBox is parallelizable across modern multi-core processors, or clusters, or even the cloud. The code is open source and freely available on github, as are one-click installers for various versions of Windows and Mac OSX. The system's architecture allows users to incorporate their own software in the form of computational components that can be built smoothly into CSciBox workflows, taking advantage of CSciBox's GUI, data importing facilities, and plotting capabilities. To date, BACON and StratiCounter have been integrated into CSciBox as embedded components. The user can manipulate and compose all of these tools and facilities as she sees fit. Alternatively, she can employ CSciBox's automated reasoning engine, which uses artificial intelligence techniques to explore the gamut of age models and cross-dating scenarios automatically. The automated reasoning engine captures the knowledge of expert geoscientists, and can output a description of its reasoning.
Computational representation of the aponeuroses as NURBS surfaces in 3D musculoskeletal models.
Wu, Florence T H; Ng-Thow-Hing, Victor; Singh, Karan; Agur, Anne M; McKee, Nancy H
2007-11-01
Computational musculoskeletal (MSK) models - 3D graphics-based models that accurately simulate the anatomical architecture and/or the biomechanical behaviour of organ systems consisting of skeletal muscles, tendons, ligaments, cartilage and bones - are valued biomedical tools, with applications ranging from pathological diagnosis to surgical planning. However, current MSK models are often limited by their oversimplifications in anatomical geometries, sometimes lacking discrete representations of connective tissue components entirely, which ultimately affect their accuracy in biomechanical simulation. In particular, the aponeuroses - the flattened fibrous connective sheets connecting muscle fibres to tendons - have never been geometrically modeled. The initiative was thus to extend Anatomy3D - a previously developed software bundle for reconstructing muscle fibre architecture - to incorporate aponeurosis-modeling capacity. Two different algorithms for aponeurosis reconstruction were written in the MEL scripting language of the animation software Maya 6.0, using its NURBS (non-uniform rational B-splines) modeling functionality for aponeurosis surface representation. Both algorithms were validated qualitatively against anatomical and functional criteria.
Zhang, Yu; Prakash, Edmond C; Sung, Eric
2004-01-01
This paper presents a new physically-based 3D facial model, grounded in anatomical knowledge, which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge-repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local resolution, depending on local deformation, where potential inaccuracies are detected. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
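A single-degree-of-freedom illustration of the nonlinear-spring and semi-implicit ideas (not the paper's multilayer model) advances one cubic-hardening, damped spring by symplectic Euler; the mass, stiffness, and damping values are assumptions:

    # Semi-implicit (symplectic Euler) step for one nonlinear spring.
    def spring_force(strain, rate, k1=30.0, k3=400.0, c=0.5):
        # Cubic hardening mimics soft tissue's nonlinear stiffening.
        return -(k1 * strain + k3 * strain ** 3) - c * rate

    m, dt = 0.001, 1.0e-3     # nodal mass (kg), time step (s)
    x, v = 0.05, 0.0          # initial stretch (m) and velocity
    for _ in range(100):
        v += dt * spring_force(x, v) / m   # velocity first (semi-implicit)
        x += dt * v                        # then position with new velocity
    print(f"displacement after 0.1 s: {x:.4f} m")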
Automated Assistance for Designing Active Magnetic Bearings
NASA Technical Reports Server (NTRS)
Imlach, Joseph
2008-01-01
MagBear12 is a computer code that assists in the design of radial, heteropolar active magnetic bearings (AMBs). MagBear12 was developed to help in designing the system described in "Advanced Active-Magnetic-Bearing Thrust-Measurement System". Beyond this initial application, MagBear12 is expected to be useful for designing AMBs for a variety of rotating machinery. This program incorporates design rules and governing equations that are also implemented in other, proprietary design software used by AMB manufacturers. In addition, this program incorporates an advanced unpublished fringing-magnetic-field model that increases accuracy beyond that offered by the other AMB-design software.
Mixtures of GAMs for habitat suitability analysis with overdispersed presence / absence data
Pleydell, David R.J.; Chrétien, Stéphane
2009-01-01
A new approach to species distribution modelling based on unsupervised classification via a finite mixture of GAMs incorporating habitat suitability curves is proposed. A tailored EM algorithm is outlined for computing maximum likelihood estimates. Several submodels incorporating various parameter constraints are explored. Simulation studies confirm that, under certain constraints, the habitat suitability curves are recovered with good precision. The method is also applied to a set of real data concerning presence/absence of observable small mammal indices collected on the Tibetan plateau. The resulting classification was found to correspond to species-level differences in habitat preference described in previous ecological work. PMID:20401331
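The EM mechanics can be shown in miniature with a far simpler stand-in for a mixture of GAMs: a two-component mixture of Bernoulli presence probabilities. All data and starting values below are simulated assumptions:

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulate presence/absence counts at sites drawn from two groups.
    n_sites, n_obs = 200, 10
    group = rng.random(n_sites) < 0.4
    counts = rng.binomial(n_obs, np.where(group, 0.8, 0.2))

    w, p = np.array([0.5, 0.5]), np.array([0.3, 0.6])  # initial guesses
    for _ in range(50):
        # E-step: responsibility of each component for each site.
        like = np.array([w[k] * p[k] ** counts * (1 - p[k]) ** (n_obs - counts)
                         for k in range(2)])
        resp = like / like.sum(axis=0)
        # M-step: update mixing weights and component probabilities.
        w = resp.mean(axis=1)
        p = (resp @ counts) / (resp.sum(axis=1) * n_obs)

    print("mixing weights:", w.round(2), "presence probs:", p.round(2))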
Finding the target sites of RNA-binding proteins
Li, Xiao; Kazan, Hilal; Lipshitz, Howard D; Morris, Quaid D
2014-01-01
RNA–protein interactions differ from DNA–protein interactions because of the central role of RNA secondary structure. Some RNA-binding domains (RBDs) recognize their target sites mainly by their shape and geometry and others are sequence-specific but are sensitive to secondary structure context. A number of small- and large-scale experimental approaches have been developed to measure RNAs associated in vitro and in vivo with RNA-binding proteins (RBPs). Generalizing outside of the experimental conditions tested by these assays requires computational motif finding. Often RBP motif finding is done by adapting DNA motif finding methods; but modeling secondary structure context leads to better recovery of RBP-binding preferences. Genome-wide assessment of mRNA secondary structure has recently become possible, but these data must be combined with computational predictions of secondary structure before they add value in predicting in vivo binding. There are two main approaches to incorporating structural information into motif models: supplementing primary sequence motif models with preferred secondary structure contexts (e.g., MEMERIS and RNAcontext) and directly modeling secondary structure recognized by the RBP using stochastic context-free grammars (e.g., CMfinder and RNApromo). The former better reconstruct known binding preferences for sequence-specific RBPs but are not suitable for modeling RBPs that recognize shape and geometry of RNAs. Future work in RBP motif finding should incorporate interactions between multiple RBDs and multiple RBPs in binding to RNA. WIREs RNA 2014, 5:111–130. doi: 10.1002/wrna.1201 PMID:24217996
A potential-energy scaling model to simulate the initial stages of thin-film growth
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Outlaw, R. A.; Walker, G. H.
1983-01-01
A solid-on-solid (SOS) Monte Carlo computer simulation employing a potential-energy scaling technique was used to model the initial stages of thin-film growth. The model monitors variations in the vertical interaction potential that occur due to the arrival or departure of selected adatoms or impurities at all sites in the 400-site array. Boltzmann-ordered statistics are used to simulate fluctuations in vibrational energy at each site in the array, and the resulting site energy is compared with threshold levels of possible atomic events. In addition to adsorption, desorption, and surface migration, adatom incorporation and diffusion of a substrate atom to the surface are also included. The lateral interaction of nearest, second-nearest, and third-nearest neighbors is also considered. A series of computer experiments is conducted to illustrate the behavior of the model.
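A minimal sketch of the event-selection step in an SOS simulation of this kind follows; the 20 × 20 lattice gives a 400-site array, but the sampled energy distribution, thresholds, and bonding term are illustrative assumptions, not the model's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
L = 20                                   # 20 x 20 lattice -> 400 sites
heights = np.zeros((L, L), dtype=int)    # SOS column heights

def neighbors(i, j):
    # nearest neighbors with periodic boundary conditions
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def attempt_event(i, j, kT=0.025, E_desorb=1.8, E_migrate=0.9, bond=0.3):
    """One Monte Carlo trial (energies in eV are illustrative): a
    Boltzmann-sampled vibrational energy is compared against event
    thresholds raised by nearest-neighbor bonding."""
    n_bonds = sum(heights[a, b] >= heights[i, j] for a, b in neighbors(i, j))
    e_site = rng.exponential(kT)         # fluctuating site energy
    if e_site > E_desorb + bond * n_bonds:
        heights[i, j] -= 1               # desorption event
    elif e_site > E_migrate + bond * n_bonds:
        a, b = neighbors(i, j)[rng.integers(4)]
        heights[i, j] -= 1               # hop: atom migrates to a neighbor
        heights[a, b] += 1

for _ in range(10_000):                  # random site visits
    attempt_event(rng.integers(L), rng.integers(L))
```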
Computational Cosmology: From the Early Universe to the Large Scale Structure.
Anninos, Peter
2001-01-01
In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusive, is on those calculations designed to test different models of cosmology against the observed Universe.
Computational Cosmology: from the Early Universe to the Large Scale Structure.
Anninos, Peter
1998-01-01
In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations addressing specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusive, is on those calculations designed to test different models of cosmology against the observed Universe.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.
1988-01-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated into the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program; it was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is believed to be more desirable than a finite-difference approach: while the two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic-energy integral equations, a short-bubble model compatible with these equations is most preferable.
Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial.
Krijkamp, Eline M; Alarid-Escudero, Fernando; Enns, Eva A; Jalal, Hawre J; Hunink, M G Myriam; Pechlivanoglou, Petros
2018-04-01
Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to perform microsimulation models more efficiently than software commonly used for decision modeling, incorporate statistical analyses within decision models, and produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to build microsimulation models in R and illustrate the use of this guide on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization solutions.
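The tutorial's code is in R; as a hedged transcription of the same vectorized idea, here is a minimal Python microsimulation with hypothetical states, transition probabilities, and costs (not the tutorial's actual example or values).

```python
import numpy as np

rng = np.random.default_rng(1)
states = ["Healthy", "Sick", "Dead"]
P = np.array([[0.85, 0.10, 0.05],       # hypothetical annual transition
              [0.00, 0.80, 0.20],       # probabilities; rows sum to 1
              [0.00, 0.00, 1.00]])
costs = np.array([500.0, 3000.0, 0.0])  # annual cost incurred per state

def microsimulate(n_ind=10_000, n_cycles=30):
    """All individuals advance one cycle at a time, mirroring the
    vectorization strategy the tutorial advocates for efficiency."""
    state = np.zeros(n_ind, dtype=int)           # everyone starts Healthy
    total_cost = np.zeros(n_ind)
    for _ in range(n_cycles):
        u = rng.random(n_ind)
        cdf = P[state].cumsum(axis=1)            # per-individual CDF row
        state = (u[:, None] > cdf).sum(axis=1)   # inverse-CDF sampling
        total_cost += costs[state]
    return total_cost.mean()

print(f"mean cost per individual: {microsimulate():.0f}")
```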
Modeling of Flow Transition Using an Intermittency Transport Equation
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.
1999-01-01
A new transport equation for the intermittency factor is proposed to model transitional flows. The intermittent behavior of the transitional flows is incorporated into the computations by modifying the eddy viscosity, μ_t, obtainable from a turbulence model, with the intermittency factor, γ: μ_t* = γ μ_t. In this paper, Menter's SST model (Menter, 1994) is employed to compute μ_t and other turbulent quantities. The proposed intermittency transport equation can be considered a blending of two models: Steelant and Dick (1996) and Cho and Chung (1992). The former was proposed for near-wall flows and was designed to reproduce the streamwise variation of the intermittency factor in the transition zone following the Dhawan and Narasimha correlation (Dhawan and Narasimha, 1958); the latter was proposed for free shear flows and was used to provide a realistic cross-stream variation of the intermittency profile. The new model was used to predict the T3 series experiments assembled by Savill (1993a, 1993b), including flows with different freestream turbulence intensities and two pressure-gradient cases. For all test cases, good agreement between the computed results and the experimental data is observed.
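To make the streamwise behavior concrete, the sketch below combines the Dhawan–Narasimha intermittency correlation with the eddy-viscosity blending above; the 0.412 constant is from the published correlation, while the function interfaces and sample values are illustrative.

```python
import numpy as np

def intermittency_dhawan_narasimha(x, x_t, lam):
    """Streamwise intermittency per Dhawan & Narasimha (1958):
    gamma = 1 - exp(-0.412 * xi**2), with xi = (x - x_t) / lambda,
    where x_t is the transition onset and lambda a transition length."""
    xi = np.maximum(x - x_t, 0.0) / lam
    return 1.0 - np.exp(-0.412 * xi**2)

def effective_eddy_viscosity(mu_t, gamma):
    """Blending used above: mu_t* = gamma * mu_t."""
    return gamma * mu_t

x = np.linspace(0.0, 2.0, 5)
gamma = intermittency_dhawan_narasimha(x, x_t=0.5, lam=0.4)
print(effective_eddy_viscosity(mu_t=1e-3, gamma=gamma))
```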
A Low Mach Number Model for Moist Atmospheric Flows
Duarte, Max; Almgren, Ann S.; Bell, John B.
2015-04-01
A low Mach number model for moist atmospheric flows is introduced that accurately incorporates reversible moist processes in flows whose features of interest occur on advective rather than acoustic time scales. Total water is used as a prognostic variable, so that water vapor and liquid water are diagnostically recovered as needed from an exact Clausius–Clapeyron formula for moist thermodynamics. Low Mach number models can be computationally more efficient than a fully compressible model, but the low Mach number formulation introduces additional mathematical and computational complexity because of the divergence constraint imposed on the velocity field. Here, latent heat release is accounted for in the source term of the constraint by estimating the rate of phase change based on the time variation of saturated water vapor subject to the thermodynamic equilibrium constraint. Finally, the authors numerically assess the validity of the low Mach number approximation for moist atmospheric flows by contrasting the low Mach number solution to reference solutions computed with a fully compressible formulation for a variety of test problems.
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
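A toy version of the probabilistic-acceptance idea can be sketched as follows: each prior draw is accepted with probability given by a normal measurement-error kernel around the observed summary. This illustrates the general mechanism under stated assumptions, not the authors' exact algorithm or their coverage test.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=20):
    """Toy stochastic model (a stand-in for, e.g., an earthworm simulation);
    returns the sample mean of n noisy replicates as the summary statistic."""
    return rng.normal(theta, 1.0, size=n).mean()

def error_calibrated_abc(s_obs, sigma_err, n_draws=50_000):
    """Rejection ABC with probabilistic acceptance: the acceptance
    probability is the (unnormalized) normal error kernel, so accepted
    draws target the posterior under the assumed measurement-error model."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)            # draw from a flat prior
        s_sim = simulator(theta)
        accept_prob = np.exp(-0.5 * (s_obs - s_sim) ** 2 / sigma_err**2)
        if rng.random() < accept_prob:
            accepted.append(theta)
    return np.array(accepted)

posterior = error_calibrated_abc(s_obs=1.37, sigma_err=0.3)
print(posterior.mean(), posterior.std())
```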
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Generality and efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghomi, Pooyan Shirvani; Zinchenko, Yuriy
2014-08-15
Purpose: To compare methods of incorporating Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one tumor were involved in the treatment planning. The OARs and tumor were discretized into a total of 50,221 voxels, and the number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime, followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches to replicate the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speedup.
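As a sketch of what an unconstrained moment-based penalty can look like, the snippet below penalizes mismatch between generalized moments of the achieved voxel doses and those of a prescribed DVH; the moment orders, toy dose-influence matrix, and prescription are hypothetical.

```python
import numpy as np

def moment_penalty(dose, prescribed, orders=(1, 2, 4, 8)):
    """Sum of squared mismatches between generalized moments
    (p-norm-style means) of achieved and prescribed voxel doses."""
    penalty = 0.0
    for p in orders:
        m_plan = np.mean(dose ** p) ** (1.0 / p)
        m_rx = np.mean(prescribed ** p) ** (1.0 / p)
        penalty += (m_plan - m_rx) ** 2
    return penalty

# Dose is linear in the beamlet weights w (dose = D @ w), so the penalty
# is smooth in w and amenable to gradient-based solvers.
rng = np.random.default_rng(3)
n_vox, n_beamlets = 1000, 50
D = rng.random((n_vox, n_beamlets)) * 0.1   # toy dose-influence matrix
w = np.ones(n_beamlets)                     # uniform beamlet weights
rx = rng.uniform(60.0, 70.0, n_vox)         # toy prescribed target doses
print(moment_penalty(D @ w, rx))
```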
ERIC Educational Resources Information Center
Ault, Melinda Jones; Baggerman, Melanie A.; Horn, Channon K.
2017-01-01
This study used a multiple probe (conditions) design across behaviors to investigate the effects of an app for the tablet computer to teach spelling of academic content words to four students with developmental disabilities. The app delivered instruction using a model-lead-test format and students typed on the on-screen keyboard. The study also…
ERIC Educational Resources Information Center
Oner, Diler
2008-01-01
In this paper, I review both mathematics education and CSCL literature and discuss how we can better take advantage of CSCL tools for developing mathematical proof skills. I introduce a model of proof in school mathematics that incorporates both empirical and deductive ways of knowing. I argue that two major forces have given rise to this…
Milestone report TCTP application to the SSME hydrogen system analysis
NASA Technical Reports Server (NTRS)
Richards, J. S.
1975-01-01
The Transient Cryogen Transfer Computer Program (TCTP), developed and verified for LOX systems by analyses of Skylab S-1B stage loading data from John F. Kennedy Space Center launches, was extended to include hydrogen as the working fluid. The feasibility of incorporating TCTP into the space shuttle main engine dynamic model was studied. The program applications are documented.
The Navy/NASA Engine Program (NNEP89): A user's manual
NASA Technical Reports Server (NTRS)
Plencner, Robert M.; Snyder, Christopher A.
1991-01-01
An engine simulation computer code called NNEP89 was written to perform 1-D steady state thermodynamic analysis of turbine engine cycles. By using a very flexible method of input, a set of standard components are connected at execution time to simulate almost any turbine engine configuration that the user could imagine. The code was used to simulate a wide range of engine cycles from turboshafts and turboprops to air turborockets and supersonic cruise variable cycle engines. Off design performance is calculated through the use of component performance maps. A chemical equilibrium model is incorporated to adequately predict chemical dissociation as well as model virtually any fuel. NNEP89 is written in standard FORTRAN77 with clear structured programming and extensive internal documentation. The standard FORTRAN77 programming allows it to be installed onto most mainframe computers and workstations without modification. The NNEP89 code was derived from the Navy/NASA Engine program (NNEP). NNEP89 provides many improvements and enhancements to the original NNEP code and incorporates features which make it easier to use for the novice user. This is a comprehensive user's guide for the NNEP89 code.
Numerical Simulations of Single Flow Element in a Nuclear Thermal Thrust Chamber
NASA Technical Reports Server (NTRS)
Cheng, Gary; Ito, Yasushi; Ross, Doug; Chen, Yen-Sen; Wang, Ten-See
2007-01-01
The objective of this effort is to develop an efficient and accurate computational methodology to predict both the detailed and global thermo-fluid environments of a single flow element in a hypothetical solid-core nuclear thermal thrust chamber assembly. Several numerical and multiphysics thermo-fluid models, such as chemical reactions, turbulence, conjugate heat transfer, porosity, and power generation, were incorporated into an unstructured-grid, pressure-based computational fluid dynamics solver. The numerical simulations of a single flow element provide a detailed thermo-fluid environment for thermal stress estimation and insight into the possible occurrence of mid-section corrosion. In addition, detailed conjugate heat transfer simulations were employed to develop the porosity models for efficient pressure drop and thermal load calculations.
Remote control missile model test
NASA Technical Reports Server (NTRS)
Allen, Jerry M.; Shaw, David S.; Sawyer, Wallace C.
1989-01-01
An extremely large, systematic, axisymmetric body/tail fin data base was gathered through tests of an innovative missile model design which is described herein. These data were originally obtained for incorporation into a missile aerodynamics code based on engineering methods (Program MISSILE3), but can also be used as diagnostic test cases for developing computational methods because of the individual-fin data included in the data base. Detailed analysis of four sample cases from these data are presented to illustrate interesting individual-fin force and moment trends. These samples quantitatively show how bow shock, fin orientation, fin deflection, and body vortices can produce strong, unusual, and computationally challenging effects on individual fin loads. Comparisons between these data and calculations from the SWINT Euler code are also presented.
NASA Technical Reports Server (NTRS)
Leonardo, M.; Tsuchiya, T.; Murthy, S. N. B.
1982-01-01
A model for predicting the performance of a multi-spool axial-flow compressor with a fan during operation with water ingestion was developed incorporating several two-phase fluid flow effects as follows: (1) ingestion of water, (2) droplet interaction with blades and resulting changes in blade characteristics, (3) redistribution of water and water vapor due to centrifugal action, (4) heat and mass transfer processes, and (5) droplet size adjustment due to mass transfer and mechanical stability considerations. A computer program, called the PURDU-WINCOF code, was generated based on the model utilizing a one-dimensional formulation. An illustrative case serves to show the manner in which the code can be utilized and the nature of the results obtained.
Hyde, Damon; Schulz, Ralf; Brooks, Dana; Miller, Eric; Ntziachristos, Vasilis
2009-04-01
Hybrid imaging systems combining x-ray computed tomography (CT) and fluorescence tomography can improve fluorescence imaging performance by incorporating anatomical x-ray CT information into the optical inversion problem. While the use of image priors has been investigated in the past, little is known about the optimal use of forward photon propagation models in hybrid optical systems. In this paper, we explore the impact on reconstruction accuracy of the use of propagation models of varying complexity, specifically in the context of these hybrid imaging systems where significant structural information is known a priori. Our results demonstrate that the use of generically known parameters provides near optimal performance, even when parameter mismatch remains.
EPA announced the release of the final report, Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology. This report describes new approaches that are faster, less resource intensive, and more robust, and that can help ...
The Impact of Internet-Based Instruction on Teacher Education: The "Paradigm Shift."
ERIC Educational Resources Information Center
Lan, Jiang JoAnn
This study incorporated Internet-based instruction into two education technology courses for preservice teachers. One was a required, undergraduate, beginning-level educational computing course. The other was a graduate, advanced-level computing course. The experiment incorporated Internet-based instruction into course delivery in order to create…
The Use of Blackboard in Computer Information Systems Courses.
ERIC Educational Resources Information Center
Figueroa, Sandy; Huie, Carol
This paper focuses on the rationale for incorporating Blackboard, a Web-authoring software package, as the knowledge construction tool in computer information system courses. The authors illustrate previous strategies they incorporated in their classes, and they present their uses of Blackboard. They point out their reactions as teachers, and the…
Designing and Introducing Ethical Dilemmas into Computer-Based Business Simulations
ERIC Educational Resources Information Center
Schumann, Paul L.; Scott, Timothy W.; Anderson, Philip H.
2006-01-01
This article makes two contributions to the teaching of business ethics literature. First, it describes the steps involved in developing effective ethical dilemmas to incorporate into a computer-based business simulation. Second, it illustrates these steps by presenting two ethical dilemmas that an instructor can incorporate into any business…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Cheng-Hsien; Department of Water Resources and Environmental Engineering, Tamkang University, New Taipei City 25137, Taiwan; Low, Ying Min, E-mail: ceelowym@nus.edu.sg
2016-05-15
Sediment transport is fundamentally a two-phase phenomenon involving fluid and sediments; however, many existing numerical models are one-phase approaches, which are unable to capture the complex fluid-particle and inter-particle interactions. In the last decade, two-phase models have gained traction; however, there are still many limitations in these models. For example, several existing two-phase models are confined to one-dimensional problems; in addition, the existing two-dimensional models simulate only the region outside the sand bed. This paper develops a new three-dimensional two-phase model for simulating sediment transport in the sheet flow condition, incorporating recently published rheological characteristics of sediments. The enduring-contact, inertial, and fluid-viscosity effects are considered in determining sediment pressure and stresses, enabling the model to be applicable to a wide range of particle Reynolds numbers. A k−ε turbulence model is adopted to compute the Reynolds stresses. In addition, a novel numerical scheme is proposed, thus avoiding numerical instability caused by high sediment concentration and allowing the sediment dynamics to be computed both within and outside the sand bed. The present model is applied to two classical problems, namely, sheet flow and scour under a pipeline, with favorable results. For sheet flow, the computed velocity is consistent with measured data reported in the literature. For pipeline scour, the computed scour rate beneath the pipeline agrees with previous experimental observations. However, the present model is unable to capture vortex shedding; consequently, the sediment deposition behind the pipeline is overestimated. Sensitivity analyses reveal that model parameters associated with turbulence have a strong influence on the computed results.
AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*
Bruch, Elizabeth; Atwell, Jon
2014-01-01
Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351
Space environment and lunar surface processes
NASA Technical Reports Server (NTRS)
Comstock, G. M.
1979-01-01
The development of a general rock/soil model capable of simulating, in a self-consistent manner, the mechanical and exposure history of an assemblage of solid and loose material from submicron to planetary size scales, applicable to lunar and other space-exposed planetary surfaces, is discussed. The model was incorporated into a computer code called MESS.2 (model for the evolution of space-exposed surfaces). MESS.2, which represents a considerable increase in sophistication and scope over previous soil and rock surface models, is described. The capabilities of previous models for near-surface soil and rock surfaces are compared with those of the rock/soil model, MESS.2.
Integrated Workforce Modeling System
NASA Technical Reports Server (NTRS)
Moynihan, Gary P.
2000-01-01
There are several computer-based systems, currently in various phases of development at KSC, which encompass some component, aspect, or function of workforce modeling. These systems may offer redundant capabilities and/or incompatible interfaces. A systems approach to workforce modeling is necessary in order to identify and better address user requirements. This research has consisted of two primary tasks. Task 1 provided an assessment of existing and proposed KSC workforce modeling systems for their functionality and applicability to the workforce planning function. Task 2 resulted in the development of a proof-of-concept design for a systems approach to workforce modeling. The model incorporates critical aspects of workforce planning, including hires, attrition, and employee development.
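The hires/attrition component of such a workforce model reduces to a simple stock-and-flow recursion; the sketch below uses illustrative numbers, not KSC planning data.

```python
def project_workforce(headcount, hires_per_year, attrition_rate, years=5):
    """Minimal stock-and-flow sketch: each year, add hires and remove a
    fixed fraction of the current headcount to attrition."""
    trajectory = [float(headcount)]
    for _ in range(years):
        headcount = headcount + hires_per_year - attrition_rate * headcount
        trajectory.append(round(headcount, 1))
    return trajectory

# e.g., 1200 employees, 80 hires/year, 6% annual attrition
print(project_workforce(1200, hires_per_year=80, attrition_rate=0.06))
```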
Finite element model for brittle fracture and fragmentation
Li, Wei; Delaney, Tristan J.; Jiao, Xiangmin; ...
2016-06-01
A new computational model for brittle fracture and fragmentation has been developed based on finite element analysis of non-linear elasticity equations. The proposed model propagates cracks by splitting mesh nodes along the most over-strained edges, based on the principal direction of the strain tensor. To prevent elements from overlapping and folding under large deformations, robust geometrical constraints using the method of Lagrange multipliers have been incorporated. Finally, the model has been applied to 2D simulations of the formation and propagation of cracks in brittle materials, and to the fracture and fragmentation of stretched and compressed materials.
Micromechanics based simulation of ductile fracture in structural steels
NASA Astrophysics Data System (ADS)
Yellavajjala, Ravi Kiran
The broader aim of this research is to develop a fundamental understanding of the ductile fracture process in structural steels, propose robust computational models to quantify the associated damage, and provide numerical tools to simplify the implementation of these computational models into a general finite element framework. Mechanical testing on different geometries of test specimens made of ASTM A992 steel is conducted to experimentally characterize ductile fracture at different stress states under monotonic and ultra-low cycle fatigue (ULCF) loading. Scanning electron microscopy studies of the fractured surfaces are conducted to decipher the underlying microscopic damage mechanisms that cause fracture in ASTM A992 steels. Detailed micromechanical analyses for monotonic and cyclic loading are conducted to understand the influence of stress triaxiality and Lode parameter on the void growth phase of ductile fracture. Based on the monotonic analyses, an uncoupled micromechanical void growth model is proposed to predict ductile fracture. This model is then incorporated into a finite element program as a weakly coupled model to simulate the loss of load-carrying capacity in the post-microvoid-coalescence regime for high triaxialities. Based on the cyclic analyses, an uncoupled micromechanics-based cyclic void growth model is developed to predict the ULCF life of ASTM A992 steels subjected to high stress triaxialities. Furthermore, a computational fracture locus for ASTM A992 steels is developed and incorporated into the finite element program as an uncoupled ductile fracture model; this model can be used to predict ductile fracture initiation under monotonic loading over a wide range of triaxialities and Lode parameters. Finally, a coupled microvoid elongation and dilation based continuum damage model is proposed, implemented, calibrated, and validated. This model is capable of simulating the local softening caused by the various phases of the ductile fracture process under monotonic loading for a wide range of stress states. Novel differentiation procedures based on complex analysis, along with existing finite difference methods and automatic differentiation, are extended using perturbation techniques to evaluate tensor derivatives. These tensor differentiation techniques are then used to automate nonlinear constitutive models into an implicit finite element framework. Finally, the efficiency of these automation procedures is demonstrated using benchmark problems.
From QSAR to QSIIR: Searching for Enhanced Computational Toxicology Models
Zhu, Hao
2017-01-01
Quantitative Structure Activity Relationship (QSAR) is the most frequently used modeling approach for exploring the dependency of biological, toxicological, or other activities/properties of chemicals on their molecular features. In the past two decades, QSAR modeling has been used extensively in the drug discovery process. However, the predictive models resulting from QSAR studies have limited use for chemical risk assessment, especially for animal and human toxicity evaluations, due to low predictivity for new compounds. To develop enhanced toxicity models with independently validated external prediction power, novel modeling protocols have been pursued by computational toxicologists, building on the rapidly increasing toxicity testing data of recent years. This chapter reviews the recent effort in our laboratory to incorporate biological testing results as descriptors in the toxicity modeling process. This effort extended the concept of QSAR to Quantitative Structure In vitro-In vivo Relationship (QSIIR). The QSIIR study examples provided in this chapter indicate that QSIIR models based on hybrid (biological and chemical) descriptors are indeed superior to conventional QSAR models based only on chemical descriptors for several animal toxicity endpoints. We believe that the applications introduced in this review will be of interest and value to researchers working in the field of computational drug discovery and environmental chemical risk assessment. PMID:23086837
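The hybrid-descriptor idea is straightforward to prototype: concatenate in vitro assay responses onto the chemical descriptor matrix before fitting any classifier. The sketch below uses synthetic arrays and scikit-learn purely as an illustration of the data layout, not the chapter's actual modeling workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_compounds = 200
X_chem = rng.random((n_compounds, 30))       # chemical descriptors
X_invitro = rng.random((n_compounds, 5))     # in vitro assay responses
y_invivo = rng.integers(0, 2, n_compounds)   # in vivo toxicity labels

X_hybrid = np.hstack([X_chem, X_invitro])    # QSIIR-style hybrid matrix
model = RandomForestClassifier(n_estimators=200, random_state=0)
# external predictivity should be judged by validation, e.g. cross-validation
print(cross_val_score(model, X_hybrid, y_invivo, cv=5).mean())
```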
The effect of topography on pyroclastic flow mobility
NASA Astrophysics Data System (ADS)
Ogburn, S. E.; Calder, E. S.
2010-12-01
Pyroclastic flows are among the most destructive volcanic phenomena. Hazard mitigation depends upon accurate forecasting of possible flow paths, often using computational models. Two main metrics have been proposed to describe the mobility of pyroclastic flows. The Heim coefficient, height dropped over runout (H/L), exhibits an inverse relationship with flow volume; this coefficient corresponds to the coefficient of friction and informs computational models that use Coulomb friction laws. Another mobility measure states that, with constant shear stress, planimetric area is proportional to the flow volume raised to the 2/3 power (A ∝ V^(2/3)). This relationship is incorporated in models using constant shear stress instead of constant friction, and is used directly by some empirical models. Pyroclastic flows from Soufriere Hills Volcano, Montserrat; Unzen, Japan; Colima, Mexico; and Augustine, Alaska are well described by these metrics. However, flows in specific valleys exhibit differences in mobility. This study investigates the effect of topography on pyroclastic flow mobility, as measured by the above-mentioned mobility metrics. Valley width, depth, and cross-sectional area all influence flow mobility. Investigating the appropriateness of these mobility measures, as well as the computational models they inform, indicates certain circumstances under which each model performs optimally. Knowing which conditions call for which models allows for better model selection or model weighting and, therefore, more realistic hazard predictions.
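Both mobility metrics are simple to compute once H, L, and V are known; in the sketch below the proportionality constant c is a hypothetical, flow-type-dependent calibration value.

```python
def heim_coefficient(height_drop, runout):
    """H/L mobility metric: lower values indicate more mobile flows."""
    return height_drop / runout

def planimetric_area(volume, c):
    """Empirical inundation relation A = c * V**(2/3); c must be
    calibrated per flow type (the value used below is illustrative)."""
    return c * volume ** (2.0 / 3.0)

print(heim_coefficient(1500.0, 6000.0))    # H/L = 0.25 (dimensionless)
print(planimetric_area(1e6, c=35.0))       # area in m^2 for V in m^3
```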
NASA Technical Reports Server (NTRS)
Wooden, Diane H.; Lindsay, Sean S.; Harker, David; Woodward, Charles; Kelley, Michael S.; Kolokolova, Ludmilla
2015-01-01
Porous aggregate grains are commonly found in cometary dust samples and are needed to model cometary IR spectral energy distributions (SEDs). Models for thermal emission from comets require two forms of silicates: amorphous and crystalline. The dominant crystal resonances observed in comet SEDs are from forsterite (Mg2SiO4). The crystalline mass fractions span a large range, 0.0 ≤ f_crystal ≤ 0.74. Radial transport models that predict the enrichment of the outer disk (>25 AU at 1E6 yr) by inner disk materials (crystals) are challenged to yield the high end of the range of cometary crystal mass fractions. However, in current thermal models, forsterite crystals are not incorporated into larger aggregate grains but instead are considered only as discrete crystals. A complicating factor is that forsterite crystals with rectangular shapes better fit the observed spectral resonances in wavelength (11.0-11.15, 16, 19, 23.5, 27, and 33 microns), feature asymmetry, and relative height (Lindsay et al. 2013) than spherically or elliptically shaped crystals. We present DDA (DDSCAT) computations of IR absorptivities (Q_abs) of 3-micron-radius porous aggregates with 0.13 ≤ f_crystal ≤ 0.35 and with polyhedral-shaped forsterite crystals. We can produce crystal resonances with an appearance similar to the observed resonances of comet Hale-Bopp. Also, a lower mass fraction of crystals in aggregates can produce the same spectral contrast as a higher mass fraction of discrete crystals; the 11 micron and 23 micron crystalline resonances appear amplified when crystals are incorporated into aggregates otherwise composed of spherically shaped amorphous Fe-Mg olivines and pyroxenes. We show that the optical properties of a porous aggregate are not a linear combination of those of its monomers, so aggregates need to be computed. We discuss the consequence of lowering comet crystal mass fractions by modeling IR SEDs with aggregates containing crystals, and the implications for radial transport models of our protoplanetary disk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papadimitroulas, P; Kagadis, GC; Loudos, G
Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the "IT'IS Foundation". The series of phantoms used in our work includes 6 models in the range of 5-14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into the GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE "Materials Database". Several radiopharmaceuticals commonly used in SPECT and PET applications are tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas, and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest doses absorbed in the kidneys and pancreas (9.29×10^-10 and 0.15×10^-10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and evaluating the results against clinically estimated doses.
Acetylcholine-modulated plasticity in reward-driven navigation: a computational study.
Zannone, Sara; Brzosko, Zuzanna; Paulsen, Ole; Clopath, Claudia
2018-06-21
Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules.
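A heavily simplified sketch of the sequential neuromodulation idea follows: acetylcholine flips potentiation into depression, and dopamine arriving later converts the stored (eligibility-trace) depression back into potentiation. All constants and the update logic are illustrative stand-ins, not the model's actual equations.

```python
import numpy as np

def stdp_kernel(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window (constants illustrative): pre-before-post
    (dt > 0) gives potentiation, post-before-pre gives depression."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def neuromodulated_update(dt, ach=False, dopamine_later=False):
    """Sequentially neuromodulated STDP, heavily simplified: ACh biases
    plasticity toward depression; dopamine arriving afterwards converts
    the depression stored in an eligibility trace into potentiation."""
    dw = stdp_kernel(dt)
    if ach and dw > 0:
        dw = -dw                        # ACh flips potentiation to depression
    eligibility = dw                    # trace held at the synapse
    if dopamine_later and eligibility < 0:
        eligibility = -eligibility      # DA retroactively converts to LTP
    return eligibility

print(neuromodulated_update(5.0, ach=True))                       # depression
print(neuromodulated_update(5.0, ach=True, dopamine_later=True))  # potentiation
```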
Improvements in approaches to forecasting and evaluation techniques
NASA Astrophysics Data System (ADS)
Weatherhead, Elizabeth
2014-05-01
The US is embarking on an experiment to make significant and sustained improvements in weather forecasting. The effort stems from a series of community conversations that recognized the rapid advancements in observations, modeling and computing techniques in the academic, governmental and private sectors. The new directions and initial efforts will be summarized, including information on possibilities for international collaboration. Most new projects are scheduled to start in the last half of 2014. Several advancements include ensemble forecasting with global models, and new sharing of computing resources. Newly developed techniques for evaluating weather forecast models will be presented in detail. The approaches use statistical techniques that incorporate pair-wise comparisons of forecasts with observations and account for daily auto-correlation to assess appropriate uncertainty in forecast changes. Some of the new projects allow for international collaboration, particularly on the research components of the projects.
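One such evaluation technique can be sketched as a paired-difference test with an AR(1) effective-sample-size correction for daily autocorrelation; the code below is a generic version of that idea, not necessarily the exact published procedure.

```python
import numpy as np

def paired_skill_difference(err_a, err_b):
    """Pair-wise forecast comparison with an AR(1) effective-sample-size
    correction for daily autocorrelation (a generic sketch of the idea).
    err_a, err_b: daily errors of two competing forecast systems."""
    d = np.asarray(err_a) - np.asarray(err_b)   # daily paired differences
    n = len(d)
    r1 = np.corrcoef(d[:-1], d[1:])[0, 1]       # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)         # effective sample size
    se = d.std(ddof=1) / np.sqrt(n_eff)
    return d.mean(), se                         # mean difference and its SE

rng = np.random.default_rng(11)
e_old = np.abs(rng.normal(1.00, 0.3, 365))      # synthetic daily errors
e_new = np.abs(rng.normal(0.95, 0.3, 365))
print(paired_skill_difference(e_old, e_new))
```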
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model, with a graphical user interface, into an outpatient medical record system known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture, providing both a distributed database and distributed processing, which improves performance. PMID:1807732