Sample records for "model assumptions computational"

  1. Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy

    PubMed Central

    Schroll, Henning; Hamker, Fred H.

    2013-01-01

    Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become difficult to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to assess to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational models, but also considering non-computational ones, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002

  2. The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.

    PubMed

    Ene, Florentina; Delassus, Patrick; Morris, Liam

    2014-08-01

    The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysm models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls, which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for capturing the fluid flow phenomena. © IMechE 2014.

  3. Comparison of 2D Finite Element Modeling Assumptions with Results From 3D Analysis for Composite Skin-Stiffener Debonding

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Paris, Isabelle L.; O'Brien, T. Kevin; Minguet, Pierre J.

    2004-01-01

    The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
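
    The plane-stress and plane-strain assumptions compared above differ only in the constitutive relation used in the 2D analysis. As an illustrative sketch of why they bound each other (isotropic material with hypothetical aluminium-like constants, not the composite laminate data of the paper), the two in-plane stiffness matrices and the different stresses they produce for the same strain state can be written as:

```python
import numpy as np

def plane_stress_C(E, nu):
    # In-plane stiffness for plane stress (thin body, sigma_zz = 0)
    f = E / (1.0 - nu**2)
    return f * np.array([[1.0, nu, 0.0],
                         [nu, 1.0, 0.0],
                         [0.0, 0.0, (1.0 - nu) / 2.0]])

def plane_strain_C(E, nu):
    # In-plane stiffness for plane strain (thick body, eps_zz = 0)
    f = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return f * np.array([[1.0 - nu, nu, 0.0],
                         [nu, 1.0 - nu, 0.0],
                         [0.0, 0.0, (1.0 - 2.0 * nu) / 2.0]])

E, nu = 70e9, 0.33  # illustrative aluminium-like constants (Pa, dimensionless)
# The same in-plane strain state [eps_xx, eps_yy, gamma_xy] yields different
# stresses under the two assumptions, which is why the study treats their
# results as upper and lower bounds.
eps = np.array([1e-3, 0.0, 0.0])
print(plane_stress_C(E, nu) @ eps)
print(plane_strain_C(E, nu) @ eps)
```

    A generalized plane strain formulation, as the paper notes, falls between these two extremes.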

  4. Influence of 2D Finite Element Modeling Assumptions on Debonding Prediction for Composite Skin-stiffener Specimens Subjected to Tension and Bending

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.

  5. Evaluation of 2D shallow-water model for spillway flow with a complex geometry

    USDA-ARS's Scientific Manuscript database

    Although the two-dimensional (2D) shallow water model is formulated on several assumptions, such as a hydrostatic pressure distribution and negligible vertical velocity, as a simple alternative to the complex 3D model it has been used to compute water flows in which these assumptions may be ...

  6. Design Considerations for Large Computer Communication Networks,

    DTIC Science & Technology

    1976-04-01

    particular, we will discuss the last three assumptions in order to motivate some of the models to be considered in this chapter. Independence Assumption...channels. Part (a), again motivated by an earlier remark on deterministic routing, will become more accurate when we include in the model, based on fixed...hierarchical routing, then this assumption appears to be quite acceptable. Part (b) is motivated by the quite symmetrical structure of the networks considered

  7. Variability of hemodynamic parameters using the common viscosity assumption in a computational fluid dynamics analysis of intracranial aneurysms.

    PubMed

    Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi

    2017-01-01

    In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, it is a non-Newtonian fluid, and its viscosity profile differs among individuals. Therefore, the common viscosity assumption may not be valid for all patients. This study aims to test the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. Three simulations were performed for three different-sized aneurysms, two using measured value-based non-Newtonian models and one using a Newtonian model. The parameters proposed to predict an aneurysmal rupture obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. Between the two non-Newtonian models, the ratios of difference from the Newtonian-model NWSS themselves differed by 17.3%. Irrespective of the aneurysmal size, computational fluid dynamics simulations with either the common Newtonian or non-Newtonian viscosity assumption could lead to values different from those of the patient-specific viscosity model for hemodynamic parameters such as NWSS.
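
    To illustrate why the common viscosity assumption matters, the sketch below contrasts a constant Newtonian viscosity with a Carreau-type shear-thinning model. The coefficients are commonly cited literature values for blood, not the volunteer-specific measurements used in this study:

```python
def carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau model for blood viscosity (Pa.s) as a function of shear rate
    (1/s). Parameter values are commonly cited literature coefficients,
    given here only for illustration."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

mu_newtonian = 0.0035  # a typical constant-viscosity assumption (Pa.s)

# At low shear rates (e.g. in slow recirculating aneurysmal flow) the
# shear-thinning viscosity is several times the Newtonian constant, while at
# high shear rates the two nearly agree.
for gd in [1.0, 10.0, 100.0, 1000.0]:
    print(gd, carreau_viscosity(gd), mu_newtonian)
```

    This is why the abstract reports the largest discrepancy in the smallest aneurysm, where low-shear regions dominate.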

  8. Advanced space power requirements and techniques. Task 1: Mission projections and requirements. Volume 3: Appendices. [cost estimates and computer programs

    NASA Technical Reports Server (NTRS)

    Wolfe, M. G.

    1978-01-01

    Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.

  9. Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane

    This paper describes the implementation of the room heat transfer model in the free open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.

  10. Computation in generalised probabilistic theories

    NASA Astrophysics Data System (ADS)

    Lee, Ciarán M.; Barrett, Jonathan

    2015-08-01

    From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.

  11. A Test of the Validity of Inviscid Wall-Modeled LES

    NASA Astrophysics Data System (ADS)

    Redman, Andrew; Craft, Kyle; Aikens, Kurt

    2015-11-01

    Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  12. Lagrangian methods for blood damage estimation in cardiovascular devices--How numerical implementation affects the results.

    PubMed

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
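
    A minimal sketch of one of the post-processing options the abstract mentions: linear stress accumulation along a single pathline, with an optional repeated-passage assumption. The scalar stress samples and passage count here are hypothetical, not values from the paper:

```python
def stress_accumulation(stress_history, dt):
    """Linear stress accumulation SA = sum(sigma_i * dt) along one pathline.
    This is one simple post-processing choice; the paper compares several
    such options (single vs. repeated passages, time averaging)."""
    return sum(s * dt for s in stress_history)

# Hypothetical scalar stress samples (Pa) at successive points on one
# particle trajectory through the device, sampled every dt seconds:
sigma = [5.0, 12.0, 30.0, 8.0]
dt = 0.01  # s

single_pass = stress_accumulation(sigma, dt)

# Repeated-passage assumption: the same particle re-enters the device
# n_passages times, so the exposure accumulates linearly.
n_passages = 3
repeated = n_passages * single_pass
print(single_pass, repeated)
```

    Seeding patterns and trajectory simplifications change which `sigma` histories are collected in the first place, which is why those implementation choices propagate directly into the damage prediction.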

  13. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  14. Industrial Demand Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.

  15. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  16. Initial Comparison of Single Cylinder Stirling Engine Computer Model Predictions with Test Results

    NASA Technical Reports Server (NTRS)

    Tew, R. C., Jr.; Thieme, L. G.; Miao, D.

    1979-01-01

    A Stirling engine digital computer model developed at NASA Lewis Research Center was configured to predict the performance of the GPU-3 single-cylinder rhombic drive engine. Revisions to the basic equations and assumptions are discussed. Model predictions are compared with the early results of the Lewis Research Center GPU-3 tests.

  17. Lagrangian methods for blood damage estimation in cardiovascular devices - How numerical implementation affects the results

    PubMed Central

    Marom, Gil; Bluestein, Danny

    2016-01-01

    This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed. PMID:26679833

  18. Analysis of the impact of error detection on computer performance

    NASA Technical Reports Server (NTRS)

    Shin, K. C.; Lee, Y. H.

    1983-01-01

    Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
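
    The notion of error latency can be illustrated with a toy Monte Carlo model. The uniform error-occurrence time and exponential detection latency below are illustrative assumptions for the sketch, not the distributions used in the paper:

```python
import random

def prob_unreliable(task_time, mean_latency, trials=100_000, seed=0):
    """Monte Carlo sketch: an error occurs at a uniform random time during a
    task of length task_time; the detection latency is exponentially
    distributed with the given mean. The result is counted as unreliable if
    the error is still undetected when the task completes."""
    rng = random.Random(seed)
    undetected = 0
    for _ in range(trials):
        t_err = rng.uniform(0.0, task_time)
        latency = rng.expovariate(1.0 / mean_latency)
        if t_err + latency > task_time:
            undetected += 1
    return undetected / trials

# With a mean latency of 10% of the task length, roughly 10% of runs finish
# before the error is caught (closed form: (m/T)*(1 - exp(-T/m))).
print(prob_unreliable(task_time=1.0, mean_latency=0.1))
```

    Shrinking the mean latency, i.e. improving the detection mechanism, drives this probability toward zero, which is the sense in which latency measures detector effectiveness.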

  19. Residential Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.

  20. Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions revisited and found inadequate

    NASA Astrophysics Data System (ADS)

    Coon, Max; Kwok, Ron; Levy, Gad; Pruis, Matthew; Schreyer, Howard; Sulsky, Deborah

    2007-11-01

    This paper revisits the Arctic Ice Dynamics Joint Experiment (AIDJEX) assumptions about pack ice behavior with an eye to modeling sea ice dynamics. The AIDJEX assumptions were that (1) enough leads were present in a 100 km by 100 km region to make the ice isotropic on that scale; (2) the ice had no tensile strength; and (3) the ice behavior could be approximated by an isotropic yield surface. These assumptions were made during the development of the AIDJEX model in the 1970s, and are now found inadequate. The assumptions were made in part because of insufficient large-scale (10 km) deformation and stress data, and in part because of computer capability limitations. Upon reviewing deformation and stress data, it is clear that a model including deformation on discontinuities and an anisotropic failure surface with tension would better describe the behavior of pack ice. A model based on these assumptions is needed to represent the deformation and stress in pack ice on scales from 10 to 100 km, and would need to explicitly resolve discontinuities. Such a model would require a different class of metrics to validate discontinuities against observations.

  1. Mechanics of airflow in the human nasal airways.

    PubMed

    Doorly, D J; Taylor, D J; Schroter, R C

    2008-11-30

    The mechanics of airflow in the human nasal airways is reviewed, drawing on the findings of experimental and computational model studies. Modelling inevitably requires simplifications and assumptions, particularly given the complexity of the nasal airways. The processes entailed in modelling the nasal airways (from defining the model, to its production and, finally, validating the results) are critically examined, both for physical models and for computational simulations. Uncertainty still surrounds the appropriateness of the various assumptions made in modelling, particularly with regard to the nature of flow. New results are presented in which high-speed particle image velocimetry (PIV) and direct numerical simulation are applied to investigate the development of flow instability in the nasal cavity. These illustrate some of the improved capabilities afforded by technological developments for future model studies. The need for further improvements in characterising airway geometry and flow together with promising new methods are briefly discussed.

  2. ASP-G: an ASP-based method for finding attractors in genetic regulatory networks

    PubMed Central

    Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine

    2014-01-01

    Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulation of network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than more dedicated systems, but it still achieves good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
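
    To illustrate what computing attractors under a fixed update scheme involves, here is a minimal imperative sketch (not ASP-G itself, which is declarative and ASP-based) that enumerates all attractors of a small Boolean network under the synchronous update scheme; the two-gene toggle rules are a hypothetical example:

```python
from itertools import product

def find_attractors(update_fns):
    """Exhaustively find the attractors of a Boolean network under the
    synchronous update scheme, one of the assumption choices (update scheme,
    interaction rules) that the abstract says determines simulation outcomes.
    Each update function maps the full state tuple to one gene's next value."""
    n = len(update_fns)

    def step(state):
        return tuple(f(state) for f in update_fns)

    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = {}  # state -> index of first visit along this trajectory
        while state not in seen:
            seen[state] = len(seen)
            state = step(state)
        # States visited from the first repeat onward form the attractor cycle.
        cycle_start = seen[state]
        cycle = tuple(sorted(s for s, i in seen.items() if i >= cycle_start))
        attractors.add(cycle)
    return attractors

# Hypothetical two-gene toggle switch: each gene represses the other.
rules = [lambda s: 1 - s[1], lambda s: 1 - s[0]]
print(find_attractors(rules))
```

    Swapping in an asynchronous update scheme would change the attractor set of the same rules, which is exactly the sensitivity to assumptions that motivates a modular framework like ASP-G.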

  3. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  4. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  5. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  6. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  7. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  8. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  9. Adapting Instruction to Individual Learner Differences: A Research Paradigm for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Mills, Steven C.; Ragan, Tillman J.

    This paper examines a research paradigm that is particularly suited to experimentation related to computer-based instruction and integrated learning systems. The main assumption of the model is that one of the most powerful capabilities of computer-based instruction, and specifically of integrated learning systems, is the capacity to adapt…

  10. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The outputs of these models are sensitive to the data used in them as well as the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.

  11. Automated analysis in generic groups

    NASA Astrophysics Data System (ADS)

    Fagerholm, Edvard

    This thesis studies automated methods for analyzing hardness assumptions in generic group models, following ideas of symbolic cryptography. We define a broad class of generic and symbolic group models for different settings, symmetric or asymmetric (leveled) k-linear groups, and prove "computational soundness" theorems for the symbolic models. Based on this result, we formulate a master theorem that relates the hardness of an assumption to solving problems in polynomial algebra. We systematically analyze these problems, identifying different classes of assumptions, and obtain decidability and undecidability results. Then, we develop automated procedures for verifying the conditions of our master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome is an automated tool, the Generic Group Analyzer, which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or shows an algebraic attack against the assumption. Structure-preserving signatures are signature schemes defined over bilinear groups in which messages, public keys and signatures are group elements, and the verification algorithm consists of evaluating "pairing-product equations". Recent work on structure-preserving signatures studies optimality of these schemes in terms of the number of group elements needed in the verification key and the signature, and the number of pairing-product equations in the verification algorithm. While the size of keys and signatures is crucial for many applications, another aspect of performance is the time it takes to verify a signature. The most expensive operation during verification is the computation of pairings. However, the concrete number of pairings is not captured by the number of pairing-product equations considered in earlier work. We consider the question of what is the minimal number of pairing computations needed to verify structure-preserving signatures. We build an automated tool to search for structure-preserving signatures matching a template. Through exhaustive search we conjecture lower bounds for the number of pairings required in the Type II setting and prove our conjecture to be true. Finally, our tool exhibits examples of structure-preserving signatures matching the lower bounds, which proves tightness of our bounds, as well as improves on previously known structure-preserving signature schemes.

  12. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  13. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  14. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  15. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  16. Free Wake Techniques for Rotor Aerodynamic Analysis. Volume 2: Vortex Sheet Models

    NASA Technical Reports Server (NTRS)

    Tanuwidjaja, A.

    1982-01-01

    Results of computations are presented that use vortex sheets to model the wake and test the sensitivity of the solutions to various assumptions made in developing the models. The complete code listings are included.

  17. Modeling Endovascular Coils as Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.

    2016-12-01

    Minimally invasive surgeries are the state-of-the-art treatments for many pathologies, and treating brain aneurysms is no exception: invasive neurovascular clipping is no longer the only option, and endovascular coiling has become the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources to perform. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexity of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts at treating the endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous assumption case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors.
Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
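The sample-volume idea can be sketched as follows (all geometry and parameters below are hypothetical, and the Kozeny-Carman mapping is an illustrative assumption, not the paper's graphics-based calibration): bin coil sample points into a lattice of cells, convert each cell's coil volume fraction to a porosity, and map porosity to a local permeability for the Darcy drag term.

```python
import numpy as np

def local_porosity(coil_points, coil_volume_per_point, bounds, n):
    """Bin coil sample points into an n x n x n lattice of sample volumes
    and return the porosity (void fraction) of each cell."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    cell_vol = np.prod((hi - lo) / n)
    counts = np.zeros((n, n, n))
    idx = np.floor((coil_points - lo) / (hi - lo) * n).astype(int)
    idx = np.clip(idx, 0, n - 1)
    for i, j, k in idx:
        counts[i, j, k] += 1
    solid_fraction = counts * coil_volume_per_point / cell_vol
    return np.clip(1.0 - solid_fraction, 0.0, 1.0)

def kozeny_carman(porosity, wire_diameter, kozeny_const=180.0):
    """Illustrative Kozeny-Carman permeability (an assumed correlation,
    with an assumed constant) used to build a heterogeneous Darcy
    resistance field from the per-cell porosity."""
    eps = np.clip(porosity, 1e-3, 1.0 - 1e-6)
    return wire_diameter**2 * eps**3 / (kozeny_const * (1.0 - eps)**2)

# Hypothetical coil concentrated near the domain center: only the cells
# containing coil get a reduced porosity; empty cells stay at 1.0.
pts = np.full((10, 3), 0.5)
phi = local_porosity(pts, coil_volume_per_point=1e-3,
                     bounds=((0, 0, 0), (1, 1, 1)), n=2)
```

A CFD solver would then apply a momentum sink proportional to viscosity/permeability in each cell, so the drag varies through the coil mass instead of being uniform.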

  18. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  19. Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.

    PubMed

    Henry, R; Tiselj, I; Snoj, L

    2015-03-01

    A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP model. The same modelling assumptions were used in order to check the differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations. Copyright © 2014 Elsevier Ltd. All rights reserved.
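A spread of "up to 100 pcm" is a statement about the fourth decimal place of keff. A one-line sketch makes the unit concrete (the k values below are hypothetical, and the plain-k convention is an assumption; reactivity differences (1/k_b - 1/k_a)*1e5 are another common convention):

```python
def diff_pcm(k_a, k_b):
    """Difference between two multiplication factors expressed in pcm
    (per cent mille, 1 pcm = 1e-5 in k)."""
    return (k_a - k_b) * 1e5

# Two codes agreeing to the fourth decimal place differ by ~100 pcm:
print(diff_pcm(1.0010, 1.0000))  # approximately 100 pcm
```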

  20. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to enable solving for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range.
This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
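The blending step can be pictured as interpolation between the outputs of models generated at neighboring setpoints. The sketch below uses simple linear interpolation along one setpoint axis (state-of-charge); the actual blending schemes in the dissertation are richer, and the setpoint and output values here are hypothetical.

```python
import bisect

def blend(setpoints, outputs, x):
    """Blend pre-computed reduced-order-model outputs generated at sorted
    discrete setpoints by linear interpolation at operating point x.
    Outside the setpoint range, the nearest model's output is used."""
    if x <= setpoints[0]:
        return outputs[0]
    if x >= setpoints[-1]:
        return outputs[-1]
    i = bisect.bisect_right(setpoints, x) - 1
    w = (x - setpoints[i]) / (setpoints[i + 1] - setpoints[i])
    return (1 - w) * outputs[i] + w * outputs[i + 1]

# Hypothetical voltage predictions from models generated at three SOC setpoints:
soc = [0.2, 0.5, 0.8]
volts = [3.4, 3.6, 3.9]
print(blend(soc, volts, 0.35))  # halfway between the first two models
```

A full BMS implementation would blend over a grid of state-of-charge, temperature, and C-rate setpoints (e.g., trilinear weights) rather than a single axis.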

  1. Manual of phosphoric acid fuel cell stack three-dimensional model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    A detailed distributed mathematical model of a phosphoric acid fuel cell stack has been developed, together with a FORTRAN computer program, for analyzing the temperature distribution in the stack and the associated current density distribution on the cell plates. Energy, mass, and electrochemical analyses of the stack were combined to develop the model. Several reasonable assumptions were made to solve this mathematical model by means of the finite-difference numerical method.
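The finite-difference approach can be illustrated with a generic Jacobi relaxation of the steady two-dimensional heat equation (a sketch of the numerical method only, not the stack model or its FORTRAN program; the boundary temperatures are hypothetical):

```python
import numpy as np

def steady_temperature(nx, ny, t_edge, n_iter=500):
    """Jacobi finite-difference solution of the steady 2-D heat equation
    on a plate with fixed edge temperatures: each interior node is
    repeatedly replaced by the average of its four neighbors."""
    T = np.full((ny, nx), np.mean(list(t_edge.values())))
    T[0, :], T[-1, :] = t_edge["top"], t_edge["bottom"]
    T[:, 0], T[:, -1] = t_edge["left"], t_edge["right"]
    for _ in range(n_iter):
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                                + T[1:-1, :-2] + T[1:-1, 2:])
    return T

# One hot edge; the interior settles between the edge temperatures.
T = steady_temperature(20, 20, {"top": 200.0, "bottom": 100.0,
                                "left": 100.0, "right": 100.0})
```

The stack model couples such a temperature field to mass balances and electrochemical source terms; this sketch shows only the conduction/relaxation kernel.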

  2. Notes from 1999 on computational algorithm of the Local Wave-Vector (LWV) model for the dynamical evolution of the second-rank velocity correlation tensor starting from the mean-flow-coupled Navier-Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemach, Charles; Kurien, Susan

    These notes present an account of the Local Wave Vector (LWV) model of a turbulent flow defined throughout physical space. The previously-developed Local Wave Number (LWN) model is taken as a point of departure. Some general properties of turbulent fields and appropriate notation are given first. The LWV model is presently restricted to incompressible flows and the incompressibility assumption is introduced at an early point in the discussion. The assumption that the turbulence is homogeneous is also introduced early on. This assumption can be relaxed by generalizing the space diffusion terms of LWN, but the present discussion is focused on a modeling of homogeneous turbulence.

  3. Multiscale Fiber Kinking: Computational Micromechanics and a Mesoscale Continuum Damage Mechanics Models

    NASA Technical Reports Server (NTRS)

    Herraez, Miguel; Bergan, Andrew C.; Gonzalez, Carlos; Lopes, Claudio S.

    2017-01-01

    In this work, the fiber kinking phenomenon, which is known as the failure mechanism that takes place when a fiber reinforced polymer is loaded under longitudinal compression, is studied. A computational micromechanics model is employed to interrogate the assumptions of a recently developed mesoscale continuum damage mechanics (CDM) model for fiber kinking based on the deformation gradient decomposition (DGD) and the LaRC04 failure criteria.

  4. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of real cardiovascular flow. Because of the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. However, existing CFD validation approaches do not quantify the error in simulation results due to the CFD solver's modeling assumptions; instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
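One common way to picture "parsing out" independent error sources is a quadrature decomposition, in which experimental and numerical uncertainties are removed from the total CFD-vs-PIV discrepancy. This is an illustrative assumption, not necessarily the paper's exact bookkeeping, and the percentages below are hypothetical.

```python
import math

def model_error(total_discrepancy, experiment_error, numerical_error):
    """Estimate the model error remaining after removing two independent
    error sources, assuming the sources add in quadrature."""
    residual = (total_discrepancy**2
                - experiment_error**2 - numerical_error**2)
    return math.sqrt(max(residual, 0.0))  # clamp tiny negative residuals

# e.g. 8% total discrepancy, 4% PIV uncertainty, 3% discretization error
# leaves roughly a 6% model error attributable to modeling assumptions:
print(model_error(0.08, 0.04, 0.03))
```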

  5. Finite Correlation Length Implies Efficient Preparation of Quantum Thermal States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Kastoryano, Michael J.

    2018-05-01

    Preparing quantum thermal states on a quantum computer is in general a difficult task. We provide a procedure to prepare a thermal state on a quantum computer with a logarithmic depth circuit of local quantum channels assuming that the thermal state correlations satisfy the following two properties: (i) the correlations between two regions are exponentially decaying in the distance between the regions, and (ii) the thermal state is an approximate Markov state for shielded regions. We require both properties to hold for the thermal state of the Hamiltonian on any induced subgraph of the original lattice. Assumption (ii) is satisfied for all commuting Gibbs states, while assumption (i) is satisfied for every model above a critical temperature. Both assumptions are satisfied in one spatial dimension. Moreover, both assumptions are expected to hold above the thermal phase transition for models without any topological order at finite temperature. As a building block, we show that exponential decay of correlation (for thermal states of Hamiltonians on all induced subgraphs) is sufficient to efficiently estimate the expectation value of a local observable. Our proof uses quantum belief propagation, a recent strengthening of strong sub-additivity, and naturally breaks down for states with topological order.

  6. Stochastic analysis of surface roughness models in quantum wires

    NASA Astrophysics Data System (ADS)

    Nedjalkov, Mihail; Ellinghaus, Paul; Weinbub, Josef; Sadi, Toufik; Asenov, Asen; Dimov, Ivan; Selberherr, Siegfried

    2018-07-01

    We present a signed particle computational approach for the Wigner transport model and use it to analyze electron state dynamics in quantum wires, focusing on the effect of surface roughness. Usually, surface roughness is treated as a scattering model, accounted for by the Fermi Golden Rule, which relies on approximations like statistical averaging and, in the case of quantum wires, incorporates quantum corrections based on the mode-space approach. We provide a novel computational approach that enables physical analysis of these assumptions in terms of phase space and particles. We utilize the signed-particle model of Wigner evolution, which, besides providing a full quantum description of the electron dynamics, enables intuitive insights into the processes of tunneling that govern the physical evolution. It is shown that the basic assumptions of the quantum-corrected scattering model correspond to the quantum behavior of the electron system. Of particular importance is the distribution of the density: due to the quantum confinement, electrons are kept away from the walls, in contrast to the classical scattering model. Further quantum effects are retardation of the electron dynamics and quantum reflection. Far from equilibrium, the assumption of homogeneous conditions along the wire breaks down even in the case of ideal wire walls.

  7. Statistical Mechanical Derivation of Jarzynski's Identity for Thermostated Non-Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Cuendet, Michel A.

    2006-03-01

    The recent Jarzynski identity (JI) relates thermodynamic free energy differences to nonequilibrium work averages. Several proofs of the JI have been provided on the thermodynamic level. They rely on assumptions such as equivalence of ensembles in the thermodynamic limit or weakly coupled infinite heat baths. However, the JI is widely applied to NVT computer simulations involving finite numbers of particles, whose equations of motion are strongly coupled to a few extra degrees of freedom modeling a thermostat. In this case, the above assumptions are no longer valid. We propose a statistical mechanical approach to the JI solely based on the specific equations of motion, without any further assumption. We provide a detailed derivation for the non-Hamiltonian Nosé-Hoover dynamics, which is routinely used in computer simulations to produce canonical sampling.
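The identity itself is compact: ΔF = -kT ln⟨exp(-W/kT)⟩ over nonequilibrium work samples W. A small numerical check (with synthetic Gaussian work values, not simulation data) recovers the known Gaussian-work result ΔF = ⟨W⟩ - σ²/(2kT):

```python
import math
import random

def jarzynski_free_energy(works, kT):
    """Jarzynski estimator: dF = -kT * ln< exp(-W/kT) > over a list of
    nonequilibrium work samples W (same energy units as kT)."""
    mean_exp = sum(math.exp(-w / kT) for w in works) / len(works)
    return -kT * math.log(mean_exp)

# For Gaussian work with mean 2.0 and std 0.5 (kT = 1), the identity
# gives dF = 2.0 - 0.5**2 / 2 = 1.875 in the many-samples limit.
random.seed(1)
kT, mean_w, sigma = 1.0, 2.0, 0.5
works = [random.gauss(mean_w, sigma) for _ in range(200000)]
print(jarzynski_free_energy(works, kT))  # close to 1.875
```

Note the exponential average is dominated by rare low-work trajectories, which is why convergence of this estimator is a well-known practical issue in simulations.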

  8. Calculation of effective transport properties of partially saturated gas diffusion layers

    NASA Astrophysics Data System (ADS)

    Bednarek, Tomasz; Tsotridis, Georgios

    2017-02-01

    A large number of currently available Computational Fluid Dynamics numerical models of Polymer Electrolyte Membrane Fuel Cells (PEMFC) are based on the assumption that porous structures can be treated as thin and homogeneous layers; hence, the mass transport equations in structures such as Gas Diffusion Layers (GDL) are usually modelled according to the Darcy assumptions. Application of homogeneous models implies that the effects of porous structures are taken into consideration via the effective transport properties of porosity, tortuosity, permeability (or flow resistance), diffusivity, and electric and thermal conductivity. Therefore, reliable values of those effective properties of the GDL play a significant role in PEMFC modelling with Computational Fluid Dynamics, since these parameters are required as input values for the numerical calculations. The objective of the current study is to calculate the effective transport properties of the GDL, namely gas permeability, diffusivity and thermal conductivity, as a function of liquid water saturation by using the Lattice-Boltzmann approach. The study proposes a method of uniform water impregnation of the GDL based on the "Fine-Mist" assumption, taking into account the surface tension of water droplets and the actual shape of GDL pores.
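For intuition about why saturation degrades transport, a commonly used closed-form correction can stand in for the full computation (an illustrative Bruggeman-type correlation with an assumed exponent, not the Lattice-Boltzmann calculation performed in the study): only the gas-filled pore fraction conducts gas species.

```python
def effective_diffusivity(d_bulk, porosity, saturation, alpha=1.5):
    """Illustrative Bruggeman-type effective gas diffusivity: the bulk
    value is scaled by the gas-filled pore fraction eps*(1 - s) raised to
    an assumed tortuosity exponent alpha."""
    return d_bulk * (porosity * (1.0 - saturation)) ** alpha

# Hypothetical GDL with porosity 0.8: half-saturating the pores with
# liquid water cuts the effective gas diffusivity by well over half.
print(effective_diffusivity(1.0, 0.8, 0.0))  # dry
print(effective_diffusivity(1.0, 0.8, 0.5))  # half-saturated
```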

  9. FARSITE: Fire Area Simulator-model development and evaluation

    Treesearch

    Mark A. Finney

    1998-01-01

    A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.

  10. Formal specification and verification of a fault-masking and transient-recovery model for digital flight-control systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1991-01-01

    The formal specification and mechanically checked verification for a model of fault-masking and transient-recovery among the replicated computers of digital flight-control systems are presented. The verification establishes, subject to certain carefully stated assumptions, that faults among the component computers are masked so that commands sent to the actuators are the same as those that would be sent by a single computer that suffers no failures.

  11. MIX: a computer program to evaluate interaction between chemicals

    Treesearch

    Jacqueline L. Robertson; Kimberly C. Smith

    1989-01-01

    A computer program, MIX, was designed to identify pairs of chemicals whose interaction results in a response that departs significantly from the model predicated on the assumption of independent, uncorrelated joint action. This report describes the MIX program, its statistical basis, and instructions for its use.
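The null model that MIX tests against can be written down directly (a sketch of the independence model only; MIX's statistical machinery for judging significance is not reproduced here, and the response probabilities are hypothetical):

```python
def independent_joint_mortality(p_a, p_b):
    """Expected response probability for a mixture under independent,
    uncorrelated joint action: a subject responds unless it would have
    survived both chemicals alone."""
    return p_a + p_b - p_a * p_b

def interaction(observed, p_a, p_b):
    """Departure of the observed mixture response from the independence
    prediction: positive suggests greater-than-independent (synergistic)
    action, negative suggests antagonism."""
    return observed - independent_joint_mortality(p_a, p_b)

# Chemicals killing 30% and 40% alone predict 58% jointly; an observed
# 90% kill would be a large positive departure worth testing formally.
print(independent_joint_mortality(0.3, 0.4))
print(interaction(0.9, 0.3, 0.4))
```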

  12. Algorithms for the Computation of Debris Risk

    NASA Technical Reports Server (NTRS)

    Matney, Mark J.

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of satellites. A number of tools have been developed in NASA’s Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms used in NASA’s Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.
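The simplest building block of such risk calculations is the Poisson flux model (the standard flux-times-area-times-time formulation; the flux, area, and duration values below are hypothetical, not ORDEM outputs):

```python
import math

def collision_probability(flux, area, years):
    """Poisson collision model: with a debris flux F (impacts per square
    meter per year) on cross-sectional area A (m^2) over t years, the
    expected number of impacts is N = F*A*t and the probability of at
    least one impact is P = 1 - exp(-N)."""
    n = flux * area * years
    return 1.0 - math.exp(-n)

# e.g. a flux of 1e-5 impacts/m^2/yr on a 20 m^2 spacecraft for 10 years:
print(collision_probability(1e-5, 20.0, 10.0))  # ~0.2% chance of impact
```

More refined tools replace the single flux number with direction- and size-resolved fluxes and non-spherical projected areas, which is where the geometric assumptions discussed in the paper enter.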

  13. Algorithms for the Computation of Debris Risks

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.

  14. Model description document for a computer program for the emulation/simulation of a space station environmental control and life support system (ESCM)

    NASA Technical Reports Server (NTRS)

    Yanosy, James L.

    1988-01-01

    The Emulation/Simulation Computer Model (ESCM) computes the transient performance of a Space Station air revitalization subsystem with carbon dioxide removal provided by a solid amine water desorption subsystem called SAWD. This manual describes the mathematical modeling and equations used in the ESCM. For the system as a whole and for each individual component, the fundamental physical and chemical laws that govern their operation are presented. Assumptions are stated and, when necessary, data are presented to support empirically developed relationships.

  15. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions that each model type enforces on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, the various assumptions about the disease process, and the choice of time advance.
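The effect of the time-advance choice alone can be demonstrated with a toy SIR model stepped at two resolutions (an illustration of the general point with hypothetical parameters, not the models analyzed in the study): the same equations produce a different simulated epidemic when advanced in coarse discrete steps versus near-continuously.

```python
def sir_epidemic(beta, gamma, i0, dt, t_end):
    """Deterministic SIR model (fractions of the population) integrated
    with fixed explicit time steps of size dt."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(t_end / dt)):
        new_inf = beta * s * i * dt   # S -> I transitions this step
        new_rec = gamma * i * dt      # I -> R transitions this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r  # peak prevalence and final epidemic size

# Identical parameters, different time advance:
coarse = sir_epidemic(beta=0.5, gamma=0.2, i0=0.01, dt=1.0, t_end=200)
fine = sir_epidemic(beta=0.5, gamma=0.2, i0=0.01, dt=0.01, t_end=200)
print(coarse, fine)  # peak and final size shift with the step size alone
```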

  16. Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models

    DOE PAGES

    Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...

    2016-05-01

    Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in aiding public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABMs/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model type (i.e., EBM vs. ABM), the underlying assumptions that each model type enforces on the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, the various assumptions about the disease process, and the choice of time advance.

  17. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  18. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  19. Impact of one-layer assumption on diffuse reflectance spectroscopy of skin

    NASA Astrophysics Data System (ADS)

    Hennessy, Ricky; Markey, Mia K.; Tunnell, James W.

    2015-02-01

    Diffuse reflectance spectroscopy (DRS) can be used to noninvasively measure skin properties. To extract skin properties from DRS spectra, a model is needed that relates the reflectance to the tissue properties. Most models are based on the assumption that skin is homogeneous. In reality, skin is composed of multiple layers, and the homogeneity assumption can lead to errors. In this study, we analyze the errors caused by the homogeneity assumption. This is accomplished by creating realistic skin spectra using a computational model, then extracting properties from those spectra using a one-layer model. The extracted parameters are then compared to the parameters used to create the modeled spectra. We used a wavelength range of 400 to 750 nm and a source-detector separation of 250 μm. Our results show that use of a one-layer skin model causes underestimation of hemoglobin concentration [Hb] and melanin concentration [mel]. Additionally, the magnitude of the error is dependent on epidermal thickness. The one-layer assumption also causes [Hb] and [mel] to be correlated. Oxygen saturation is overestimated when it is below 50% and underestimated when it is above 50%. We also found that the vessel radius factor used to account for pigment packaging is correlated with epidermal thickness.

  20. A COMPUTATIONALLY EFFICIENT HYBRID APPROACH FOR DYNAMIC GAS/AEROSOL TRANSFER IN AIR QUALITY MODELS. (R826371C005)

    EPA Science Inventory

    Dynamic mass transfer methods have been developed to better describe the interaction of the aerosol population with semi-volatile species such as nitrate, ammonia, and chloride. Unfortunately, these dynamic methods are computationally expensive. Assumptions are often made to r...

  1. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

    Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity, and some very practical recommendations help to conquer that complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, as well as the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
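The centering-and-scaling step recommended above is simple to implement before any model fitting (a minimal sketch with hypothetical data; full-rank coding of categorical predictors would be handled separately):

```python
import numpy as np

def center_scale(X):
    """Center each predictor column at its sample mean and scale it to
    unit standard deviation, returning the transformed matrix along with
    the means and scales needed to interpret coefficients afterwards.
    Constant columns are left unscaled to avoid division by zero."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    sd = np.where(sd == 0, 1.0, sd)
    return (X - mu) / sd, mu, sd

# Hypothetical longitudinal predictors: age in years and dose in mg.
X = [[12.0, 100.0], [13.0, 200.0], [14.0, 300.0]]
Xs, mu, sd = center_scale(X)
```

Fitting the mixed model on `Xs` instead of `X` puts all predictors on comparable numerical scales, which is what improves optimizer convergence and conditioning.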

  2. BEHAVE: fire behavior prediction and fuel modeling system-BURN Subsystem, part 1

    Treesearch

    Patricia L. Andrews

    1986-01-01

    Describes BURN Subsystem, Part 1, the operational fire behavior prediction subsystem of the BEHAVE fire behavior prediction and fuel modeling system. The manual covers operation of the computer program, assumptions of the mathematical models used in the calculations, and application of the predictions.

  3. Spreading dynamics on complex networks: a general stochastic approach.

    PubMed

    Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J

    2014-12-01

    Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
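
    A minimal concrete instance of such stochastic spreading dynamics is a discrete-time SIS process on a random network. The sketch below is a plain node-level simulation with invented parameters, far coarser than the motif-based framework described above, but it shows the kind of process being modelled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random undirected network (Erdos-Renyi style), hypothetical size and density
n, p_edge = 100, 0.05
A = np.triu(rng.random((n, n)) < p_edge, 1)
A = (A | A.T).astype(int)

beta, gamma = 0.1, 0.05            # per-neighbour infection / recovery probabilities
state = np.zeros(n, dtype=int)     # 0 = susceptible, 1 = infectious
state[rng.choice(n, 5, replace=False)] = 1

for _ in range(200):               # discrete-time SIS updates
    inf_neighbours = A @ state     # infectious neighbours of each node
    p_inf = 1 - (1 - beta) ** inf_neighbours
    new_inf = (state == 0) & (rng.random(n) < p_inf)
    recovered = (state == 1) & (rng.random(n) < gamma)
    state[new_inf] = 1
    state[recovered] = 0

print("prevalence after 200 steps:", state.mean())
```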

  4. Documentation of a computer program to simulate stream-aquifer relations using a modular, finite-difference, ground-water flow model

    USGS Publications Warehouse

    Prudic, David E.

    1989-01-01

    Computer models are widely used to simulate groundwater flow for evaluating and managing the groundwater resource of many aquifers, but few are designed to also account for surface flow in streams. A computer program was written for use in the US Geological Survey modular finite difference groundwater flow model to account for the amount of flow in streams and to simulate the interaction between surface streams and groundwater. The new program is called the Streamflow-Routing Package. The Streamflow-Routing Package is not a true surface water flow model, but rather is an accounting program that tracks the flow in one or more streams which interact with groundwater. The program limits the amount of groundwater recharge to the available streamflow. It permits two or more streams to merge into one with flow in the merged stream equal to the sum of the tributary flows. The program also permits diversions from streams. The groundwater flow model with the Streamflow-Routing Package has an advantage over the analytical solution in simulating the interaction between aquifer and stream because it can be used to simulate complex systems that cannot be readily solved analytically. The Streamflow-Routing Package does not include a time function for streamflow but rather streamflow entering the modeled area is assumed to be instantly available to downstream reaches during each time period. This assumption is generally reasonable because of the relatively slow rate of groundwater flow. Another assumption is that leakage between streams and aquifers is instantaneous. This assumption may not be reasonable if the streams and aquifers are separated by a thick unsaturated zone. Documentation of the Streamflow-Routing Package includes data input instructions; flow charts, narratives, and listings of the computer program for each of four modules; and input data sets and printed results for two test problems, and one example problem. (Lantz-PTT)
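
    The accounting idea, streamflow routed reach by reach with leakage capped by the water actually available and tributaries summed at junctions, can be sketched in a few lines. This is a schematic illustration with invented numbers, not the package's actual algorithm.

```python
def route_streamflow(inflow, reaches):
    """Track flow through reaches in downstream order.

    Each reach dict has 'potential_leakage' (positive: stream loses to the
    aquifer; negative: stream gains) and an optional 'tributary_inflow'.
    Leakage to the aquifer is limited to the streamflow available.
    """
    flows, q = [], inflow
    for reach in reaches:
        q += reach.get("tributary_inflow", 0.0)
        leak = reach["potential_leakage"]
        if leak > 0:
            leak = min(leak, q)   # recharge cannot exceed available streamflow
        q = max(q - leak, 0.0)
        flows.append(q)
    return flows

flows = route_streamflow(
    10.0,
    [{"potential_leakage": 3.0},
     {"potential_leakage": 12.0, "tributary_inflow": 2.0},  # demand > supply
     {"potential_leakage": -1.5}],                          # gaining reach
)
print(flows)
```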

  5. Genetic dissection of the consensus sequence for the class 2 and class 3 flagellar promoters

    PubMed Central

    Wozniak, Christopher E.; Hughes, Kelly T.

    2008-01-01

    Computational searches for DNA binding sites often utilize consensus sequences. These search models make assumptions that the frequency of a base pair in an alignment relates to the base pair’s importance in binding and presume that base pairs contribute independently to the overall interaction with the DNA binding protein. These two assumptions have generally been found to be accurate for DNA binding sites. However, these assumptions are often not satisfied for promoters, which are involved in additional steps in transcription initiation after RNA polymerase has bound to the DNA. To test these assumptions for the flagellar regulatory hierarchy, class 2 and class 3 flagellar promoters were randomly mutagenized in Salmonella. Important positions were then saturated for mutagenesis and compared to scores calculated from the consensus sequence. Double mutants were constructed to determine how mutations combined for each promoter type. Mutations in the binding site for FlhD4C2, the activator of class 2 promoters, better satisfied the assumptions for the binding model than did mutations in the class 3 promoter, which is recognized by the σ28 transcription factor. These in vivo results indicate that the activator sites within flagellar promoters can be modeled using simple assumptions but that the DNA sequences recognized by the flagellar sigma factor require more complex models. PMID:18486950
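
    The two assumptions being tested, that base-pair frequency reflects importance and that positions contribute independently, are exactly what a consensus-based position weight matrix encodes. A toy sketch with invented sites (not real flagellar promoter sequences):

```python
import math

# Toy alignment of hypothetical binding sites; a position weight matrix (PWM)
# assumes each position contributes independently to the total score.
sites = ["TATAAT", "TATGAT", "TAAAAT", "TATACT"]
bases = "ACGT"

# Position-specific base frequencies with a small pseudocount
pwm = []
for pos in range(len(sites[0])):
    counts = {b: 1.0 for b in bases}          # pseudocount of 1 per base
    for s in sites:
        counts[s[pos]] += 1
    total = sum(counts.values())
    pwm.append({b: counts[b] / total for b in bases})

def score(seq, background=0.25):
    """Sum of per-position log-odds scores: the independence assumption."""
    return sum(math.log2(pwm[i][b] / background) for i, b in enumerate(seq))

print(score("TATAAT") > score("GGGCCC"))  # consensus-like sequence scores higher
```

    Summing per-position log-odds is only valid when positions contribute independently, the assumption the study finds adequate for activator sites but not for the sigma-factor-recognized promoter.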

  6. A Pseudo-Vertical Equilibrium Model for Slow Gravity Drainage Dynamics

    NASA Astrophysics Data System (ADS)

    Becker, Beatrix; Guo, Bo; Bandilla, Karl; Celia, Michael A.; Flemisch, Bernd; Helmig, Rainer

    2017-12-01

    Vertical equilibrium (VE) models are computationally efficient and have been widely used for modeling fluid migration in the subsurface. However, they rely on the assumption of instant gravity segregation of the two fluid phases which may not be valid especially for systems that have very slow drainage at low wetting phase saturations. In these cases, the time scale for the wetting phase to reach vertical equilibrium can be several orders of magnitude larger than the time scale of interest, rendering conventional VE models unsuitable. Here we present a pseudo-VE model that relaxes the assumption of instant segregation of the two fluid phases by applying a pseudo-residual saturation inside the plume of the injected fluid that declines over time due to slow vertical drainage. This pseudo-VE model is cast in a multiscale framework for vertically integrated models with the vertical drainage solved as a fine-scale problem. Two types of fine-scale models are developed for the vertical drainage, which lead to two pseudo-VE models. Comparisons with a conventional VE model and a full multidimensional model show that the pseudo-VE models have much wider applicability than the conventional VE model while maintaining the computational benefit of the conventional VE model.
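
    The key ingredient, a pseudo-residual saturation inside the plume that declines as slow vertical drainage proceeds, can be caricatured with a single exponential relaxation. The paper instead solves a fine-scale vertical drainage problem; the functional form and all values below are invented for illustration.

```python
import math

s_res = 0.10      # true residual wetting-phase saturation
s0 = 0.60         # wetting saturation when the injected plume first arrives
tau = 5.0e7       # hypothetical vertical drainage time scale, in seconds

def pseudo_residual(t):
    """Pseudo-residual saturation declining toward the true residual."""
    return s_res + (s0 - s_res) * math.exp(-t / tau)

for t in (0.0, tau, 10.0 * tau):
    print(f"t = {t:9.2e} s   s_pr = {pseudo_residual(t):.3f}")
```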

  7. Modeling of Photoionized Plasmas

    NASA Technical Reports Server (NTRS)

    Kallman, Timothy R.

    2010-01-01

    In this paper I review the motivation and current status of modeling of plasmas exposed to strong radiation fields, as it applies to the study of cosmic X-ray sources. This includes some of the astrophysical issues which can be addressed, the ingredients for the models, the current computational tools, the limitations imposed by currently available atomic data, and the validity of some of the standard assumptions. I will also discuss ideas for the future: challenges associated with future missions, opportunities presented by improved computers, and goals for atomic data collection.

  8. Ethical issues in engineering models: an operations researcher's reflections.

    PubMed

    Kleijnen, J

    2011-09-01

    This article starts with an overview of the author's personal involvement--as an Operations Research consultant--in several engineering case-studies that may raise ethical questions; e.g., case-studies on nuclear waste, water management, sustainable ecology, military tactics, and animal welfare. All these case studies employ computer simulation models. In general, models are meant to solve practical problems, which may have ethical implications for the various stakeholders; namely, the modelers, the clients, and the public at large. The article further presents an overview of codes of ethics in a variety of disciples. It discusses the role of mathematical models, focusing on the validation of these models' assumptions. Documentation of these model assumptions needs special attention. Some ethical norms and values may be quantified through the model's multiple performance measures, which might be optimized. The uncertainty about the validity of the model leads to risk or uncertainty analysis and to a search for robust models. Ethical questions may be pressing in military models, including war games. However, computer games and the related experimental economics may also provide a special tool to study ethical issues. Finally, the article briefly discusses whistleblowing. Its many references to publications and websites enable further study of ethical issues in modeling.

  9. Finite area combustor theoretical rocket performance

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Mcbride, Bonnie J.

    1988-01-01

    Previous to this report, the computer program of NASA SP-273 and NASA TM-86885 was capable of calculating theoretical rocket performance based only on the assumption of an infinite area combustion chamber (IAC). An option was added to this program which now also permits the calculation of rocket performance based on the assumption of a finite area combustion chamber (FAC). In the FAC model, the combustion process in the cylindrical chamber is assumed to be adiabatic, but nonisentropic. This results in a stagnation pressure drop from the injector face to the end of the chamber and a lower calculated performance for the FAC model than the IAC model.

  10. Fundamentals and Recent Developments in Approximate Bayesian Computation

    PubMed Central

    Lintusaari, Jarno; Gutmann, Michael U.; Dutta, Ritabrata; Kaski, Samuel; Corander, Jukka

    2017-01-01

    Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.] PMID:28175922
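
    The simplest member of the ABC family is rejection ABC: draw a parameter from the prior, simulate data, and keep the draw if a summary statistic of the simulated data falls within a tolerance of the observed one. A self-contained toy example (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed data from a model with unknown parameter theta (here: mean of a normal)
theta_true = 3.0
observed = rng.normal(theta_true, 1.0, 100)
s_obs = observed.mean()                      # summary statistic

def rejection_abc(n_draws=20000, eps=0.05):
    """Rejection ABC: keep prior draws whose simulated summary is near s_obs."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-10, 10)         # flat prior
        sim = rng.normal(theta, 1.0, 100)    # simulate from the model
        if abs(sim.mean() - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

post = rejection_abc()
print(f"approximate posterior mean: {post.mean():.2f} (true value {theta_true})")
```

    Only the ability to sample from the model is required; no likelihood is ever evaluated.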

  11. A streamline splitting pore-network approach for computationally inexpensive and accurate simulation of transport in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehmani, Yashar; Oostrom, Martinus; Balhoff, Matthew

    2014-03-20

    Several approaches have been developed in the literature for solving flow and transport at the pore-scale. Some authors use a direct modeling approach where the fundamental flow and transport equations are solved on the actual pore-space geometry. Such direct modeling, while very accurate, comes at a great computational cost. Network models are computationally more efficient because the pore-space morphology is approximated. Typically, a mixed cell method (MCM) is employed for solving the flow and transport system which assumes pore-level perfect mixing. This assumption is invalid at moderate to high Peclet regimes. In this work, a novel Eulerian perspective on modeling flow and transport at the pore-scale is developed. The new streamline splitting method (SSM) allows for circumventing the pore-level perfect mixing assumption, while maintaining the computational efficiency of pore-network models. SSM was verified with direct simulations and excellent matches were obtained against micromodel experiments across a wide range of pore-structure and fluid-flow parameters. The increase in the computational cost from MCM to SSM is shown to be minimal, while the accuracy of SSM is much higher than that of MCM and comparable to direct modeling approaches. Therefore, SSM can be regarded as an appropriate balance between incorporating detailed physics and controlling computational cost. The truly predictive capability of the model allows for the study of pore-level interactions of fluid flow and transport in different porous materials. In this paper, we apply SSM and MCM to study the effects of pore-level mixing on transverse dispersion in 3D disordered granular media.

  12. The incompressibility assumption in computational simulations of nasal airflow.

    PubMed

    Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel

    2017-06-01

    Most of the computational works on nasal airflow up to date have assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulation for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below [Formula: see text]C approximately. Therefore, density variations should be considered for simulations at such low temperatures.
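
    The quantities driving the effect are the temperature dependence of air density and viscosity. A quick sketch using the ideal-gas law and Sutherland's law (standard textbook relations with standard reference constants; this is not the paper's OpenFOAM setup):

```python
# Temperature dependence of air properties that the incompressibility
# assumption neglects: ideal-gas density and Sutherland-law viscosity.
def air_density(T, p=101325.0, R=287.05):
    """Ideal gas law, kg/m^3, at pressure p (Pa) and temperature T (K)."""
    return p / (R * T)

def air_viscosity(T):
    """Sutherland's law for air, Pa*s (standard reference constants)."""
    mu0, T0, S = 1.716e-5, 273.15, 110.4
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

for T in (263.15, 293.15, 310.15):  # cold ambient, room, and body temperature
    print(f"T={T:6.1f} K  rho={air_density(T):.3f} kg/m^3  "
          f"mu={air_viscosity(T):.3e} Pa*s")
```

    Between a cold ambient of -10 °C and body temperature, density changes by roughly 15%, which is why incompressible results drift at low ambient temperatures.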

  13. A survey of numerical models for wind prediction

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.

    1980-01-01

    A literature review is presented of the work done in the numerical modeling of wind flows. Pertinent computational techniques are described, as well as the necessary assumptions used to simplify the governing equations. A steady state model is outlined, based on the data obtained at the Deep Space Communications complex at Goldstone, California.

  14. Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  15. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  16. Models of Individual Trajectories in Computer-Assisted Instruction for Deaf Students. Technical Report No. 214.

    ERIC Educational Resources Information Center

    Suppes, P.; And Others

    From some simple and schematic assumptions about information processing, a stochastic differential equation is derived for the motion of a student through a computer-assisted elementary mathematics curriculum. The mathematics strands curriculum of the Institute for Mathematical Studies in the Social Sciences is used to test: (1) the theory and (2)…

  17. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS. measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10-17 m2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.

  18. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  20. A Review of Methods for Missing Data.

    ERIC Educational Resources Information Center

    Pigott, Therese D.

    2001-01-01

    Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…
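
    The difference between ad hoc and model-based treatment of missing data is easy to demonstrate. In the sketch below (simulated data, invented parameters), values are missing at random conditional on an observed covariate: complete-case analysis is biased, while a simple regression imputation, a crude stand-in for the EM and multiple-imputation methods reviewed, recovers the mean.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two correlated variables; y is missing more often when x is large (MAR)
n = 2000
x = rng.normal(0, 1, n)
y = 2 * x + rng.normal(0, 1, n)                  # true mean of y is 0
missing = rng.random(n) < 1 / (1 + np.exp(-x))   # missingness depends on observed x
y_obs = np.where(missing, np.nan, y)

obs = ~np.isnan(y_obs)
print("complete-case mean:", y_obs[obs].mean())  # biased low under MAR

# Model-based (regression) imputation using the observed relationship with x
b1, b0 = np.polyfit(x[obs], y_obs[obs], 1)
y_imp = np.where(obs, y_obs, b0 + b1 * x)
print("imputed-data mean:", y_imp.mean())        # close to the true mean 0
```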

  1. Models for the propensity score that contemplate the positivity assumption and their application to missing data and causality.

    PubMed

    Molina, J; Sued, M; Valdora, M

    2018-06-05

    Generalized linear models are often assumed to fit propensity scores, which are used to compute inverse probability weighted (IPW) estimators. To derive the asymptotic properties of IPW estimators, the propensity score is supposed to be bounded away from zero. This condition is known in the literature as strict positivity (or the positivity assumption), and, in practice, when it does not hold, IPW estimators are very unstable and have a large variability. Although strict positivity is often assumed, it is not upheld when some of the covariates are unbounded. In real data sets, a data-generating process that violates the positivity assumption may lead to wrong inference because of the inaccuracy in the estimations. In this work, we attempt to reconcile the strict positivity condition with the theory of generalized linear models by incorporating an extra parameter, which results in an explicit lower bound for the propensity score. An additional parameter is added to fulfil the overlap assumption in the causal framework. Copyright © 2018 John Wiley & Sons, Ltd.
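
    The instability, and the effect of an explicit lower bound on the propensity score, can be sketched as follows. Note that the paper incorporates the bound through an extra model parameter, whereas this illustration simply clips the score; all data are simulated and all values invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Treatment probability depends on an unbounded covariate x, so the raw
# propensity score can get arbitrarily close to zero (positivity fails).
n = 5000
x = rng.normal(0, 1.5, n)
p_raw = 1 / (1 + np.exp(-2.5 * x))            # logistic propensity score
t = (rng.random(n) < p_raw).astype(float)
y = 1.0 + 0.5 * x + t + rng.normal(0, 1, n)   # outcome; E[Y(1)] = 2

def ipw_mean(y, t, p):
    """IPW estimator of E[Y(1)]: unstable when p is nearly zero."""
    return np.mean(t * y / p)

delta = 0.05                                  # explicit lower bound on the score
p_bounded = np.clip(p_raw, delta, 1.0)

print("raw IPW:    ", ipw_mean(y, t, p_raw))
print("bounded IPW:", ipw_mean(y, t, p_bounded))
```

    Bounding trades a small bias for a large reduction in variance when near-zero scores occur.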

  2. Foundations for computer simulation of a low pressure oil flooded single screw air compressor

    NASA Astrophysics Data System (ADS)

    Bein, T. W.

    1981-12-01

    The necessary logic to construct a computer model to predict the performance of an oil flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are defined. The governing equations for the processes are derived from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flowrates through the leakage paths throughout the compression cycle of the single screw compressor. The model uses empirical external values as the basis for the internal predictions. The computed values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.

  3. How does the rigid-lid assumption affect LES simulation results in high Reynolds number flows?

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Farhadzadeh, Ali; SBU Collaboration

    2017-11-01

    This research is motivated by the work of Kara et al., JHE, 2015. They employed LES to model flow around a model abutment at a Re number of 27,000. They showed that first-order turbulence characteristics obtained by the rigid-lid (RL) assumption compare fairly well with those of the level-set (LS) method. Concerning the second-order statistics, however, their simulation results showed a significant dependence on the method used to describe the free surface. This finding can have important implications for open channel flow modeling. The Reynolds number for typical open channel flows, however, could be much larger than that of Kara et al.'s test case. Herein, we replicate the reported study by augmenting the geometric and hydraulic scales to reach a Re number one order of magnitude larger (~200,000). The Virtual Flow Simulator (VFS-Geophysics) model in its LES mode is used to simulate the test case using both RL and LS methods. The computational results are validated using measured flow and free-surface data from our laboratory experiments. Our goal is to investigate the effects of the RL assumption on both first-order and second-order statistics at the high Reynolds numbers that occur in natural waterways. Acknowledgment: Computational resources are provided by the Center of Excellence in Wireless & Information Technology (CEWIT) of Stony Brook University.

  4. Calculation of Macrosegregation in an Ingot

    NASA Technical Reports Server (NTRS)

    Poirier, D. R.; Maples, A. L.

    1986-01-01

    Report describes both two-dimensional theoretical model of macrosegregation (separating into regions of discrete composition) in solidification of binary alloy in chilled rectangular mold and interactive computer program embodying model. Model evolved from previous ones limited to calculating effects of interdendritic fluid flow on final macrosegregation for given input temperature field under assumption of no fluid in bulk melt.

  5. A CBI Model for the Design of CAI Software by Teachers/Nonprogrammers.

    ERIC Educational Resources Information Center

    Tessmer, Martin; Jonassen, David H.

    This paper describes a design model presented in workbook form which is intended to facilitate computer-assisted instruction (CAI) software design by teachers who do not have programming experience. Presentation of the model is preceded by a number of assumptions that underlie the instructional content and methods of the textbook. It is argued…

  6. Boltzmann's "H"-Theorem and the Assumption of Molecular Chaos

    ERIC Educational Resources Information Center

    Boozer, A. D.

    2011-01-01

    We describe a simple dynamical model of a one-dimensional ideal gas and use computer simulations of the model to illustrate two fundamental results of kinetic theory: the Boltzmann transport equation and the Boltzmann "H"-theorem. Although the model is time-reversal invariant, both results predict that the behaviour of the gas is time-asymmetric.…
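
    The H-theorem behaviour can be reproduced with an even simpler toy than the paper's: a Kac-style gas in which random pair "collisions" rotate velocity pairs while conserving kinetic energy. H, estimated from a histogram of the velocity distribution, decreases as the gas relaxes toward a Maxwellian. This is an independent illustration, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def H(v, bins=50):
    """Estimate Boltzmann's H = integral f ln f dv from a velocity histogram."""
    f, edges = np.histogram(v, bins=bins, density=True)
    dv = edges[1] - edges[0]
    f = f[f > 0]
    return np.sum(f * np.log(f)) * dv

# Kac-style toy gas: each 'collision' rotates a random velocity pair by a
# random angle, conserving v_i^2 + v_j^2 (total kinetic energy).
n = 10000
v = rng.uniform(-1.0, 1.0, n)   # far-from-equilibrium initial distribution
H0 = H(v)
for _ in range(10 * n):
    i, j = rng.integers(0, n, 2)
    if i == j:
        continue
    th = rng.uniform(0.0, 2.0 * np.pi)
    v[i], v[j] = (np.cos(th) * v[i] + np.sin(th) * v[j],
                  -np.sin(th) * v[i] + np.cos(th) * v[j])
H1 = H(v)
print(f"H before: {H0:.3f}  H after: {H1:.3f}")  # H decreases toward equilibrium
```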

  7. Fishery stock assessment of Kiddi shrimp ( Parapenaeopsis stylifera) in the Northern Arabian Sea Coast of Pakistan by using surplus production models

    NASA Astrophysics Data System (ADS)

    Mohsin, Muhammad; Mu, Yongtong; Memon, Aamir Mahmood; Kalhoro, Muhammad Talib; Shah, Syed Baber Hussain

    2017-07-01

    Pakistani marine waters are under an open access regime. Due to poor management and weak policy implementation, unregulated fishing continues, which may result in ecological as well as economic losses. Thus, it is of utmost importance to assess fishery resources before harvesting. In this study, catch and effort data, 1996-2009, of the Kiddi shrimp Parapenaeopsis stylifera fishery from Pakistani marine waters were analyzed using specialized fishery software in order to determine the stock status of this commercially important shrimp. Maximum, minimum and average capture production of P. stylifera was observed as 15 912 metric tons (mt) (1997), 9 438 mt (2009) and 11 667 mt/a. Two stock assessment tools, CEDA (catch and effort data analysis) and ASPIC (a stock production model incorporating covariates), were used to compute the MSY (maximum sustainable yield) of this organism. In CEDA, three surplus production models, Fox, Schaefer and Pella-Tomlinson, along with three error assumptions, log, log normal and gamma, were used. For initial proportion (IP) 0.8, the Fox model computed MSY as 6 858 mt (CV=0.204, R²=0.709) and 7 384 mt (CV=0.149, R²=0.72) for the log and log normal error assumptions, respectively. Here, the gamma error assumption produced minimization failure. The Schaefer and Pella-Tomlinson models estimated identical MSY values under the log, log normal and gamma error assumptions, i.e. 7 083 mt, 8 209 mt and 7 242 mt, respectively. The Schaefer results showed the highest goodness of fit, with R² of 0.712. ASPIC computed the MSY, CV, R², F_MSY and B_MSY parameters for the Fox model as 7 219 mt, 0.142, 0.872, 0.111 and 65 280, while for the Logistic model the computed values were 7 720 mt, 0.148, 0.868, 0.107 and 72 110, respectively. The results indicate that P. stylifera has been overexploited. Immediate steps are needed to conserve this fishery resource for the future, and research on other species of commercial importance is urgently needed.
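
    For reference, the Schaefer surplus production model B' = rB(1 - B/K) - C yields the reported management quantities in closed form. The parameter values below are purely illustrative, not the fitted values from the study.

```python
# Schaefer surplus production model: dB/dt = r*B*(1 - B/K) - C.
# Its management reference points follow in closed form.
def schaefer_reference_points(r, K):
    msy = r * K / 4    # maximum sustainable yield
    b_msy = K / 2      # biomass at which MSY is produced
    f_msy = r / 2      # fishing mortality giving MSY
    return msy, b_msy, f_msy

# Hypothetical illustrative parameters (intrinsic growth r, carrying capacity K)
msy, b_msy, f_msy = schaefer_reference_points(r=0.22, K=130_000)
print(f"MSY={msy:.0f} mt, B_MSY={b_msy:.0f} mt, F_MSY={f_msy:.2f}")
```

    Catches persistently above the MSY implied by the fitted r and K are the basis for the overexploitation conclusion.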

  8. Learning Assumptions for Compositional Verification

    NASA Technical Reports Server (NTRS)

    Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.

  9. Evolution of product lifespan and implications for environmental assessment and management: a case study of personal computers in higher education.

    PubMed

    Babbitt, Callie W; Kahhat, Ramzy; Williams, Eric; Babbitt, Gregory A

    2009-07-01

Product lifespan is a fundamental variable in understanding the environmental impacts associated with the life cycle of products. Existing life cycle and materials flow studies of products, almost without exception, consider lifespan to be constant over time. To determine the validity of this assumption, this study provides an empirical documentation of the long-term evolution of personal computer lifespan, using a major U.S. university as a case study. Results indicate that over the period 1985-2000, computer lifespan (purchase to "disposal") decreased steadily from a mean of 10.7 years in 1985 to 5.5 years in 2000. The distribution of lifespan also evolved, becoming narrower over time. Overall, however, the lifespan distribution was broader than normally considered in life cycle assessments or materials flow forecasts of electronic waste management for policy. We argue that these results suggest that, at least for computers, the assumption of constant lifespan is problematic and that it is important to work toward understanding the dynamics of use patterns. We adapt an age-structured model of population dynamics from biology to describe product life cycles. Lastly, the purchase share and generation of obsolete computers from the higher education sector are estimated using different scenarios for the dynamics of product lifespan.
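An age-structured bookkeeping of the kind described can be sketched as a convolution of past purchases with a lifespan distribution. All numbers below are illustrative, not the study's data:

```python
def obsolete_units(purchases, lifespan_pmf):
    """Units retired in year t: sum over ages a of purchases[t - a] * P(lifespan = a)."""
    horizon = len(purchases)
    retired = [0.0] * horizon
    for t in range(horizon):
        for age, p in enumerate(lifespan_pmf):
            if 0 <= t - age < horizon:
                retired[t] += purchases[t - age] * p
    return retired

# Illustrative: 100 units bought each year; lifespans of 2 or 3 years (50/50)
pmf = [0.0, 0.0, 0.5, 0.5]
print(obsolete_units([100, 100, 100, 100], pmf))  # [0.0, 0.0, 50.0, 100.0]
```

A time-varying lifespan distribution, as documented above, would make `lifespan_pmf` a function of the purchase year rather than a constant.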

  10. Model documentation Renewable Fuels Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-01-01

This report documents the objectives, analytical approach and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1996 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described.

  11. Upon Accounting for the Impact of Isoenzyme Loss, Gene Deletion Costs Anticorrelate with Their Evolutionary Rates.

    PubMed

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; Segrè, Daniel

    2017-01-01

    System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  12. Assessing the Performance of a Computer-Based Policy Model of HIV and AIDS

    PubMed Central

    Rydzak, Chara E.; Cotich, Kara L.; Sax, Paul E.; Hsu, Heather E.; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A.; Weinstein, Milton C.; Goldie, Sue J.

    2010-01-01

Background Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. The challenges of disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. Methods and Findings We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the ‘clinical effectiveness’ of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. Conclusions The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models. PMID:20844741

  13. Assessing the performance of a computer-based policy model of HIV and AIDS.

    PubMed

    Rydzak, Chara E; Cotich, Kara L; Sax, Paul E; Hsu, Heather E; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A; Weinstein, Milton C; Goldie, Sue J

    2010-09-09

Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. The challenges of disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the 'clinical effectiveness' of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models.

  14. Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions

    DTIC Science & Technology

    2013-01-01

assumptions (e.g., [DS98, Dam00, CGGM00, Gol02, PTV12, GJO+12]), or in alternative models (e.g., super-polynomial-time simulation [Pas03b, PV10]). In the...T(·)-time computations, where T(·) is some "nice" (slightly) super-polynomial function (e.g., T(n) = n^(log log log n)). We refer to such proof...put a cap on both using a (slightly) super-polynomial function, and thus to guarantee soundness of the concurrent zero-knowledge protocol, we need

  15. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302

  16. Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions

    ERIC Educational Resources Information Center

    Vuolo, Mike

    2017-01-01

    Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
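As a concrete illustration of the copula approach (not code from the article): a Gaussian copula with correlation ρ implies the closed-form Kendall's τ = (2/π) arcsin(ρ), and sampling from it only requires pushing correlated normals through the normal CDF. A hedged stdlib-only sketch:

```python
import math
import random

def kendalls_tau_gaussian(rho):
    """Closed-form Kendall's tau implied by a Gaussian copula with correlation rho."""
    return (2.0 / math.pi) * math.asin(rho)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_gaussian_copula(rho, n, seed=0):
    """Draw n pairs (u, v) on [0,1]^2 with Gaussian-copula dependence rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        y = rho * z1 + math.sqrt(1.0 - rho ** 2) * z2  # correlate the normals
        pairs.append((norm_cdf(z1), norm_cdf(y)))
    return pairs
```

The uniform margins (u, v) can then be mapped through any inverse CDFs, which is what makes the copula useful for the nonnormal variables discussed above.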

  17. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180 degrees geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for this disagreement lies in a basic assumption shared by the two codes: both assume a "free" electron model for Compton interactions. This assumption underestimates the results and undermines direct comparison between predicted and experimental spectra.

  18. Camera traps and mark-resight models: The value of ancillary data for evaluating assumptions

    USGS Publications Warehouse

    Parsons, Arielle W.; Simons, Theodore R.; Pollock, Kenneth H.; Stoskopf, Michael K.; Stocking, Jessica J.; O'Connell, Allan F.

    2015-01-01

    Unbiased estimators of abundance and density are fundamental to the study of animal ecology and critical for making sound management decisions. Capture–recapture models are generally considered the most robust approach for estimating these parameters but rely on a number of assumptions that are often violated but rarely validated. Mark-resight models, a form of capture–recapture, are well suited for use with noninvasive sampling methods and allow for a number of assumptions to be relaxed. We used ancillary data from continuous video and radio telemetry to evaluate the assumptions of mark-resight models for abundance estimation on a barrier island raccoon (Procyon lotor) population using camera traps. Our island study site was geographically closed, allowing us to estimate real survival and in situ recruitment in addition to population size. We found several sources of bias due to heterogeneity of capture probabilities in our study, including camera placement, animal movement, island physiography, and animal behavior. Almost all sources of heterogeneity could be accounted for using the sophisticated mark-resight models developed by McClintock et al. (2009b) and this model generated estimates similar to a spatially explicit mark-resight model previously developed for this population during our study. Spatially explicit capture–recapture models have become an important tool in ecology and confer a number of advantages; however, non-spatial models that account for inherent individual heterogeneity may perform nearly as well, especially where immigration and emigration are limited. Non-spatial models are computationally less demanding, do not make implicit assumptions related to the isotropy of home ranges, and can provide insights with respect to the biological traits of the local population.
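Mark-resight abundance estimation in its simplest form, far simpler than the McClintock et al. models discussed above, is the Lincoln-Petersen estimator with Chapman's bias correction. A hedged sketch with made-up counts:

```python
def chapman_estimate(n_marked, n_sighted, n_marked_sighted):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.

    N_hat = (M + 1)(n + 1) / (m + 1) - 1, where M animals are marked,
    n are sighted in the resight survey, and m of those are marked.
    Assumes equal sightability -- exactly the heterogeneity assumption
    the study above shows is violated by camera placement and behavior.
    """
    return (n_marked + 1) * (n_sighted + 1) / (n_marked_sighted + 1) - 1

# Illustrative counts only: 50 marked raccoons, 40 sightings, 20 of them marked
print(round(chapman_estimate(50, 40, 20), 2))  # 98.57
```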

  19. F-14 modeling study

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Baron, S.

    1984-01-01

    Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.

  20. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
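For the equal-variance Gaussian signal detection model, the likelihood-ratio decision axis has a simple linear form; a hedged sketch (parameter values illustrative, not from the studies analyzed):

```python
def log_likelihood_ratio(x, d_prime):
    """log[ f(x | old) / f(x | new) ] for new ~ N(0,1) and old ~ N(d',1).

    Expanding the Gaussian densities gives (x^2 - (x-d')^2)/2 = d'*x - d'^2/2,
    so the log-likelihood ratio is linear in the familiarity value x.
    """
    return d_prime * x - d_prime ** 2 / 2.0

def decide_old(x, d_prime, criterion=0.0):
    """Respond 'old' when the log-likelihood ratio exceeds the criterion."""
    return log_likelihood_ratio(x, d_prime) > criterion
```

Placing the criterion on the likelihood ratio rather than on x itself is what produces the mirror effect and the other regularities described above.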

  1. Rapid computational identification of the targets of protein kinase inhibitors.

    PubMed

    Rockey, William M; Elcock, Adrian H

    2005-06-16

    We describe a method for rapidly computing the relative affinities of an inhibitor for all individual members of a family of homologous receptors. The approach, implemented in a new program, SCR, models inhibitor-receptor interactions in full atomic detail with an empirical energy function and includes an explicit account of flexibility in homology-modeled receptors through sampling of libraries of side chain rotamers. SCR's general utility was demonstrated by application to seven different protein kinase inhibitors: for each inhibitor, relative binding affinities with panels of approximately 20 protein kinases were computed and compared with experimental data. For five of the inhibitors (SB203580, purvalanol B, imatinib, H89, and hymenialdisine), SCR provided excellent reproduction of the experimental trends and, importantly, was capable of identifying the targets of inhibitors even when they belonged to different kinase families. The method's performance in a predictive setting was demonstrated by performing separate training and testing applications, and its key assumptions were tested by comparison with a number of alternative approaches employing the ligand-docking program AutoDock (Morris et al. J. Comput. Chem. 1998, 19, 1639-1662). These comparison tests included using AutoDock in nondocking and docking modes and performing energy minimizations of inhibitor-kinase complexes with the molecular mechanics code GROMACS (Berendsen et al. Comput. Phys. Commun. 1995, 91, 43-56). It was found that a surprisingly important aspect of SCR's approach is its assumption that the inhibitor be modeled in the same orientation for each kinase: although this assumption is in some respects unrealistic, calculations that used apparently more realistic approaches produced clearly inferior results. 
Finally, as a large-scale application of the method, SB203580, purvalanol B, and imatinib were screened against an almost full complement of 493 human protein kinases using SCR in order to identify potential new targets; the predicted targets of SB203580 were compared with those identified in recent proteomics-based experiments. These kinome-wide screens, performed within a day on a small cluster of PCs, indicate that explicit computation of inhibitor-receptor binding affinities has the potential to promote rapid discovery of new therapeutic targets for existing inhibitors.

  2. Modeling intelligent adversaries for terrorism risk assessment: some necessary conditions for adversary models.

    PubMed

    Guikema, Seth

    2012-07-01

    Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.

  3. EVOLUTION OF THE MAGNETIC FIELD LINE DIFFUSION COEFFICIENT AND NON-GAUSSIAN STATISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snodin, A. P.; Ruffolo, D.; Matthaeus, W. H.

The magnetic field line random walk (FLRW) plays an important role in the transport of energy and particles in turbulent plasmas. For magnetic fluctuations that are transverse or almost transverse to a large-scale mean magnetic field, theories describing the FLRW usually predict asymptotic diffusion of magnetic field lines perpendicular to the mean field. Such theories often depend on the assumption that one can relate the Lagrangian and Eulerian statistics of the magnetic field via Corrsin’s hypothesis, and additionally take the distribution of magnetic field line displacements to be Gaussian. Here we take an ordinary differential equation (ODE) model with these underlying assumptions and test how well it describes the evolution of the magnetic field line diffusion coefficient in 2D+slab magnetic turbulence, by comparisons to computer simulations that do not involve such assumptions. In addition, we directly test the accuracy of the Corrsin approximation to the Lagrangian correlation. Over much of the studied parameter space we find that the ODE model is in fairly good agreement with computer simulations, in terms of both the evolution and asymptotic values of the diffusion coefficient. When there is poor agreement, we show that this can be largely attributed to the failure of Corrsin’s hypothesis rather than the assumption of Gaussian statistics of field line displacements. The degree of non-Gaussianity, which we measure in terms of the kurtosis, appears to be an indicator of how well Corrsin’s approximation works.

  4. Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.

  5. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more pieces of surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper outlines some new tools for assessing ground hazard risk in useful ways. This study also makes use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the manner the models assume. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints.
The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
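Under the simple Kepler-orbit assumption discussed above, the sub-satellite latitude of a circular orbit is φ = arcsin(sin i · sin u), with the argument of latitude u effectively uniform over the orbit, which concentrates reentry probability toward the inclination latitude. A hedged illustrative sketch (not NASA's actual tooling):

```python
import math
import random

def sample_reentry_latitudes(inclination_deg, n, seed=42):
    """Sample sub-satellite latitudes (deg) for a circular orbit of given inclination.

    Assumes a simple Kepler orbit with uniformly distributed argument of
    latitude u, so latitude = asin(sin(i) * sin(u)); |latitude| never
    exceeds the inclination.
    """
    rng = random.Random(seed)
    inc = math.radians(inclination_deg)
    return [math.degrees(math.asin(math.sin(inc) * math.sin(rng.uniform(0.0, 2.0 * math.pi))))
            for _ in range(n)]
```

Histogramming these samples against the measured DoD reentry latitudes is the kind of empirical test of the model assumptions the paper describes.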

  6. A unified model for transfer alignment at random misalignment angles based on second-order EKF

    NASA Astrophysics Data System (ADS)

    Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo

    2017-04-01

In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on the large misalignment angle suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS), by decomposing the attitude matrix of the latter. Based on the Rodriguez parameters, a unified alignment model in the inertial frame is established, with a linear state-space equation and a second-order nonlinear measurement equation, without making any assumptions about the misalignment angles. Furthermore, we employ Taylor series expansions on the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be achieved within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles.
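As a hedged illustration of the Rodriguez-parameter (Gibbs vector) attitude representation mentioned above: the rotation matrix follows from a Cayley transform of the associated skew-symmetric matrix. This is a sketch of the representation only, not the paper's filter:

```python
import numpy as np

def skew(g):
    """Skew-symmetric cross-product matrix [g]x."""
    return np.array([[0.0, -g[2], g[1]],
                     [g[2], 0.0, -g[0]],
                     [-g[1], g[0], 0.0]])

def rotation_from_gibbs(g):
    """Rotation matrix via the Cayley transform R = (I + [g]x)(I - [g]x)^-1.

    The Gibbs vector is g = axis * tan(theta/2), so no small-angle
    assumption is needed; the map is exact for rotations below 180 deg.
    """
    S = skew(np.asarray(g, dtype=float))
    I = np.eye(3)
    return (I + S) @ np.linalg.inv(I - S)
```

Because the map is rational in g (no trigonometric linearization), it supports the arbitrary-misalignment formulation described in the abstract.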

  7. Opening new institutional spaces for grappling with uncertainty: A constructivist perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duncan, Ronlyn, E-mail: Ronlyn.Duncan@lincoln.ac.nz

In the context of an increasing reliance on predictive computer simulation models to calculate potential project impacts, it has become common practice in impact assessment (IA) to call on proponents to disclose uncertainties in assumptions and conclusions assembled in support of a development project. Understandably, it is assumed that such disclosures lead to greater scrutiny and better policy decisions. This paper questions this assumption. Drawing on constructivist theories of knowledge and an analysis of the role of narratives in managing uncertainty, I argue that the disclosure of uncertainty can obscure as much as it reveals about the impacts of a development project. It is proposed that the opening up of institutional spaces that can facilitate the negotiation and deliberation of foundational assumptions and parameters that feed into predictive models could engender greater legitimacy and credibility for IA outcomes. - Highlights: A reliance on supposedly objective disclosure is unreliable in the predictive-model context in which IA is now embedded; a reliance on disclosure runs the risk of reductionism and leaves unexamined the social-interactive aspects of uncertainty; opening new institutional spaces could facilitate deliberation on foundational predictive-model assumptions.

  8. Numerical modeling of divergent detonation wave

    NASA Astrophysics Data System (ADS)

    Li, Zhiwei; Liu, Bangdi

    1987-11-01

The indefinite nature of divergent detonations under the assumption of instantaneous stable detonation is described. In the numerical modeling method for divergent detonation, the artificial cohesiveness was improved, and the Cochran reaction rate and the JWL equations of state were used to describe the ignition process of the explosive. Several typical divergent detonation problems were computed, with satisfactory results.

  9. Network model and short circuit program for the Kennedy Space Center electric power distribution system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Assumptions made and techniques used in modeling the power network to the 480 volt level are discussed. Basic computational techniques used in the short circuit program are described along with a flow diagram of the program and operational procedures. Procedures for incorporating network changes are included in this user's manual.

  10. Implementing vertex dynamics models of cell populations in biology within a consistent computational framework.

    PubMed

    Fletcher, Alexander G; Osborne, James M; Maini, Philip K; Gavaghan, David J

    2013-11-01

    The dynamic behaviour of epithelial cell sheets plays a central role during development, growth, disease and wound healing. These processes occur as a result of cell adhesion, migration, division, differentiation and death, and involve multiple processes acting at the cellular and molecular level. Computational models offer a useful means by which to investigate and test hypotheses about these processes, and have played a key role in the study of cell-cell interactions. However, the necessarily complex nature of such models means that it is difficult to make accurate comparison between different models, since it is often impossible to distinguish between differences in behaviour that are due to the underlying model assumptions, and those due to differences in the in silico implementation of the model. In this work, an approach is described for the implementation of vertex dynamics models, a discrete approach that represents each cell by a polygon (or polyhedron) whose vertices may move in response to forces. The implementation is undertaken in a consistent manner within a single open source computational framework, Chaste, which comprises fully tested, industrial-grade software that has been developed using an agile approach. This framework allows one to easily change assumptions regarding force generation and cell rearrangement processes within these models. The versatility and generality of this framework is illustrated using a number of biological examples. In each case we provide full details of all technical aspects of our model implementations, and in some cases provide extensions to make the models more generally applicable. Copyright © 2013 Elsevier Ltd. All rights reserved.
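In vertex dynamics models of the kind implemented in Chaste, each cell's polygon enters the force calculation through energy terms such as area elasticity. A minimal illustrative sketch of those ingredients (not Chaste's actual API):

```python
def polygon_area(vertices):
    """Signed area of a polygon via the shoelace formula (counterclockwise positive)."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return 0.5 * area

def area_elasticity_energy(vertices, target_area, stiffness=1.0):
    """Quadratic penalty (K/2)(A - A0)^2 driving a cell toward its target area A0."""
    return 0.5 * stiffness * (polygon_area(vertices) - target_area) ** 2

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polygon_area(unit_square))  # 1.0
```

Vertex forces are then the negative gradients of such energies with respect to vertex positions, which is the "force generation" assumption the framework lets one swap out.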

  11. Macroeconomic Activity Module - NEMS Documentation

    EIA Publications

    2016-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module assumptions, computations, methodology, parameter estimation techniques, and mainframe source code.

  12. Principles and Foundations for Fractionated Networked Cyber-Physical Systems

    DTIC Science & Technology

    2012-07-13

    spectrum between autonomy and cooperation. Our distributed computing model is based on distributed knowledge sharing, and makes very few assumptions but...over the computation without the need for explicit migration. Randomization techniques will make sure that enough diversity is maintained to allow...small UAV testbed consisting of 10 inexpensive quadcopters at SRI. Hardware-wise, we added heat sinks to mitigate the impact of additional heat that

  13. Allele Age Under Non-Classical Assumptions is Clarified by an Exact Computational Markov Chain Approach.

    PubMed

    De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason

    2017-09-19

    Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to Ne = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
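The core computation described here -- exact expected absorption times for a finite absorbing Markov chain via sparse linear algebra -- can be sketched in a few lines. The toy neutral Wright-Fisher chain with N = 20 and no mutation is an illustrative assumption, far smaller than the Ne = 100,000 populations the paper handles:

```python
import numpy as np
from scipy.sparse import identity, csr_matrix
from scipy.sparse.linalg import spsolve
from scipy.stats import binom

# Toy neutral Wright-Fisher chain: N = 20 haploids, no mutation.
# States 1..N-1 (allele copy counts) are transient; 0 and N absorb.
N = 20
freqs = np.arange(1, N) / N
# Transition matrix among transient states (binomial resampling).
Q = np.array([[binom.pmf(j, N, p) for j in range(1, N)] for p in freqs])

# Expected steps to absorption from each state: solve (I - Q) t = 1.
t = spsolve(identity(N - 1, format="csr") - csr_matrix(Q), np.ones(N - 1))
print(t[0])  # expected sojourn time starting from a single copy
```

The paper's quantities (the age of an allele observed at a given frequency) additionally condition on the current state, but they reduce to similar linear solves against the same (I - Q) system, which is why sparse solvers make the computation fast.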

  14. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
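The MCMC machinery described above can be illustrated with a deliberately minimal random-walk Metropolis sampler; DREAM, as used in UCODE_2014, is an adaptive multi-chain refinement of this basic scheme. The standard-normal toy posterior and the tuning constants are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta):
    # Toy posterior: standard normal (stands in for a model's posterior).
    return -0.5 * theta**2

# Random-walk Metropolis: propose, then accept with probability
# min(1, posterior ratio); the chain samples the posterior.
theta, chain = 0.0, []
for _ in range(50_000):
    prop = theta + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

samples = np.array(chain[10_000:])           # discard burn-in
lo, hi = np.quantile(samples, [0.025, 0.975])
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")  # near (-1.96, 1.96)
```

A Bayesian credible interval is then just a pair of quantiles of the retained samples, which is what makes the approach nearly assumption-free compared with linear or nonlinear confidence intervals.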

  15. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Its standard assumption is ‘stationarity’, and therefore, several research efforts have been recently proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces searching space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve the reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates more stably high prediction accuracy and significantly improved computation efficiency, even with no prior knowledge and parameter settings.

  16. Gene network reconstruction from transcriptional dynamics under kinetic model uncertainty: a case for the second derivative

    PubMed Central

    Bickel, David R.; Montazeri, Zahra; Hsieh, Pei-Chun; Beatty, Mary; Lawit, Shai J.; Bate, Nicholas J.

    2009-01-01

    Motivation: Measurements of gene expression over time enable the reconstruction of transcriptional networks. However, Bayesian networks and many other current reconstruction methods rely on assumptions that conflict with the differential equations that describe transcriptional kinetics. Practical approximations of kinetic models would enable inferring causal relationships between genes from expression data of microarray, tag-based and conventional platforms, but conclusions are sensitive to the assumptions made. Results: The representation of a sufficiently large portion of genome enables computation of an upper bound on how much confidence one may place in influences between genes on the basis of expression data. Information about which genes encode transcription factors is not necessary but may be incorporated if available. The methodology is generalized to cover cases in which expression measurements are missing for many of the genes that might control the transcription of the genes of interest. The assumption that the gene expression level is roughly proportional to the rate of translation led to better empirical performance than did either the assumption that the gene expression level is roughly proportional to the protein level or the Bayesian model average of both assumptions. Availability: http://www.oisb.ca points to R code implementing the methods (R Development Core Team 2004). Contact: dbickel@uottawa.ca Supplementary information: http://www.davidbickel.com PMID:19218351

  17. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonen, E.P.; Johnson, K.I.; Simonen, F.A.

    The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position and flaw inspection were sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation and added error in estimating irradiated properties caused by the trend curve correlation error.

  19. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    NASA Astrophysics Data System (ADS)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  20. Direct coal liquefaction baseline design and system analysis. Quarterly report, January--March 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-04-01

    The primary objective of the study is to develop a computer model for a base line direct coal liquefaction design based on two stage direct coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a base line design based on previous DOE/PETC results from Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC Staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.

  2. Modeling of power transmission and stress grading for corona protection

    NASA Astrophysics Data System (ADS)

    Zohdi, T. I.; Abali, B. E.

    2017-11-01

    Electrical high voltage (HV) machines are prone to corona discharges, which lead to power losses as well as damage to the insulating layer. Many different techniques are applied for corona protection, and computational methods aid in selecting the best design. In this paper we develop a reduced-order model in 1D estimating the electric field and temperature distribution of a conductor wrapped with different layers, as is usual for HV machines. Many assumptions and simplifications are undertaken for this 1D model; therefore, we compare its results quantitatively to a direct numerical simulation in 3D. Both models are transient and nonlinear, giving the possibility to estimate quickly in 1D or to compute fully in 3D at a higher computational cost. Such tools enable understanding, evaluation, and optimization of corona shielding systems for multilayered coils.

  3. Effects of fish movement assumptions on the design of a marine protected area to protect an overfished stock.

    PubMed

    Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D

    2017-01-01

    Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. Individual Based Model). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. 
Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases from ~10% for the diffusive assumption to ~30% when full environment forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA design and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protect pelagic fish and provide significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests as an effective alternative to managing highly mobile pelagic stocks.
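Under the simplest of the movement scenarios (pure diffusion), the fraction of time a fish spends protected can be estimated with a few lines. The 1-D periodic coastline, step size, and population size below are illustrative assumptions, not the paper's interconnected individual-based, population, and fleet models:

```python
import numpy as np

rng = np.random.default_rng(0)

def protected_fraction(mpa_fraction, steps=20_000, n_fish=200, sigma=0.01):
    """Fraction of fish-steps spent inside an MPA of the given size,
    under the simplest (diffusive) movement assumption on a periodic
    1-D coastline of unit length. Toy illustration only."""
    x = rng.uniform(size=n_fish)
    inside = 0
    for _ in range(steps):
        x = (x + rng.normal(scale=sigma, size=n_fish)) % 1.0  # random walk
        inside += np.count_nonzero(x < mpa_fraction)
    return inside / (steps * n_fish)

for f in (0.1, 0.3):
    print(f"MPA covering {f:.0%} -> time protected ~ {protected_fraction(f):.2f}")
```

Because diffusive movement equilibrates to a uniform spatial distribution, protection simply tracks MPA size; this is precisely why richer movement assumptions (aggregation, environmental forcing, advection by currents) change the optimal design and predicted outcomes.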

  4. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  5. Numerical simulations of SHPB experiments for the dynamic compressive strength and failure of ceramics

    NASA Astrophysics Data System (ADS)

    Anderson, Charles E., Jr.; O'Donoghue, Padraic E.; Lankford, James; Walker, James D.

    1992-06-01

    Complementary to a study of the compressive strength of ceramic as a function of strain rate and confinement, numerical simulations of the split-Hopkinson pressure bar (SHPB) experiments have been performed using the two-dimensional wave propagation computer program HEMP. The numerical effort had two main thrusts. Firstly, the interpretation of the experimental data relies on several assumptions. The numerical simulations were used to investigate the validity of these assumptions. The second part of the effort focused on computing the idealized constitutive response of a ceramic within the SHPB experiment. These numerical results were then compared against experimental data. Idealized models examined included a perfectly elastic material, an elastic-perfectly plastic material, and an elastic material with failure. Post-failure material was modeled as having either no strength, or a strength proportional to the mean stress. The effects of confinement were also studied. Conclusions concerning the dynamic behavior of a ceramic up to and after failure are drawn from the numerical study.

  6. Changes in corticostriatal connectivity during reinforcement learning in humans.

    PubMed

    Horga, Guillermo; Maia, Tiago V; Marsh, Rachel; Hao, Xuejun; Xu, Dongrong; Duan, Yunsuo; Tau, Gregory Z; Graniello, Barbara; Wang, Zhishun; Kangarlu, Alayar; Martinez, Diana; Packard, Mark G; Peterson, Bradley S

    2015-02-01

    Many computational models assume that reinforcement learning relies on changes in synaptic efficacy between cortical regions representing stimuli and striatal regions involved in response selection, but this assumption has thus far lacked empirical support in humans. We recorded hemodynamic signals with fMRI while participants navigated a virtual maze to find hidden rewards. We fitted a reinforcement-learning algorithm to participants' choice behavior and evaluated the neural activity and the changes in functional connectivity related to trial-by-trial learning variables. Activity in the posterior putamen during choice periods increased progressively during learning. Furthermore, the functional connections between the sensorimotor cortex and the posterior putamen strengthened progressively as participants learned the task. These changes in corticostriatal connectivity differentiated participants who learned the task from those who did not. These findings provide a direct link between changes in corticostriatal connectivity and learning, thereby supporting a central assumption common to several computational models of reinforcement learning. © 2014 Wiley Periodicals, Inc.
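The kind of reinforcement-learning algorithm fitted to choice behavior in such studies can be sketched as a minimal Q-learner with a softmax choice rule. The two-armed bandit task, reward probabilities, and learning-rate/temperature values are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal Q-learning with softmax action selection: trial-by-trial
# value updates driven by reward prediction errors.
n_arms, alpha, beta = 2, 0.3, 5.0   # arms, learning rate, inverse temperature
reward_probs = [0.8, 0.2]           # hypothetical arm payoff probabilities
Q = np.zeros(n_arms)
for trial in range(2_000):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax choice rule
    a = rng.choice(n_arms, p=p)
    r = float(rng.uniform() < reward_probs[a])     # Bernoulli reward
    Q[a] += alpha * (r - Q[a])                     # prediction-error update
print(Q)  # the richer arm's learned value should end up higher
```

Fitting such a model to a participant means choosing alpha and beta to maximize the likelihood of the observed choices; the resulting trial-by-trial prediction errors and values then serve as regressors for the fMRI analysis.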

  7. Response Matrix Monte Carlo for electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.

    1990-11-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, like condensed history, except it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.

  8. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    NASA Astrophysics Data System (ADS)

    Kaita, R.; Ignat, D. W.; Jardin, S. C.; Okabayashi, M.; Sun, Y. C.

    1996-02-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments.

  9. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
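The basic penetration-rate quantity described here admits a compact sketch: if hypothesis bins are visited in order of decreasing matching probability, the expected fraction of the space searched is the probability-weighted mean visit rank. The uniform and geometric bin distributions below are illustrative assumptions, not the paper's derived models:

```python
import numpy as np

def penetration_rate(match_probs):
    """Expected fraction of the hypothesis space searched before the match
    is found, when bins are visited in order of decreasing matching
    probability (toy version of a prioritized hierarchical grid search)."""
    p = np.sort(np.asarray(match_probs, dtype=float))[::-1]
    p = p / p.sum()                   # normalize to a distribution
    ranks = np.arange(1, p.size + 1)  # bin visited at rank i
    return float((ranks * p).sum() / p.size)

uniform = penetration_rate(np.ones(100))               # no prior information
peaked = penetration_rate(np.geomspace(1, 1e-4, 100))  # informative prior
print(uniform, peaked)  # 0.505 with a flat prior vs roughly 0.11
```

The sharper the ranking distribution, the smaller the penetration rate, which is exactly how such estimates translate into computational requirements for model-based alignment and matching.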

  10. Motion and Stability of Saturated Soil Systems under Dynamic Loading.

    DTIC Science & Technology

    1985-04-04

    7.3 Experimental Verification of Theories...ADDITIONAL COMMENTS AND OTHER WORK AT THE OHIO...theoretical/computational models. The continuing research effort will extend and refine the theoretical models, allow for compressibility of soil as...motion of soil and water and, therefore, a correct theory of liquefaction should not include this assumption. Finite element methodologies have been

  11. Microphysical response of cloud droplets in a fluctuating updraft. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Harding, D. D.

    1977-01-01

    The effect of a fluctuating updraft upon a distribution of cloud droplets is examined. Computations are performed for fourteen vertical velocity patterns; each allows a closed parcel of cloud air to undergo downward as well as upward motion. Droplet solution and curvature effects are included. The classical equations for the growth rate of an individual droplet by vapor condensation rely on simplifying assumptions. Those assumptions are isolated and examined. A unique approach is presented in which all energy sources and sinks of a droplet may be considered; it is termed the explicit model. It is speculated that the explicit model may enhance the growth of large droplets at greater heights. Such a model is beneficial to studies of pollution scavenging and acid rain.

  12. Damping mathematical modelling and dynamic responses for FRP laminated composite plates with polymer matrix

    NASA Astrophysics Data System (ADS)

    Liu, Qimao

    2018-02-01

    This paper proposes the assumption that the fibre is an elastic material and the polymer matrix a viscoelastic material, so that energy dissipation during the dynamic response depends only on the polymer matrix. The damping force vectors in frequency and time domains, of FRP (Fibre-Reinforced Polymer matrix) laminated composite plates, are derived based on this assumption. The governing equations of FRP laminated composite plates are formulated in both frequency and time domains. The direct inversion method and the direct time integration method for nonviscously damped systems are employed to solve the governing equations and obtain the dynamic responses in frequency and time domains, respectively. The computational procedure is given in detail. Finally, dynamic responses (frequency responses with nonzero and zero initial conditions, free vibration, forced vibrations with nonzero and zero initial conditions) of a FRP laminated composite plate are computed using the proposed methodology. The proposed methodology is easy to insert into commercial finite element analysis software. The proposed assumption, based on the theory of material mechanics, needs to be further validated by experimental techniques in the future.

  13. On the application of the Germano identity to subgrid-scale modeling

    NASA Technical Reports Server (NTRS)

    Ronchi, C.; Ypma, M.; Canuto, V. M.

    1992-01-01

    An identity proposed by Germano (1992) has been widely applied to several turbulent flows to dynamically compute rather than adjust the Smagorinsky coefficient. The assumptions under which the method has been used are discussed, and some conceptual difficulties in its current implementation are examined.

  14. CZAEM USER'S GUIDE: MODELING CAPTURE ZONES OF GROUND-WATER WELLS USING ANALYTIC ELEMENTS

    EPA Science Inventory

    The computer program CZAEM is designed for elementary capture zone analysis, and is based on the analytic element method. CZAEM is applicable to confined and/or unconfined flow in shallow aquifers; the Dupuit-Forchheimer assumption is adopted. CZAEM supports the following analyt...

  15. Identifying fMRI Model Violations with Lagrange Multiplier Tests

    PubMed Central

    Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor

    2013-01-01

    The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665

  16. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE PAGES

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; ...

    2017-01-20

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.
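The gene-loss versus function-loss distinction can be made concrete with a toy flux balance calculation; the four-reaction network, flux bounds, and use of scipy's LP solver are illustrative assumptions of this sketch, not the authors' genome-scale model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake U (-> A), isoenzymes E1, E2 (A -> B), growth G (B ->).
# Columns: [U, E1, E2, G]; rows: metabolites A, B (steady state S v = 0).
S = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  1.0, -1.0]])
c = np.array([0.0, 0.0, 0.0, -1.0])  # linprog minimizes, so maximize G

def max_growth(blocked=()):
    """Optimal growth flux with the given reaction indices forced to zero."""
    bounds = [(0.0, 0.0) if i in blocked else (0.0, 10.0) for i in range(4)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

wt = max_growth()
gene_loss = wt - max_growth(blocked={1})         # lose E1: E2 backs it up
function_loss = wt - max_growth(blocked={1, 2})  # isoenzymes non-redundant
print(wt, gene_loss, function_loss)              # 10.0 0.0 10.0
```

Under the standard assumption, losing one isoenzyme costs nothing because its twin carries the flux; treating the isoenzymes as non-redundant (blocking both) charges the full functional impairment to the loss, which is the change the function-loss cost formalizes.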

  17. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.
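
    The contrast between the two isoenzyme assumptions can be illustrated with a toy flux balance calculation. The network, reaction names, and bounds below are invented for illustration and are not from the study; the sketch assumes SciPy's `linprog` for the linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: an uptake reaction (v_in), two isoenzyme-catalyzed copies of
# the same reaction A -> B (v_iso1, v_iso2), and a biomass-producing step
# (v_bio). Metabolite balance rows: A, B. All names are hypothetical.
S = np.array([[1.0, -1.0, -1.0,  0.0],   # A: made by uptake, used by isoenzymes
              [0.0,  1.0,  1.0, -1.0]])  # B: made by isoenzymes, used by biomass

def max_growth(upper_bounds):
    """Maximize biomass flux subject to steady state S v = 0."""
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0],
                  bounds=[(0, u) for u in upper_bounds])
    return -res.fun

wild_type = max_growth([10, 10, 10, 10])

# Gene-loss cost: deleting the gene for isoenzyme 1 leaves isoenzyme 2 as an
# unlimited backup, so predicted growth is unaffected.
backup = max_growth([10, 0, 10, 10])

# Non-redundant view: the loss impairs the whole function carried by the pair.
non_redundant = max_growth([10, 0, 0, 10])

gene_loss_cost = (wild_type - backup) / wild_type          # 0.0
function_loss_cost = (wild_type - non_redundant) / wild_type  # 1.0
```

    Under the backup assumption the deletion looks free; under non-redundancy it carries the full cost, which is the change of assumption the abstract describes.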

  18. Memory interface simulator: A computer design aid

    NASA Technical Reports Server (NTRS)

    Taylor, D. S.; Williams, T.; Weatherbee, J. E.

    1972-01-01

    Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPUs and the interface between the CPUs and RAM. Design tradeoffs are presented in the following areas: bus widths, CPU microprogram read-only memory cycle time, multiple instruction fetch, and instruction mix.

  19. Population genetics inference for longitudinally-sampled mutants under strong selection.

    PubMed

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
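
    The discrete Wright-Fisher dynamics referred to above are straightforward to simulate directly. The sketch below is illustrative only (all parameter values are invented, and it is not the authors' inference method); it shows how strong selection drives a mutant rapidly toward fixation, the regime in which the weak-selection diffusion approximation breaks down.

```python
import numpy as np

def wright_fisher(N, s, p0, generations, rng):
    """Discrete Wright-Fisher trajectory of a mutant with selection coefficient s.

    Each generation: deterministic selection shifts the frequency, then
    binomial sampling of N offspring adds genetic drift.
    """
    p, traj = p0, [p0]
    for _ in range(generations):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))  # post-selection frequency
        p = rng.binomial(N, w) / N                  # binomial drift
        traj.append(p)
    return np.array(traj)

rng = np.random.default_rng(0)
# Strong selection (s = 0.5): a rare mutant sweeps within tens of generations.
traj = wright_fisher(N=10_000, s=0.5, p0=0.01, generations=50, rng=rng)
```

    With selection this strong, the deterministic logistic growth dominates drift and the trajectory reaches fixation well before the final generation, which is precisely where a weak-selection diffusion would misbehave.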

  20. Model-based analyses: Promises, pitfalls, and example applications to the study of cognitive control

    PubMed Central

    Mars, Rogier B.; Shea, Nicholas J.; Kolling, Nils; Rushworth, Matthew F. S.

    2011-01-01

    We discuss a recent approach to investigating cognitive control, which has the potential to deal with some of the challenges inherent in this endeavour. In a model-based approach, the researcher defines a formal, computational model that performs the task at hand and whose performance matches that of a research participant. The internal variables in such a model might then be taken as proxies for latent variables computed in the brain. We discuss the potential advantages of such an approach for the study of the neural underpinnings of cognitive control and its pitfalls, and we make explicit the assumptions underlying the interpretation of data obtained using this approach. PMID:20437297

  1. Simulation of charge exchange plasma propagation near an ion thruster propelled spacecraft

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Kaufman, H. R.; Winder, D. R.

    1981-01-01

    A model describing the charge exchange plasma and its propagation is discussed, along with a computer code based on the model. The geometry of an idealized spacecraft having an ion thruster is outlined, with attention given to the assumptions used in modeling the ion beam. Also presented is the distribution function describing charge exchange production. The barometric equation is used in relating the variation in plasma potential to the variation in plasma density. The numerical methods and approximations employed in the calculations are discussed, and comparisons are made between the computer simulation and experimental data. An analytical solution of a simple configuration is also used in verifying the model.
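
    The barometric equation mentioned in the abstract, in its standard isothermal-electron form n = n0 exp((V - V0)/Te) with Te expressed in volts, can be evaluated directly. The numerical values below are illustrative, not values from the study.

```python
import math

def barometric_density(n0, V, V0, Te_eV):
    """Plasma density from the local potential via the barometric equation.

    n = n0 * exp((V - V0) / Te), with potentials in volts and the electron
    temperature Te expressed in electron-volts.
    """
    return n0 * math.exp((V - V0) / Te_eV)

# Illustrative values: a 3 V potential drop at Te = 1.5 eV reduces the
# density by a factor of exp(-2).
n = barometric_density(n0=1e14, V=-3.0, V0=0.0, Te_eV=1.5)
```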

  2. Fluid-Structure Interaction Modeling of Intracranial Aneurysm Hemodynamics: Effects of Different Assumptions

    NASA Astrophysics Data System (ADS)

    Rajabzadeh Oghaz, Hamidreza; Damiano, Robert; Meng, Hui

    2015-11-01

    Intracranial aneurysms (IAs) are pathological outpouchings of cerebral vessels, the progression of which are mediated by complex interactions between the blood flow and vasculature. Image-based computational fluid dynamics (CFD) has been used for decades to investigate IA hemodynamics. However, the commonly adopted simplifying assumptions in CFD (e.g. rigid wall) compromise the simulation accuracy and mask the complex physics involved in IA progression and eventual rupture. Several groups have considered the wall compliance by using fluid-structure interaction (FSI) modeling. However, FSI simulation is highly sensitive to numerical assumptions (e.g. linear-elastic wall material, Newtonian fluid, initial vessel configuration, and constant pressure outlet), the effects of which are poorly understood. In this study, the sensitivity of FSI simulations in patient-specific IAs is comprehensively investigated using a multi-stage approach with a varying level of complexity. We start with simulations incorporating several common simplifications: rigid wall, Newtonian fluid, and constant pressure at the outlets, and then we stepwise remove these simplifications until we reach the most comprehensive FSI simulation. Hemodynamic parameters such as wall shear stress and oscillatory shear index are assessed and compared at each stage to better understand the sensitivity of FSI simulations of IAs to model assumptions. Supported by the National Institutes of Health (1R01 NS 091075-01).

  3. Assessing Omitted Confounder Bias in Multilevel Mediation Models.

    PubMed

    Tofighi, Davood; Kelley, Ken

    2016-01-01

    To draw valid inference about an indirect effect in a mediation model, there must be no omitted confounders. No omitted confounders means that there are no common causes of hypothesized causal relationships. When the no-omitted-confounder assumption is violated, inference about indirect effects can be severely biased and the results potentially misleading. Despite the increasing attention to address confounder bias in single-level mediation, this topic has received little attention in the growing area of multilevel mediation analysis. A formidable challenge is that the no-omitted-confounder assumption is untestable. To address this challenge, we first analytically examined the biasing effects of potential violations of this critical assumption in a two-level mediation model with random intercepts and slopes, in which all the variables are measured at Level 1. Our analytic results show that omitting a Level 1 confounder can yield misleading results about key quantities of interest, such as Level 1 and Level 2 indirect effects. Second, we proposed a sensitivity analysis technique to assess the extent to which potential violation of the no-omitted-confounder assumption might invalidate or alter the conclusions about the indirect effects observed. We illustrated the methods using an empirical study and provided computer code so that researchers can implement the methods discussed.
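
    The biasing effect of an omitted confounder on an indirect effect can be seen even in a simple single-level simulation. The sketch below is a simplified illustration with invented coefficients; it is not the authors' two-level model or their sensitivity-analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.standard_normal(n)                      # treatment
U = rng.standard_normal(n)                      # confounder of the M -> Y path
M = 0.5 * X + 0.6 * U + rng.standard_normal(n)  # mediator
Y = 0.4 * M + 0.6 * U + rng.standard_normal(n)  # outcome; true indirect = 0.5 * 0.4

def ols(y, *cols):
    """Least-squares coefficients for y on the given columns plus an intercept."""
    Z = np.column_stack(cols + (np.ones(len(y)),))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

a_hat = ols(M, X)[0]
b_omitting_U = ols(Y, M, X)[0]    # U omitted: path b absorbs the confounding
b_adjusting_U = ols(Y, M, X, U)[0]

indirect_naive = a_hat * b_omitting_U      # inflated relative to the true 0.2
indirect_adjusted = a_hat * b_adjusting_U  # close to the true 0.2
```

    Omitting U inflates the estimated indirect effect even though the treatment itself is randomized, which mirrors the kind of misleading conclusion the abstract warns about.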

  4. A family of dynamic models for large-eddy simulation

    NASA Technical Reports Server (NTRS)

    Carati, D.; Jansen, K.; Lund, T.

    1995-01-01

    Since its first application, the dynamic procedure has been recognized as an effective means to compute rather than prescribe the unknown coefficients that appear in a subgrid-scale model for Large-Eddy Simulation (LES). The dynamic procedure is usually used to determine the nondimensional coefficient in the Smagorinsky (1963) model. In reality the procedure is quite general and it is not limited to the Smagorinsky model by any theoretical or practical constraints. The purpose of this note is to consider a generalized family of dynamic eddy viscosity models that do not necessarily rely on the local equilibrium assumption built into the Smagorinsky model. By invoking an inertial range assumption, it will be shown that the coefficients in the new models need not be nondimensional. This additional degree of freedom allows the use of models that are scaled on traditionally unknown quantities such as the dissipation rate. In certain cases, the dynamic models with dimensional coefficients are simpler to implement, and allow for a 30% reduction in the number of required filtering operations.
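
    For reference, the note's starting point, the standard Smagorinsky closure and the usual least-squares form of the dynamic coefficient (the Germano identity with Lilly's contraction), can be written as follows. This is textbook material, not an equation quoted from the paper:

```latex
% Smagorinsky eddy viscosity with filter width \Delta and resolved strain rate \bar{S}_{ij}
\nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}

% Germano identity relating grid- and test-filter level stresses
L_{ij} = T_{ij} - \widehat{\tau}_{ij}

% Lilly's least-squares solution for the dynamic coefficient
C = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
```

    The generalized family discussed in the note replaces the nondimensional C above with coefficients scaled on quantities such as the dissipation rate.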

  5. Effects of subglottal and supraglottal acoustic loading on voice production

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoyan; Mongeau, Luc; Frankel, Steven

    2002-05-01

    Speech production involves sound generation by confined jets through an orifice (the glottis) with a time-varying area. Predictive models are usually based on the quasi-steady assumption. This assumption allows the complex unsteady flows to be treated as steady flows, which are more effectively modeled computationally. Because of the reflective properties of the human lungs, trachea and vocal tract, subglottal and supraglottal resonance and other acoustic effects occur in speech, which might affect glottal impedance, especially in the regime of unsteady flow separation. Changes in the flow structure, or flow regurgitation due to a transient negative transglottal pressure, could also occur. These phenomena may affect the quasi-steady behavior of speech production. To investigate the possible effects of the subglottal and supraglottal acoustic loadings, a dynamic mechanical model of the larynx was designed and built. The subglottal and supraglottal acoustic loadings are simulated using an expansion in the tube upstream of the glottis and a finite length tube downstream, respectively. The acoustic pressures of waves radiated upstream and downstream of the orifice were measured and compared to those predicted using a model based on the quasi-steady assumption. A good agreement between the experimental data and the predictions was obtained for different operating frequencies, flow rates, and orifice shapes. This supports the validity of the quasi-steady assumption for various subglottal and supraglottal acoustic loadings.

  6. Logistic regression of family data from retrospective study designs.

    PubMed

    Whittemore, Alice S; Halpern, Jerry

    2003-11-01

    We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to estimate consistently the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate for beta(RE), together with a consistent estimator for its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), again with a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology.
We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.

  7. A study of reacting free and ducted hydrogen/air jets

    NASA Technical Reports Server (NTRS)

    Beach, H. L., Jr.

    1975-01-01

    The mixing and reaction of a supersonic jet of hydrogen in coaxial free and ducted high temperature test gases were investigated. The importance of chemical kinetics on computed results, and the utilization of free-jet theoretical approaches to compute enclosed flow fields were studied. Measured pitot pressure profiles were correlated by use of a parabolic mixing analysis employing an eddy viscosity model. All computations, including free, ducted, reacting, and nonreacting cases, use the same value of the empirical constant in the viscosity model. Equilibrium and finite rate chemistry models were utilized. The finite rate assumption allowed prediction of observed ignition delay, but the equilibrium model gave the best correlations downstream from the ignition location. Ducted calculations were made with finite rate chemistry; correlations were, in general, as good as the free-jet results until problems with the boundary conditions were encountered.

  8. Comparing process-based breach models for earthen embankments subjected to internal erosion

    USDA-ARS?s Scientific Manuscript database

    Predicting the potential flooding from a dam site requires prediction of outflow resulting from breach. Conservative estimates from the assumption of instantaneous breach or from an upper envelope of historical cases are readily computed, but these estimates do not reflect the properties of a speci...

  9. Detecting Local Item Dependence in Polytomous Adaptive Data

    ERIC Educational Resources Information Center

    Mislevy, Jessica L.; Rupp, Andre A.; Harring, Jeffrey R.

    2012-01-01

    A rapidly expanding arena for item response theory (IRT) is in attitudinal and health-outcomes survey applications, often with polytomous items. In particular, there is interest in computer adaptive testing (CAT). Meeting model assumptions is necessary to realize the benefits of IRT in this setting, however. Although initial investigations of…

  10. A Wittgenstein Approach to the Learning of OO-Modeling

    ERIC Educational Resources Information Center

    Holmboe, Christian

    2004-01-01

    The paper uses Ludwig Wittgenstein's theories about the relationship between thought, language, and objects of the world to explore the assumption that OO-thinking resembles natural thinking. The paper imports from research in linguistic philosophy to computer science education research. I show how UML class diagrams (i.e., an artificial…

  11. IRT-Estimated Reliability for Tests Containing Mixed Item Formats

    ERIC Educational Resources Information Center

    Shu, Lianghua; Schwarz, Richard D.

    2014-01-01

    As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's a, Feldt-Raju, stratified a, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…

  12. Can computational goals inform theories of vision?

    PubMed

    Anderson, Barton L

    2015-04-01

    One of the most lasting contributions of Marr's posthumous book is his articulation of the different "levels of analysis" that are needed to understand vision. Although a variety of work has examined how these different levels are related, there is comparatively little examination of the assumptions on which his proposed levels rest, or the plausibility of the approach Marr articulated given those assumptions. Marr placed particular significance on computational level theory, which specifies the "goal" of a computation, its appropriateness for solving a particular problem, and the logic by which it can be carried out. The structure of computational level theory is inherently teleological: What the brain does is described in terms of its purpose. I argue that computational level theory, and the reverse-engineering approach it inspires, requires understanding the historical trajectory that gave rise to functional capacities that can be meaningfully attributed with some sense of purpose or goal, that is, a reconstruction of the fitness function on which natural selection acted in shaping our visual abilities. I argue that this reconstruction is required to distinguish abilities shaped by natural selection ("natural tasks") from evolutionary "by-products" (spandrels, co-optations, and exaptations), rather than merely demonstrating that computational goals can be embedded in a Bayesian model that renders a particular behavior or process rational. Copyright © 2015 Cognitive Science Society, Inc.

  13. A computationally fast, reduced model for simulating landslide dynamics and tsunamis generated by landslides in natural terrains

    NASA Astrophysics Data System (ADS)

    Mohammed, F.

    2016-12-01

    Landslide hazards such as fast-moving debris flows, slow-moving landslides, and other mass flows cause numerous fatalities, injuries, and damage. Landslide occurrences in fjords, bays, and lakes can additionally generate tsunamis with locally extremely high wave heights and runups. Two-dimensional depth-averaged models can successfully simulate the entire lifecycle of the three-dimensional landslide dynamics and tsunami propagation efficiently and accurately with the appropriate assumptions. Landslide rheology is defined using viscous fluids, visco-plastic fluids, and granular material to account for the possible landslide source materials. Saturated and unsaturated rheologies are further included to simulate debris flows, debris avalanches, mudflows, and rockslides respectively. The models are obtained by reducing the fully three-dimensional Navier-Stokes equations with the internal rheological definition of the landslide material, the water body, and appropriate scaling assumptions to obtain the depth-averaged two-dimensional models. The landslide and tsunami models are coupled to include the interaction between the landslide and the water body for tsunami generation. The reduced models are solved numerically with a fast semi-implicit finite-volume, shock-capturing based algorithm. The well-balanced, positivity preserving algorithm accurately accounts for the wet-dry interface transition for the landslide runout, the landslide-water body interface, and the tsunami wave flooding on land. The models are implemented as a suite based on General-Purpose computing on Graphics Processing Units (GPGPU), either coupled or run independently within the suite. The GPGPU implementation provides up to a 1000-fold speedup over a CPU-based serial computation. This enables simulation of multiple scenarios of hazard realizations, which provides a basis for a probabilistic hazard assessment. 
The models have been successfully validated against experiments, past studies, and field data for landslides and tsunamis.

  14. Some observations on computer lip-reading: moving from the dream to the reality

    NASA Astrophysics Data System (ADS)

    Bear, Helen L.; Owen, Gari; Harvey, Richard; Theobald, Barry-John

    2014-10-01

    In the quest for greater computer lip-reading performance there are a number of tacit assumptions which are either present in the datasets (high resolution for example) or in the methods (recognition of spoken visual units called "visemes" for example). Here we review these and other assumptions and show the surprising result that computer lip-reading is not heavily constrained by video resolution, pose, lighting and other practical factors. However, the working assumption that visemes, which are the visual equivalent of phonemes, are the best unit for recognition does need further examination. We conclude that visemes, which were defined over a century ago, are unlikely to be optimal for a modern computer lip-reading system.

  15. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
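
    The circular (lateration) variant of the weighted least squares idea can be sketched as follows. The linearization, anchor layout, and 1/d^2 weighting below are generic textbook choices for illustration, not the paper's exact formulation.

```python
import numpy as np

def wls_position(anchors, d, weights):
    """Weighted least squares position from anchor coordinates and range estimates.

    Linearizes the circle equations |anchor_i - p|^2 = d_i^2 by subtracting
    the first anchor's equation, then solves the weighted linear system.
    anchors: (n, 2) array; d: (n,) distances; weights: (n-1,) per equation.
    """
    x1, y1 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - x1 ** 2 - y1 ** 2
         + d[0] ** 2 - d[1:] ** 2)
    w = np.sqrt(np.asarray(weights, dtype=float))
    pos, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges
# Down-weighting longer (noisier) ranges, e.g. by 1/d^2, reflects the idea of
# weighting measurements by their expected accuracy.
pos = wls_position(anchors, d, weights=1.0 / d[1:] ** 2)
```

    With noise-free ranges any positive weights recover the true position; the weighting matters once the RSS-derived distances carry distance-dependent error.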

  16. Movement and collision of Lagrangian particles in hydro-turbine intakes: a case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Gomez, Pedro; Richmond, Marshall C.

    Studies of the stress/survival of migratory fish during downstream passage through operating hydro-turbines are normally conducted to determine the fish-friendliness of units. One field approach consisting of recording extreme hydraulics with autonomous sensors is largely sensitive to the conditions of sensor release and the initial trajectories at the turbine intake. This study applies a modelling strategy based on flow simulations using computational fluid dynamics and Lagrangian particle tracking to represent the travel of live fish and autonomous sensor devices through hydro-turbine intakes. For the flow field calculation, the simulations were conducted with both a time-averaging turbulence model and an eddy-resolving technique. For the particle tracking calculation, different modelling assumptions for turbulence forcing, mass formulation, buoyancy, and release condition were tested. The modelling assumptions are evaluated with respect to data sets collected using a laboratory physical model and an autonomous sensor device deployed at Ice Harbor Dam (Snake River, State of Washington, U.S.A.) at the same discharge and release point as in the present computer simulations. We found an acceptable agreement between the simulated results and observed data and discuss relevant features of Lagrangian particle movement that are critical in turbine design and in the experimental design of field studies.

  17. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092

  18. Unsteady flow model for circulation-control airfoils

    NASA Technical Reports Server (NTRS)

    Rao, B. M.

    1979-01-01

    An analysis and a numerical lifting surface method are developed for predicting the unsteady airloads on two-dimensional circulation control airfoils in incompressible flow. The analysis and the computer program are validated by correlating the computed unsteady airloads with test data and also with other theoretical solutions. Additionally, a mathematical model for predicting the bending-torsion flutter of a two-dimensional airfoil (a reference section of a wing or rotor blade) and a computer program using an iterative scheme are developed. The flutter program has a provision for using the CC airfoil airloads program or the Theodorsen hard flap solution to compute the unsteady lift and moment used in the flutter equations. The adopted mathematical model and the iterative scheme are used to perform a flutter analysis of a typical CC rotor blade reference section. The program seems to work well within the basic assumption of incompressible flow.

  19. Understanding Islamist political violence through computational social simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Jennifer H; Mackerrow, Edward P; Patelli, Paolo G

    Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.

  20. Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components

    NASA Technical Reports Server (NTRS)

    Tew, Roy C., Jr.; Ibrahim, Mounir B.

    2001-01-01

    A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.

  1. Surface temperature distribution of GTA weld pools on thin-plate 304 stainless steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharia, T.; David, S.A.; Vitek, J.M.

    1995-11-01

    A transient multidimensional computational model was utilized to study gas tungsten arc (GTA) welding of thin-plate 304 stainless steel (SS). The model eliminates several of the earlier restrictive assumptions, including temperature-independent thermal-physical properties. Consequently, all important thermal-physical properties were considered as temperature dependent throughout the range of temperatures experienced by the weld metal. The computational model was used to predict the surface temperature distribution of the GTA weld pools in 1.5-mm-thick AISI 304 SS. The welding parameters were chosen so as to correspond with an earlier experimental study that produced high-resolution surface temperature maps. One of the motivations of the present study was to verify the predictive capability of the computational model. Comparison of the numerical predictions and experimental observations indicates excellent agreement, thereby verifying the model.

  2. Effect of initial conditions and of intra-event rainfall intensity variability on shallow landslide triggering return period

    NASA Astrophysics Data System (ADS)

    Peres, David Johnny; Cancelliere, Antonino

    2016-04-01

    Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is used as a quantitative metric to map landslide-triggering hazard over a catchment. The most commonly applied approach to estimating this return period couples a physically based landslide-triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of this approach are the following assumptions: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose a Monte Carlo simulation approach to investigate the effects of these two assumptions. The approach couples a physically based hydrological and slope stability model with a stochastic rainfall time-series generator. With this methodology, a long series of synthetic rainfall data can be generated and given as input to a physically based landslide-triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation with the TRIGRS v.2 unsaturated model for computing the transient response to individual rainfall events. Initial conditions are computed by a water-table recession model that links the initial conditions of a given event to the final response of the preceding event, thus accounting for the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide-triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. A set of additional simulations is then performed to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that variability of both initial conditions and intra-event rainfall intensity significantly affects return period estimation. In particular, the common assumption of an initial water table at the base of the pervious strata may lead in practice to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide-triggering hazard.
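    The return-period definition used above, the mean inter-arrival time of factor-of-safety drops below one in a long synthetic series, can be sketched as follows. The function and the synthetic hourly series are illustrative, not taken from the study.

```python
import numpy as np

def triggering_return_period(factor_of_safety, dt_hours=1.0):
    """Estimate the return period of slope instability as the mean
    inter-arrival time (in years) of triggering episodes, i.e. onsets
    where the factor of safety crosses from >= 1 down to < 1."""
    fs = np.asarray(factor_of_safety, dtype=float)
    unstable = fs < 1.0
    # An episode starts where FS crosses from stable to unstable.
    onsets = np.flatnonzero(unstable[1:] & ~unstable[:-1]) + 1
    if unstable[0]:
        onsets = np.concatenate(([0], onsets))
    if len(onsets) < 2:
        return float("inf")  # fewer than two events: no mean spacing
    return np.mean(np.diff(onsets)) * dt_hours / (24 * 365.25)

# Hypothetical hourly FS series: stable except for two brief episodes.
fs = np.ones(24 * 365 * 10)
fs[1000:1010] = 0.9
fs[50000:50005] = 0.95
print(triggering_return_period(fs))
```

In the actual methodology the FS series would come from driving TRIGRS with the Neyman-Scott synthetic rainfall rather than being specified directly.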

  3. Modeling of the illumination driven coma of 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Bieler, André

    2015-04-01

    In this paper we present results from modeling the neutral coma properties of 67P/Churyumov-Gerasimenko (C-G) observed by the Rosetta ROSINA experiment, using three different model approaches. The basic assumption for all models is that the outgassing properties of C-G are mainly illumination driven. With this assumption, all models are capable of reproducing most features in the neutral coma signature as detected by the ROSINA-COPS instrument over several months. The models include the realistic shape model of the nucleus to calculate the illumination conditions over time, which are used to define the boundary conditions for the hydrodynamic (BATS-R-US code) and Direct Simulation Monte Carlo (AMPS code) simulations. The third model computes the projection of the total illumination on the comet surface towards the spacecraft. Our results indicate that at large heliocentric distances (3.5 to 2.8 AU) most gas coma structures observed by the in-situ instruments can be explained by uniformly distributed activity regions spread over the whole nucleus surface.
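    The illumination-driven assumption can be illustrated with a minimal facet model: each shape-model facet contributes outgassing in proportion to the cosine of its local solar incidence angle. This sketch deliberately ignores facet-on-facet shadowing, which a realistic shape model must handle; the function name and example facets are illustrative.

```python
import numpy as np

def facet_outgassing(normals, areas, sun_dir):
    """Illumination-driven outgassing sketch: each facet contributes
    cos(solar incidence angle) times its area; facets facing away
    from the Sun contribute nothing.  Mutual shadowing is ignored."""
    s = np.asarray(sun_dir, dtype=float)
    s = s / np.linalg.norm(s)
    mu = np.maximum(np.asarray(normals, dtype=float) @ s, 0.0)
    return mu * np.asarray(areas, dtype=float)

# Three hypothetical unit facets: facing the Sun, tilted 60 degrees, facing away.
normals = [(1, 0, 0), (0.5, np.sqrt(3) / 2, 0), (-1, 0, 0)]
flux = facet_outgassing(normals, [1.0, 1.0, 1.0], (1, 0, 0))
print(flux)  # contributions of 1.0, 0.5 and 0.0
```

Summing such facet contributions over time-varying sun directions is, in spirit, what the boundary conditions for the hydrodynamic and DSMC runs are built from.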

  4. Reconstruction of electrocardiogram using ionic current models for heart muscles.

    PubMed

    Yamanaka, A; Okazaki, K; Urushibara, S; Kawato, M; Suzuki, R

    1986-11-01

    A digital computer model is presented for the simulation of the electrocardiogram during ventricular activation and repolarization (QRS-T waves). Part of the ventricular septum and the left ventricular free wall of the heart are represented by a two-dimensional array of 730 homogeneous functional units. Ionic current models are used to determine the spatial distribution of the electrical activities of these units at each instant of time during the simulated cardiac cycle. In order to reconstruct the electrocardiogram, the model is expanded three-dimensionally under an equipotential assumption along the third axis, and the surface potentials are then calculated using the solid angle method. Our digital computer model can be used to improve the understanding of the relationship between body surface potentials and intracellular electrical events.

  5. Two-dimensional CFD modeling of the heat and mass transfer process during sewage sludge drying in a solar dryer

    NASA Astrophysics Data System (ADS)

    Krawczyk, Piotr; Badyda, Krzysztof

    2011-12-01

    The paper presents the key assumptions of a mathematical model that describes heat and mass transfer phenomena in a solar sewage-drying process, as well as the techniques used for solving this model with the Fluent computational fluid dynamics (CFD) software. Special attention was paid to the implementation of boundary conditions on the sludge surface, the physical boundary between the gaseous phase (air) and the solid phase (dried matter). These conditions allow heat and mass transfer between the media to be modelled during the first and second drying stages. Selection of the computational geometry, a fragment of the entire drying facility, is also discussed. Selected modelling results are presented in the final part of the paper.

  6. Patient flow within UK emergency departments: a systematic review of the use of computer simulation modelling methods

    PubMed Central

    Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos

    2017-01-01

    Objectives Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. 
There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459

  7. On the Genealogy of Asexual Diploids

    NASA Astrophysics Data System (ADS)

    Lam, Fumei; Langley, Charles H.; Song, Yun S.

    Given molecular genetic data from diploid individuals that, at present, reproduce mostly or exclusively asexually without recombination, an important problem in evolutionary biology is detecting evidence of past sexual reproduction (i.e., meiosis and mating) and recombination (both meiotic and mitotic). However, currently there is a lack of computational tools for carrying out such a study. In this paper, we formulate a new problem of reconstructing diploid genealogies under the assumption of no sexual reproduction or recombination, with the ultimate goal being to devise genealogy-based tools for testing deviation from these assumptions. We first consider the infinite-sites model of mutation and develop linear-time algorithms to test the existence of an asexual diploid genealogy compatible with the infinite-sites model of mutation, and to construct one if it exists. Then, we relax the infinite-sites assumption and develop an integer linear programming formulation to reconstruct asexual diploid genealogies with the minimum number of homoplasy (back or recurrent mutation) events. We apply our algorithms on simulated data sets with sizes of biological interest.
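    The classical building block for infinite-sites compatibility checks is the four-gamete condition; the paper's diploid genealogy algorithms go well beyond it (and achieve linear time with more careful machinery), but a sketch of the haploid condition shows the flavor. This pairwise check is quadratic in the number of sites and is offered only as an illustration.

```python
from itertools import combinations

def four_gamete_compatible(matrix):
    """Check the classical four-gamete condition on a binary matrix
    (rows = sequences, columns = sites): under the infinite-sites
    model with no recombination, no pair of sites may exhibit all
    four gametes 00, 01, 10, 11."""
    n_sites = len(matrix[0])
    for i, j in combinations(range(n_sites), 2):
        gametes = {(row[i], row[j]) for row in matrix}
        if len(gametes) == 4:
            return False  # this site pair forces homoplasy or recombination
    return True

# Compatible data: no site pair shows all four gametes.
print(four_gamete_compatible([(0, 0), (0, 1), (1, 1)]))   # True
# Incompatible: sites 0 and 1 show 00, 01, 10 and 11.
print(four_gamete_compatible([(0, 0), (0, 1), (1, 0), (1, 1)]))  # False
```

When the condition fails, minimizing the number of homoplasy events (the paper's integer linear programming step) becomes the natural relaxation.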

  8. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
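    The core of such a hazard calculation can be sketched with the commonly used casualty-area convention for a uniformly populated footprint. The 0.36 m² human cross-section and the Poisson step are typical assumptions in this class of analysis, not specifics taken from the paper.

```python
import math

def casualty_expectation(debris_areas_m2, pop_density_per_km2,
                         human_area_m2=0.36):
    """Expected casualties for surviving debris falling on a region of
    uniform population density, using the common casualty-area
    convention A_c = (sqrt(A_debris) + sqrt(A_human))^2.  The human
    cross-section default is a typical assumed value."""
    density = pop_density_per_km2 / 1e6  # convert to people per m^2
    e_c = sum((math.sqrt(a) + math.sqrt(human_area_m2)) ** 2 * density
              for a in debris_areas_m2)
    # Probability of one or more casualties under a Poisson assumption.
    p_casualty = 1.0 - math.exp(-e_c)
    return e_c, p_casualty

# Three hypothetical surviving fragments over a 50 people/km^2 region.
e_c, p = casualty_expectation([0.5, 1.0, 2.0], 50.0)
print(e_c, p)
```

The assumptions the paper re-examines enter exactly here: the uniform-density footprint, the independence of fragments, and the Poisson step from expectation to probability.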

  9. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.

    1992-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved, and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of high atomic oxygen fluence, directed-ram LDEF results has enabled mechanistic implications to be drawn by adjusting Monte Carlo modeling assumptions to match results observed by scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with a comparison of observed ground laboratory and LDEF results.
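    The kind of Monte Carlo model described can be illustrated with a deliberately minimal 2-D sketch of undercutting at a coating defect. The grid size, walk rules, and reaction probability below are illustrative stand-ins, not the mechanistic parameters of the LDEF model.

```python
import random

def simulate_undercut(n_atoms=5000, p_react=0.1, width=21, depth=10, seed=0):
    """Minimal 2-D Monte Carlo sketch of atomic-oxygen undercutting of a
    coated polymer: the coating is intact except for a one-cell defect,
    atoms arrive there, random-walk through already-eroded space, and on
    meeting intact polymer react (erode the cell) with probability
    p_react, otherwise scattering.  All parameters are illustrative."""
    rng = random.Random(seed)
    polymer = [[True] * width for _ in range(depth)]  # True = intact
    entry = width // 2
    eroded = 0
    for _ in range(n_atoms):
        x, y = entry, 0  # atom enters through the coating defect
        if polymer[y][x]:
            if rng.random() < p_react:
                polymer[y][x] = False
                eroded += 1
            continue  # consumed, or scattered back out of the defect
        for _ in range(200):  # cap each atom's walk
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            nx, ny = x + dx, y + dy
            if ny < 0:
                if x == entry:
                    break    # escapes back out through the defect
                continue     # blocked by the intact coating above
            if nx < 0 or nx >= width or ny >= depth:
                continue     # treat domain edges as walls
            if polymer[ny][nx]:
                if rng.random() < p_react:
                    polymer[ny][nx] = False
                    eroded += 1
                    break    # atom consumed by the reaction
                continue     # non-reactive bounce off the polymer
            x, y = nx, ny    # move through eroded space
    return eroded, polymer
```

Running this with the defaults erodes a cavity that widens beneath the intact coating, which is the qualitative undercutting signature compared against microscopy in studies of this kind.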

  10. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
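    The equilibrium wall-stress boundary condition described above can be sketched as a solve of the log law for the friction velocity, given the LES velocity at the first off-wall point. The von Kármán constant and intercept below are typical values, not taken from this work.

```python
import math

def wall_stress_loglaw(U, y, nu, rho=1.0, kappa=0.41, B=5.2):
    """Equilibrium wall-model sketch: given the LES velocity U at the
    first off-wall point y, solve the log law
        U/u_tau = (1/kappa) ln(y u_tau / nu) + B
    for the friction velocity u_tau by fixed-point iteration, then
    return the wall stress tau_w = rho u_tau^2."""
    u_tau = math.sqrt(nu * U / y)  # initial guess from a linear profile
    for _ in range(100):
        u_tau_new = U / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
        if abs(u_tau_new - u_tau) < 1e-12:
            break
        u_tau = u_tau_new
    return rho * u_tau ** 2

# Hypothetical numbers: U = 20 at y = 0.1 with nu = 1e-5.
print(wall_stress_loglaw(20.0, 0.1, 1e-5))
```

This stress is then imposed as the wall boundary condition in place of resolving the viscous and buffer layers; the work above asks when this equilibrium assumption is actually valid.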

  11. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state-of-the-art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design, while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  13. Jet Noise Diagnostics Supporting Statistical Noise Prediction Methods

    NASA Technical Reports Server (NTRS)

    Bridges, James E.

    2006-01-01

    The primary focus of my presentation is the development of the jet noise prediction code JeNo with most examples coming from the experimental work that drove the theoretical development and validation. JeNo is a statistical jet noise prediction code, based upon the Lilley acoustic analogy. Our approach uses time-average 2-D or 3-D mean and turbulent statistics of the flow as input. The output is source distributions and spectral directivity. NASA has been investing in development of statistical jet noise prediction tools because these seem to fit the middle ground that allows enough flexibility and fidelity for jet noise source diagnostics while having reasonable computational requirements. These tools rely on Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) solutions as input for computing far-field spectral directivity using an acoustic analogy. There are many ways acoustic analogies can be created, each with a series of assumptions and models, many often taken unknowingly. And the resulting prediction can be easily reverse-engineered by altering the models contained within. However, only an approach which is mathematically sound, with assumptions validated and modeled quantities checked against direct measurement will give consistently correct answers. Many quantities are modeled in acoustic analogies precisely because they have been impossible to measure or calculate, making this requirement a difficult task. The NASA team has spent considerable effort identifying all the assumptions and models used to take the Navier-Stokes equations to the point of a statistical calculation via an acoustic analogy very similar to that proposed by Lilley. Assumptions have been identified and experiments have been developed to test these assumptions. In some cases this has resulted in assumptions being changed. 
Beginning with the CFD used as input to the acoustic analogy, models for turbulence closure used in RANS CFD codes have been explored and compared against measurements of mean and rms velocity statistics over a range of jet speeds and temperatures. Models for flow parameters used in the acoustic analogy, most notably the space-time correlations of velocity, have been compared against direct measurements, and modified to better fit the observed data. These measurements have been extremely challenging for hot, high-speed jets, and represent a sizeable investment in instrumentation development. As an intermediate check that the analysis is predicting the physics intended, phased arrays have been employed to measure source distributions for a wide range of jet cases. And finally, careful far-field spectral directivity measurements have been taken for final validation of the prediction code. Examples of each of these experimental efforts will be presented. The main result of these efforts is a noise prediction code, named JeNo, which is in mid-development. JeNo is able to consistently predict spectral directivity, including aft-angle directivity, for subsonic cold jets of most geometries. Current development on JeNo is focused on extending its capability to hot jets, requiring inclusion of a previously neglected second source associated with thermal fluctuations. A secondary result of the intensive experimentation is the archiving of various flow statistics applicable to other acoustic analogies and to the development of time-resolved prediction methods. These will be of lasting value as we look ahead at future challenges to the aeroacoustic experimentalist.

  14. Radiation calculation in non-equilibrium shock layer

    NASA Astrophysics Data System (ADS)

    Dubois, Joanne

    2005-05-01

    The purpose of this work was to investigate confidence in radiation predictions for an entry probe body in high-temperature conditions, taking the Huygens probe as an example. Existing engineering flowfield codes for shock tube and blunt body simulations were used, and updated when necessary, to compute species molar fractions and flow field parameters. An interface to the PARADE radiation code allowed estimates of the radiative emission to the body surface to be made. A validation of the radiative models in equilibrium conditions was first made with published data and by comparison with shock tube test case data from the IUSTI TCM2 facility with a Titan-like atmosphere test gas. Further verifications were made in non-equilibrium with published computations. These comparisons were initially made using a Boltzmann assumption for the electronic states of CN. An attempt was also made to use pseudo-species for the individual electronic states of CN. Assumptions made in this analysis are described, and a further comparison with shock tube data is undertaken. Several CN radiation datasets have been used, and while improvements to the modelling tools have been made, considerable uncertainty seems to remain in the modelling of the non-equilibrium emission using simple engineering methods.

  15. A priori motion models for four-dimensional reconstruction in gated cardiac SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalush, D.S.; Tsui, B.M.W.; Cui, Lin

    1996-12-31

    We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. For evaluation, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without a prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
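    The motion-aware prior can be sketched as a quadratic Gibbs energy that penalizes differences between each voxel and the voxel its assigned motion vector points to in the next time frame. The 1-D arrays and integer shifts here are a toy stand-in for the paper's 4-D cliques and weights.

```python
import numpy as np

def motion_prior_energy(frames, shifts, beta=1.0):
    """Quadratic temporal Gibbs energy with known motion: activity at
    voxel v in frame t is encouraged to match voxel v + shifts[t] in
    frame t+1 (a 1-D toy version of assigning a motion vector to each
    4-D voxel).  Lower energy means more consistent with the motion."""
    energy = 0.0
    for t in range(len(frames) - 1):
        cur = np.asarray(frames[t], dtype=float)
        nxt = np.asarray(frames[t + 1], dtype=float)
        s = shifts[t]
        # Only compare voxels whose motion target stays inside the array.
        v = np.arange(len(cur))
        valid = (v + s >= 0) & (v + s < len(nxt))
        energy += beta * np.sum((cur[valid] - nxt[v[valid] + s]) ** 2)
    return energy

# A feature moving one voxel per frame: the correct motion model
# (shift = 1 each frame) gives zero energy; a static prior does not.
f = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(motion_prior_energy(f, [1, 1]))  # 0.0
print(motion_prior_energy(f, [0, 0]))  # 4.0
```

This mirrors the paper's finding in miniature: a prior matched to the true motion can smooth noise without blurring the moving feature, whereas motion-agnostic temporal smoothing penalizes the motion itself.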

  16. Evaluation of the fishery status for King Soldier Bream Argyrops spinifer in Pakistan using the software CEDA and ASPIC

    NASA Astrophysics Data System (ADS)

    Memon, Aamir Mahmood; Liu, Qun; Memon, Khadim Hussain; Baloch, Wazir Ali; Memon, Asfandyar; Baset, Abdul

    2015-07-01

    Catch and effort data were analyzed to estimate the maximum sustainable yield (MSY) of King Soldier Bream, Argyrops spinifer (Forsskål, 1775, Family: Sparidae), and to evaluate the present status of the fish stocks exploited in Pakistani waters. The catch and effort data for the 25-year period 1985-2009 were analyzed using two computer software packages, CEDA (catch and effort data analysis) and ASPIC (a surplus production model incorporating covariates). The maximum catch of 3 458 t was observed in 1988 and the minimum catch of 1 324 t in 2005, while the average annual catch of A. spinifer over the 25 years was 2 500 t. The CEDA package implements the surplus production models of Fox, Schaefer, and Pella-Tomlinson under three error assumptions (normal, log-normal, and gamma), while the ASPIC package implements the Fox and logistic surplus production models. In CEDA, MSY was estimated with an initial proportion (IP) of 0.8, because the starting catch was approximately 80% of the maximum catch. Excluding gamma, which showed maximization failures, the MSY estimates from CEDA with the Fox model under the two remaining error assumptions were 1 692.08 t (R² = 0.572) and 1 694.09 t (R² = 0.606), and from the Schaefer and Pella-Tomlinson models 2 390.95 t (R² = 0.563) and 2 380.06 t (R² = 0.605), respectively. The MSY estimated by the Fox model was conservative compared to the Schaefer and Pella-Tomlinson models, whose MSY values were identical. The MSY values computed with ASPIC using the Fox and logistic surplus production models were 1 498 t (R² = 0.917) and 2 488 t (R² = 0.897), respectively. Overall, the MSY estimates from CEDA were about 1 700-2 400 t and those from ASPIC were 1 500-2 500 t. The estimates output by the CEDA and ASPIC packages indicate that the stock is overfished and needs effective management to reduce the fishing effort on the species in Pakistani waters.
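    The equilibrium MSY implied by the fitted surplus-production models follows directly from their growth functions. The r and K values below are hypothetical, since the abstract reports only the resulting MSY estimates, not the fitted parameters.

```python
import math

def schaefer_msy(r, K):
    # Schaefer logistic surplus production: dB/dt = rB(1 - B/K);
    # production peaks at B = K/2, giving MSY = rK/4.
    return r * K / 4.0

def fox_msy(r, K):
    # Fox model: dB/dt = rB ln(K/B); production peaks at B = K/e,
    # giving MSY = rK/e.
    return r * K / math.e

r, K = 0.3, 30000.0  # hypothetical parameters, not from the study
print(schaefer_msy(r, K))  # 2250.0
print(fox_msy(r, K))
```

Note that for identical r and K, the Fox peak (rK/e ≈ 0.37 rK) actually exceeds the Schaefer peak (rK/4); the Fox model's more conservative MSY in studies like this one arises because the two models fit different r and K to the same data.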

  17. A Methodology for Model Comparison Using the Theater Simulation of Airbase Resources and All Mobile Tactical Air Force Models

    DTIC Science & Technology

    1992-09-01

    ...ease with which a model is employed may depend on several factors, among them the users' past experience in modeling, preferences for menu-driven... partially on our knowledge of important logistics factors, partially on the past work of Diener (12), and partially on the assumption that comparison of... flexibility in output report selection. The minimum output was used in each instance to conserve computer storage and to minimize the consumption of paper.

  18. Boundary-layer computational model for predicting the flow and heat transfer in sudden expansions

    NASA Technical Reports Server (NTRS)

    Lewis, J. P.; Pletcher, R. H.

    1986-01-01

    Fully developed turbulent and laminar flows through symmetric planar and axisymmetric expansions with heat transfer were modeled using a finite-difference discretization of the boundary-layer equations. By using the boundary-layer equations to model separated flow in place of the Navier-Stokes equations, computational effort was reduced permitting turbulence modelling studies to be economically carried out. For laminar flow, the reattachment length was well predicted for Reynolds numbers as low as 20 and the details of the trapped eddy were well predicted for Reynolds numbers above 200. For turbulent flows, the Boussinesq assumption was used to express the Reynolds stresses in terms of a turbulent viscosity. Near-wall algebraic turbulence models based on Prandtl's-mixing-length model and the maximum Reynolds shear stress were compared.

  19. Standardized input for Hanford environmental impact statements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.

    1981-05-01

    Models and computer programs for simulating the environmental behavior of radionuclides and the resulting radiation dose to humans have been developed over the years by the Environmental Analysis Section staff, Ecological Sciences Department at the Pacific Northwest Laboratory (PNL). Methodologies have evolved for calculating radiation doses from many exposure pathways for any type of release mechanism. Depending on the situation or process being simulated, different sets of computer programs, assumptions, and modeling techniques must be used. This report is a compilation of recommended computer programs and necessary input information for use in calculating doses to members of the general public for environmental impact statements prepared for DOE activities to be conducted on or near the Hanford Reservation.

  20. Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy.

    PubMed

    Morimoto, Satoshi; Remijn, Gerard B; Nakajima, Yoshitaka

    2016-01-01

    Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment, in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy. Following this, we compared the explanatory power of the models. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.
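    The simplest stochastic model of this kind is a first-order Markov model over chord labels: expectancy for the upcoming chord is its estimated transition probability from the current one. This is a deliberately reduced sketch; the chord labels and training sequences are hypothetical, and the paper's models are more elaborate (they track internally updated tonal assumptions rather than a single preceding chord).

```python
from collections import defaultdict

def train_chord_model(sequences):
    """Estimate first-order transition probabilities
    P(next chord | current chord) from chord-label sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {c: {n: k / sum(d.values()) for n, k in d.items()}
            for c, d in counts.items()}

def expectancy(model, context_chord, candidate):
    # Expectancy of a candidate chord given only the immediately
    # preceding chord; 0 if the transition was never observed.
    return model.get(context_chord, {}).get(candidate, 0.0)

# Hypothetical training data in which G always resolves to C.
model = train_chord_model([["C", "F", "G", "C"], ["C", "G", "C"]])
print(expectancy(model, "G", "C"))  # 1.0
print(expectancy(model, "C", "F"))  # 0.5
```

Comparing the explanatory power of such models against richer, tonality-tracking ones is exactly the kind of model comparison the study carried out on behavioral relatedness judgments.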

  2. Surface-roughness considerations for atmospheric correction of ocean color sensors. I: The Rayleigh-scattering component.

    PubMed

    Gordon, H R; Wang, M

    1992-07-20

    The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, Lr(r), to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm Lr(r) is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error depends significantly on the assumptions made in regard to the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1 digital count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in Lr(r) caused by ignoring surface roughness is shown to be of the same order of magnitude as that caused by uncertainties of +/- 15 mb in the surface atmospheric pressure or of +/- 50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of Lr(r) could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.
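    The pressure sensitivity quoted above maps directly onto the Rayleigh optical thickness, which scales linearly with surface pressure. A hedged sketch follows, using a commonly quoted Hansen & Travis-style polynomial fit; the exact coefficients are an assumption to be checked against a radiative transfer reference.

```python
def rayleigh_tau(lam_um, p_mb=1013.25):
    # Approximate Rayleigh optical thickness at wavelength lam_um (micrometres),
    # scaled linearly by surface pressure. The polynomial is a commonly quoted
    # Hansen & Travis-style fit; treat the exact coefficients as an assumption.
    tau0 = 0.008569 * lam_um ** -4 * (1 + 0.0113 * lam_um ** -2
                                      + 0.00013 * lam_um ** -4)
    return tau0 * (p_mb / 1013.25)

for lam in (0.443, 0.550, 0.670):
    t = rayleigh_tau(lam)
    dt = rayleigh_tau(lam, 1013.25 + 15.0) - t
    print(f"{lam} um: tau_R = {t:.4f}, +15 mb shifts it by {100 * dt / t:.2f}%")
```

    A +/- 15 mb pressure uncertainty therefore translates into roughly a 1.5% uncertainty in the Rayleigh contribution at every wavelength, which is the comparison drawn in the abstract.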

  3. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating how an ABM representing fraudulent behavior in a public service delivery system, originally developed in the Java-based MASON toolkit, was replicated in NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, replication helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  4. Segmenting words from natural speech: subsegmental variation in segmental cues.

    PubMed

    Rytting, C Anton; Brew, Chris; Fosler-Lussier, Eric

    2010-06-01

    Most computational models of word segmentation are trained and tested on transcripts of speech, rather than the speech itself, and assume that speech is converted into a sequence of symbols prior to word segmentation. We present a way of representing speech corpora that avoids this assumption, and preserves acoustic variation present in speech. We use this new representation to re-evaluate a key computational model of word segmentation. One finding is that high levels of phonetic variability degrade the model's performance. While robustness to phonetic variability may be intrinsically valuable, this finding needs to be complemented by parallel studies of the actual abilities of children to segment phonetically variable speech.
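    The symbolic-input assumption the abstract questions can be made concrete with a minimal transitional-probability segmenter over an idealized phone string, the kind of pre-symbolized input such models typically receive. The corpus and the nonsense "words" below are invented for illustration.

```python
from collections import Counter

def segment(stream, corpus):
    # TP(a -> b) = count(ab) / count(a), estimated from an unsegmented corpus;
    # a word boundary is posited at local minima of TP along the stream.
    pairs = Counter(zip(corpus, corpus[1:]))
    firsts = Counter(corpus[:-1])
    tp = lambda a, b: pairs[(a, b)] / firsts[a] if firsts[a] else 0.0
    probs = [tp(a, b) for a, b in zip(stream, stream[1:])]
    words, cur = [], [stream[0]]
    for i in range(1, len(stream)):
        j = i - 1
        if 0 < j < len(probs) - 1 and probs[j] < probs[j - 1] and probs[j] < probs[j + 1]:
            words.append("".join(cur))
            cur = []
        cur.append(stream[i])
    words.append("".join(cur))
    return words

# Hypothetical stream built from two nonsense "words", badi and gudi.
corpus = list("badigudi" * 3 + "ba")
print(segment(list("badigudi"), corpus))
```

    The segmenter works only because every token of a phone is identical; once acoustic variability makes tokens of /d/ differ, the count-based transition statistics degrade, which is the effect the re-evaluation above quantifies.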

  5. Spiral Growth in Plants: Models and Simulations

    ERIC Educational Resources Information Center

    Allen, Bradford D.

    2004-01-01

    The analysis and simulation of spiral growth in plants integrates algebra and trigonometry in a botanical setting. When the ideas presented here are used in a mathematics classroom/computer lab, students can better understand how basic assumptions about plant growth lead to the golden ratio and how the use of circular functions leads to accurate…
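    The link between simple growth assumptions, circular functions, and the golden ratio can be sketched with a Vogel-style floret model; the constants here are the standard illustrative choices, not values from the article.

```python
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio
golden_angle = 360.0 * (1 - 1 / phi)  # divergence angle, ~137.5 degrees

def floret(n, c=1.0):
    # Vogel-style model: the n-th floret sits at angle n*golden_angle
    # with radius c*sqrt(n), placed using circular functions.
    theta = math.radians(n * golden_angle)
    r = c * math.sqrt(n)
    return r * math.cos(theta), r * math.sin(theta)

print(round(golden_angle, 2))   # ~137.51
print(floret(1), floret(2))
```

    Plotting many florets from this rule produces the familiar interlocking spiral (parastichy) patterns, which is the classroom connection between the growth assumption and the golden ratio.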

  6. RVR Meander – Migration of meandering rivers in homogeneous and heterogeneous floodplains using physically-based bank erosion

    USDA-ARS's Scientific Manuscript database

    The RVR Meander platform for computing long-term meandering-channel migration is presented, together with a method for planform migration based on the modeling of the streambank erosion processes of hydraulic erosion and mass failure. An application to a real-world river, with assumption of homogene...

  7. United States Air Force Training Line Simulator. Final Report.

    ERIC Educational Resources Information Center

    Nauta, Franz; Pierce, Michael B.

    This report describes the technical aspects and potential applications of a computer-based model simulating the flow of airmen through basic training and entry-level technical training. The objective of the simulation is to assess the impacts of alternative recruit classification and training policies under a wide variety of assumptions regarding…

  8. Learning about Language and Learners from Computer Programs

    ERIC Educational Resources Information Center

    Cobb, Tom

    2010-01-01

    Making Nation's text analysis software accessible via the World Wide Web has opened up an exploration of how his learning principles can best be realized in practice. This paper discusses 3 representative episodes in the ongoing exploration. The first concerns an examination of the assumptions behind modeling what texts look like to learners with…

  9. High Speed Cylindrical Roller Bearing Analysis, SKF Computer Program CYBEAN. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Pirvics, J.

    1978-01-01

    The CYBEAN (CYlindrical BEaring ANalysis) program was created to detail radially loaded, aligned and misaligned Cylindrical roller bearing performance under a variety of operating conditions. The models and associated mathematics used within CYBEAN are described. The user is referred to the material for formulation assumptions and algorithm detail.

  10. SSL: A Theory of How People Learn to Select Strategies

    ERIC Educational Resources Information Center

    Rieskamp, Jorg; Otto, Philipp E.

    2006-01-01

    The assumption that people possess a repertoire of strategies to solve the inference problems they face has been raised repeatedly. However, a computational model specifying how people select strategies from their repertoire is still lacking. The proposed strategy selection learning (SSL) theory predicts a strategy selection process on the basis…

  11. Immediate Effects of Body Checking Behaviour on Negative and Positive Emotions in Women with Eating Disorders: An Ecological Momentary Assessment Approach.

    PubMed

    Kraus, Nicole; Lindenberg, Julia; Zeeck, Almut; Kosfelder, Joachim; Vocks, Silja

    2015-09-01

    Cognitive-behavioural models of eating disorders state that body checking arises in response to negative emotions in order to reduce the aversive emotional state and is therefore negatively reinforced. This study empirically tests this assumption. For a seven-day period, women with eating disorders (n = 26) and healthy controls (n = 29) were provided with a handheld computer for assessing occurring body checking strategies as well as negative and positive emotions. Serving as control condition, randomized computer-emitted acoustic signals prompted reports on body checking and emotions. There was no difference in the intensity of negative emotions before body checking and in control situations across groups. However, from pre- to post-body checking, an increase in negative emotions was found. This effect was more pronounced in women with eating disorders compared with healthy controls. Results are contradictory to the assumptions of the cognitive-behavioural model, as body checking does not seem to reduce negative emotions. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  12. Study of CPM Device used for Rehabilitation and Effective Pain Management Following Knee Alloplasty

    NASA Astrophysics Data System (ADS)

    Trochimczuk, R.; Kuźmierowski, T.; Anchimiuk, P.

    2017-02-01

    This paper defines the design assumptions for the construction of an original demonstrator of a CPM device, based on which a solid virtual model will be created in a CAD software environment. The overall dimensions and other input parameters for the design were determined for the entire patient population according to an anatomical atlas of human measures. The medical and physiotherapeutic community were also consulted with respect to the proposed engineering solutions. The virtual model of the CPM device will be used for computer simulations of changes in motion parameters as a function of time, accounting for loads and static states. The results obtained from computer simulation will be used to confirm the correctness of the adopted design assumptions and of the accepted structure of the CPM mechanism, and potentially to introduce necessary corrections. They will also provide a basis for the development of a control strategy for the laboratory prototype and for the selection of the patient's rehabilitation strategy in the future. The paper concludes by identifying directions for further research.

  13. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to an arbitrarily large number of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.

  14. Computational Modeling and Validation for Hypersonic Inlets

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1996-01-01

    Hypersonic inlet research activity at NASA is reviewed. The basis for the paper is the experimental tests performed with three inlets: the NASA Lewis Research Center Mach 5, the McDonnell Douglas Mach 12, and the NASA Langley Mach 18. Both three-dimensional PNS and NS codes have been used to compute the flow within the three inlets. Modeling assumptions in the codes involve the turbulence model, the nature of the boundary layer, shock wave-boundary layer interaction, and the flow spilled to the outside of the inlet. Use of the codes and the experimental data are helping to develop a clearer understanding of the inlet flow physics and to focus on the modeling improvements required in order to arrive at validated codes.

  15. Statistical models of lunar rocks and regolith

    NASA Technical Reports Server (NTRS)

    Marcus, A. H.

    1973-01-01

    The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.

  16. Turbulence simulation mechanization for Space Shuttle Orbiter dynamics and control studies

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; King, R. L.

    1977-01-01

    The current version of the NASA turbulence simulation model, in the form of a digital computer program, TBMOD, is described. The logic of the program is discussed and all inputs and outputs are defined. An alternate method of shear simulation suitable for incorporation into the model is presented. The simulation is based on a von Karman spectrum and the assumption of isotropy. The resulting spectral density functions for the shear model are included.
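    The von Karman spectrum underlying such simulations can be written down explicitly. One commonly used longitudinal form is sketched below; the normalization and the 1.339 constant follow the usual handbook convention, but treat them as assumptions to be checked against the reference actually used.

```python
import math

def von_karman_u(omega, sigma_u=1.0, L_u=2500.0):
    # Longitudinal von Karman turbulence PSD (one common handbook form):
    #   Phi(omega) = sigma^2 * (2 L / pi) / (1 + (1.339 L omega)^2)^(5/6),
    # where omega is spatial frequency and L_u the turbulence length scale
    # in consistent units. Exact normalization is an assumption here.
    return sigma_u ** 2 * (2 * L_u / math.pi) / (1 + (1.339 * L_u * omega) ** 2) ** (5 / 6)

print(von_karman_u(0.0), von_karman_u(1e-3))
```

    The defining features are the flat low-frequency plateau at sigma^2 * 2L/pi and the -5/3 high-frequency rolloff, consistent with isotropic inertial-range turbulence.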

  17. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the predictions of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed, based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
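    The vector-length monotonicity assumption can be sketched directly: apply the initial linear transformation, take the Euclidean length of the sensor responses, and compare it to a threshold. The sensor matrix and threshold below are hypothetical.

```python
import math

def detect(stimulus, sensors, threshold):
    # Apply the assumed initial linear transformation (one sensor per row),
    # then declare detection when the response vector length exceeds the
    # threshold, per the monotonicity assumption described above.
    responses = [sum(w * s for w, s in zip(row, stimulus)) for row in sensors]
    length = math.sqrt(sum(r * r for r in responses))
    return length, length > threshold

sensors = [[1.0, 0.5], [-0.5, 1.0]]   # hypothetical 2-sensor linear stage
print(detect([0.2, 0.1], sensors, threshold=0.3))
```

    Under this decision rule, any stimulus manipulation that leaves the response vector length unchanged leaves threshold performance unchanged, which is what makes the linear stage estimable from contrast-threshold data alone.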

  18. Nursing opinion leadership: a preliminary model derived from philosophic theories of rational belief.

    PubMed

    Anderson, Christine A; Whall, Ann L

    2013-10-01

    Opinion leaders are informal leaders who have the ability to influence others' decisions about adopting new products, practices or ideas. In the healthcare setting, the importance of translating new research evidence into practice has led to interest in understanding how opinion leaders could be used to speed this process. Despite continued interest, gaps in understanding opinion leadership remain. Agent-based models are computer models that have proven to be useful for representing dynamic and contextual phenomena such as opinion leadership. The purpose of this paper is to describe the work conducted in preparation for the development of an agent-based model of nursing opinion leadership. The aim of this phase of the model development project was to clarify basic assumptions about opinions, the individual attributes of opinion leaders and characteristics of the context in which they are effective. The process used to clarify these assumptions was the construction of a preliminary nursing opinion leader model, derived from philosophical theories about belief formation. © 2013 John Wiley & Sons Ltd.

  19. One dimensional heavy ion beam transport: Energy independent model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Farhat, Hamidullah

    1990-01-01

    Attempts are made to model the transport problem for heavy ion beams in various targets, employing the current level of understanding of the physics of high-charge and energy (HZE) particle interaction with matter. An energy independent transport model, with the most simplified assumptions and proper parameters, is presented. The first and essential assumption in this case (energy independent transport) is the high energy characterization of the incident beam. The energy independent equation is solved and applied to high energy neon (Ne-20) and iron (Fe-56) beams in water. The analytical solution is compared with a numerical solution to determine the accuracy of the model. The lower limit energies for neon and iron to qualify as high energy beams are calculated from Barkas and Berger theory by the LBLFRG computer program. The calculated values in the density range of interest (50 g/sq cm of water) are 833.43 MeV/nuc for neon and 1597.68 MeV/nuc for iron. The analytical solution of the energy independent transport equation gives the fluxes of the different collision terms. The fluxes of individual collision terms are given, and the total fluxes are shown in graphs for different thicknesses of water. The values for the fluxes are calculated by the ANASTP computer code.

  20. Impact of unseen assumptions on communication of atmospheric carbon mitigation options

    NASA Astrophysics Data System (ADS)

    Elliot, T. R.; Celia, M. A.; Court, B.

    2010-12-01

    With the rapid access and dissemination of information made available through online and digital pathways, there is need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publically available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. 
Through a series of interactive ‘what if’ scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcome of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcome.

  1. Turbulence Model Predictions of Strongly Curved Flow in a U-Duct

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.; Morrison, Joseph H.

    2000-01-01

    The ability of three types of turbulence models to accurately predict the effects of curvature on the flow in a U-duct is studied. An explicit algebraic stress model performs slightly better than one- or two-equation linear eddy viscosity models, although it is necessary to fully account for the variation of the production-to-dissipation-rate ratio in the algebraic stress model formulation. In their original formulations, none of these turbulence models fully captures the suppressed turbulence near the convex wall, whereas a full Reynolds stress model does. Some of the underlying assumptions used in the development of algebraic stress models are investigated and compared with the computed flowfield from the full Reynolds stress model. Through this analysis, the assumption of Reynolds stress anisotropy equilibrium used in the algebraic stress model formulation is found to be incorrect in regions of strong curvature. By accounting for the local variation of the principal axes of the strain rate tensor, the explicit algebraic stress model correctly predicts the suppressed turbulence in the outer part of the boundary layer near the convex wall.

  2. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: Creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior. Automatically updates/calibrates system models using the latest streaming sensor data. Creates device specific models that capture the exact behavior of devices of the same type. Adapts to evolving systems. Can reduce computational complexity (faster simulations).
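    A generic example of updating a model continuously as sensor data streams in: one online least-squares update per telemetry sample. This is an illustrative streaming-regression sketch, not the data stream mining method of the invention, and the telemetry values are hypothetical.

```python
def sgd_update(theta, x, y, lr=0.05):
    # One online update of a linear device model y ~ theta[0] + theta[1]*x,
    # applied as each new telemetry sample arrives (illustrative sketch).
    err = theta[0] + theta[1] * x - y
    return [theta[0] - lr * err, theta[1] - lr * err * x]

theta = [0.0, 0.0]
# Hypothetical telemetry stream from a device obeying y = 2.0 + 0.5*x.
telemetry = [(v, 2.0 + 0.5 * v) for v in (1.0, 2.0, 3.0, 4.0)] * 200
for x, y in telemetry:
    theta = sgd_update(theta, x, y)
print(theta)  # drifts toward [2.0, 0.5] as data streams in
```

    Because every sample triggers an update, the fitted model tracks a drifting device without retraining from scratch, which is the "automatically updates/calibrates" property claimed above.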

  3. A Computational Method for Determining the Equilibrium Composition and Product Temperature in a LH2/LOX Combustor

    NASA Technical Reports Server (NTRS)

    Sozen, Mehmet

    2003-01-01

    In what follows, the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics, will be described. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
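    A minimal sketch of what solving for an equilibrium composition involves, reduced to a single dissociation reaction with an assumed equilibrium constant; the actual method iterates over the full set of LH2/LOX combustion products, and real Kp values come from thermodynamic tables.

```python
def equilibrium_alpha(Kp, P=1.0, tol=1e-10):
    # Dissociation fraction alpha for H2O <-> H2 + 1/2 O2 at total pressure P
    # (atm), from Kp = (x_H2 * x_O2**0.5 / x_H2O) * P**0.5, by bisection.
    # Kp is an assumed input here; real values come from thermodynamic data.
    def f(a):
        n = 1.0 + a / 2.0                       # total moles per mole H2O fed
        x_h2o, x_h2, x_o2 = (1 - a) / n, a / n, (a / 2) / n
        return x_h2 * x_o2 ** 0.5 / x_h2o * P ** 0.5 - Kp
    lo, hi = 1e-12, 1.0 - 1e-12                 # f is monotone increasing in a
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(equilibrium_alpha(Kp=0.01))
```

    Coupling such species balances with an energy balance (first law) and the entropy condition (second law) is what closes the system for the adiabatic flame temperature.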

  4. Improving finite element results in modeling heart valve mechanics.

    PubMed

    Earl, Emily; Mohammadi, Hadi

    2018-06-01

    Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
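    The least-squares-plus-finite-difference idea can be sketched in one dimension: fit nodal displacements from a coarse mesh with a local least-squares polynomial, then differentiate the fit to recover strain. This is a simplified illustration under assumed data, not the authors' continuum formulation.

```python
def lsq_quadratic(xs, us):
    # Least-squares fit u ~ a0 + a1*x + a2*x^2 via the 3x3 normal equations,
    # solved with Gaussian elimination and partial pivoting.
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(u * x ** i for x, u in zip(xs, us)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [arv - f * acv for arv, acv in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):   # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

xs = [0.0, 0.5, 1.0, 1.5, 2.0]      # coarse-mesh node positions (hypothetical)
us = [x * x for x in xs]            # nodal displacements sampled from u(x) = x^2
a0, a1, a2 = lsq_quadratic(xs, us)
strain = lambda x: a1 + 2 * a2 * x  # strain du/dx recovered from the fitted field
print(strain(1.0))                  # exact value is 2.0
```

    Because the fit smooths over several coarse-mesh nodes before differentiating, the recovered strain is less noisy than direct element-by-element differences, which is the enhancement the method exploits.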

  5. An improved panel method for the solution of three-dimensional leading-edge vortex flows. Volume 1: Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Lu, P.; Tinoco, E. N.

    1980-01-01

    An improved panel method for the solution of three-dimensional flow over wing and wing-body combinations with leading-edge vortex separation is presented. The method employs a three-dimensional inviscid flow model in which the configuration, the rolled-up vortex sheets, and the wake are represented by quadratic doublet distributions. The strength of the singularity distribution, as well as the shape and position of the vortex spirals, are computed in an iterative fashion starting with an assumed initial sheet geometry. The method calculates forces and moments as well as detailed surface pressure distributions. Improvements include the implementation of improved panel numerics to eliminate the highly nonlinear effects of ring vortices around double panel edges, and the development of a least squares procedure for damping vortex sheet geometry update instabilities. A complete description of the method is included. A variety of cases generated by the computer program implementing the method are presented, which verify the mathematical assumptions of the method and which compare computed results with experimental data to verify the underlying physical assumptions made by the method.

  6. NASA's Integrated Instrument Simulator Suite for Atmospheric Remote Sensing from Spaceborne Platforms (ISSARS) and Its Role for the ACE and GPM Missions

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Tao, Wei-Kuo; Hostetler, Chris; Kuo, Kwo-Sen; Matsui, Toshihisa; Jacob, Joseph C.; Niamsuwam, Noppasin; Johnson, Michael P.; Hair, John; Butler, Carolyn; et al.

    2011-01-01

    Forward simulation is an indispensable tool for the evaluation of precipitation retrieval algorithms as well as for studying snow/ice microphysics and their radiative properties. The main challenge of the implementation arises due to the size of the problem domain. To overcome this hurdle, assumptions need to be made to simplify complex cloud microphysics. It is important that these assumptions are applied consistently throughout the simulation process. ISSARS addresses this issue by providing a computationally efficient and modular framework that can integrate currently existing models and is also capable of being extended in future development. ISSARS is designed to accommodate the simulation needs of the Aerosol/Clouds/Ecosystems (ACE) mission and the Global Precipitation Measurement (GPM) mission: radars, microwave radiometers, and optical instruments such as lidars and polarimeters. ISSARS's computation is performed in three stages: input reconditioning (IRM), electromagnetic properties (scattering/emission/absorption) calculation (SEAM), and instrument simulation (ISM). The computation is implemented as a web service, while its configuration can be accessed through a web-based interface.

  7. Computational Modeling of Low-Density Ultracold Plasmas

    NASA Astrophysics Data System (ADS)

    Witte, Craig

    In this dissertation I describe a number of different computational investigations which I have undertaken during my time at Colorado State University. Perhaps the most significant of my accomplishments was the development of a general molecular dynamics model that simulates a wide variety of physical phenomena in ultracold plasmas (UCPs). This model formed the basis of most of the numerical investigations discussed in this thesis. The model utilized the massively parallel architecture of GPUs to achieve significant computing speed increases (up to 2 orders of magnitude) above traditional single core computing. This increased computing power allowed for each particle in an actual UCP experimental system to be explicitly modeled in simulations. By using this model, I was able to undertake a number of theoretical investigations into ultracold plasma systems. Chief among these was our lab's investigation of electron center-of-mass damping, in which the molecular dynamics model was an essential tool in interpreting the results of the experiment. Originally, it was assumed that this damping would solely be a function of electron-ion collisions. However, the model was able to identify an additional collisionless damping mechanism that was determined to be significant in the first iteration of our experiment. To mitigate this collisionless damping, the model was used to find a new parameter range where this mechanism was negligible. In this new parameter range, the model was an integral part in verifying the achievement of a record low measured UCP electron temperature of 1.57 +/- 0.28 K and a record high electron strong coupling parameter, Gamma, of 0.35 +/- 0.08. Additionally, the model, along with experimental measurements, was used to verify the breakdown of the standard weak coupling approximation for Coulomb collisions. The general molecular dynamics model was also used in other contexts.
    These included the modeling of both the formation process of ultracold plasmas and the thermalization of the electron component of an ultracold plasma. Our modeling of UCP formation is still in its infancy, and there is still much outstanding work. However, we have already discovered a previously unreported electron heating mechanism that arises from an external electric field being applied during UCP formation. Thermalization modeling showed that the ion density distribution plays a role in the thermalization of electrons in ultracold plasma, a consideration not typically included in plasma modeling. A Gaussian ion density distribution was shown to lead to a slightly faster electron thermalization rate than an equivalent uniform ion density distribution as a result of collisionless effects. Three distinct phases of UCP electron thermalization during formation were identified. Finally, the dissertation will describe additional computational investigations that preceded the general molecular dynamics model. These include simulations of ultracold plasma ion expansion driven by non-neutrality, as well as an investigation into electron evaporation. To test the effects of non-neutrality on ion expansion, a numerical model was developed that used the King model to describe the electron distribution for an arbitrary charge imbalance. The model found that increased non-neutrality of the plasma led to the rapid expansion of ions on the plasma exterior, which in turn led to a sharp ion cliff-like spatial structure. Additionally, this rapid expansion led to additional cooling of the electron component of the plasma. The evaporation modeling was used to test the underlying assumptions of previously developed analytical expressions for charged particle evaporation. The model used Monte Carlo techniques to simulate the collisions and the evaporation process.
The model found that neither of the underlying assumptions of the charged-particle evaporation expression held true for typical ultracold plasma parameters, and it provides a route for computation in spite of the breakdown of these two typical assumptions.
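As an illustration of the simulation machinery described above, the following is a minimal velocity-Verlet sketch of a Coulomb N-body step, in scaled units with masses and the Coulomb constant set to 1. It is not the dissertation's GPU code; the particle count, orbit setup, and parameters are invented for illustration. A symplectic integrator of this kind keeps the total energy nearly constant, which is the basic sanity check for any such model.

```python
def coulomb_acc(pos, charges, i):
    """Acceleration on particle i from pairwise Coulomb forces (2D, unit masses)."""
    ax = ay = 0.0
    xi, yi = pos[i]
    for j, (xj, yj) in enumerate(pos):
        if j == i:
            continue
        dx, dy = xi - xj, yi - yj
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += charges[i] * charges[j] * dx / r3
        ay += charges[i] * charges[j] * dy / r3
    return ax, ay

def verlet_step(pos, vel, charges, dt):
    """One velocity-Verlet step: positions, then forces, then velocities."""
    acc = [coulomb_acc(pos, charges, i) for i in range(len(pos))]
    pos = [(x + vx * dt + 0.5 * ax * dt * dt, y + vy * dt + 0.5 * ay * dt * dt)
           for (x, y), (vx, vy), (ax, ay) in zip(pos, vel, acc)]
    acc2 = [coulomb_acc(pos, charges, i) for i in range(len(pos))]
    vel = [(vx + 0.5 * (ax + bx) * dt, vy + 0.5 * (ay + by) * dt)
           for (vx, vy), (ax, ay), (bx, by) in zip(vel, acc, acc2)]
    return pos, vel

def total_energy(pos, vel, charges):
    ke = sum(0.5 * (vx * vx + vy * vy) for vx, vy in vel)
    pe = sum(charges[i] * charges[j] /
             ((pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2) ** 0.5
             for i in range(len(pos)) for j in range(i + 1, len(pos)))
    return ke + pe

# Electron-ion pair (charges -1, +1) set on a circular orbit of radius 1.
pos, vel, charges = [(1.0, 0.0), (-1.0, 0.0)], [(0.0, 0.5), (0.0, -0.5)], [-1.0, 1.0]
e0 = total_energy(pos, vel, charges)
for _ in range(2000):
    pos, vel = verlet_step(pos, vel, charges, 0.01)
drift = abs(total_energy(pos, vel, charges) - e0)
```

The production model differs mainly in scale (full experimental particle counts, evaluated in parallel on GPUs) rather than in the structure of the integration loop.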

  8. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterizing the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved, and should thus improve the reliability of predicting in-space durability of materials based on ground laboratory testing. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  9. The continuous adjoint approach to the k-ε turbulence model for shape optimization and optimal active control of turbulent flows

    NASA Astrophysics Data System (ADS)

    Papoutsis-Kiachagias, E. M.; Zymaris, A. S.; Kavvadias, I. S.; Papadimitriou, D. I.; Giannakoglou, K. C.

    2015-03-01

    The continuous adjoint to the incompressible Reynolds-averaged Navier-Stokes equations coupled with the low Reynolds number Launder-Sharma k-ε turbulence model is presented. Both shape and active flow control optimization problems in fluid mechanics are considered, aiming at minimum viscous losses. In contrast to the frequently used assumption of frozen turbulence, the adjoint to the turbulence model equations together with appropriate boundary conditions are derived, discretized and solved. This is the first time that the adjoint equations to the Launder-Sharma k-ε model have been derived. Compared to the formulation that neglects turbulence variations, the impact of additional terms and equations is evaluated. Sensitivities computed using direct differentiation and/or finite differences are used for comparative purposes. To demonstrate the need for formulating and solving the adjoint to the turbulence model equations, instead of merely relying upon the 'frozen turbulence assumption', the gain in the optimization turnaround time offered by the proposed method is quantified.

  10. Capture-recapture studies for multiple strata including non-Markovian transitions

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.; Pollock, K.H.; Hestbeck, J.B.

    1993-01-01

    We consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion. Strata may, for example, be geographic locations or physiological states. Movement of animals among strata occurs with unknown probabilities, and estimation of these unknown transition probabilities is the objective. We describe a computer routine for carrying out the analysis under a model that assumes Markovian transitions and under reduced parameter versions of this model. We also introduce models that relax the Markovian assumption and allow 'memory' to operate (i.e., allow dependence of the transition probabilities on the previous state). For these models, we suggest an analysis based on a conditional likelihood approach. Methods are illustrated with data from a large study on Canada geese (Branta canadensis) banded in three geographic regions. The assumption of Markovian transitions is rejected convincingly for these data, emphasizing the importance of the more general models that allow memory.
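The Markov-versus-memory distinction above can be made concrete in a small simulation. This is an illustrative sketch, not the goose analysis: the two strata and all transition probabilities are invented. Under a memory model, the probability of the next stratum depends on the previous stratum as well as the current one, which is exactly the dependence a test of the Markovian assumption looks for.

```python
import random
from collections import Counter

random.seed(1)

# First-order Markov model: next stratum depends only on the current one.
P_markov = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}

# Memory model: transitions also depend on the *previous* stratum, e.g.
# animals that just arrived in a stratum are more likely to leave it again.
P_memory = {("A", "A"): {"A": 0.8, "B": 0.2},
            ("B", "A"): {"A": 0.3, "B": 0.7},
            ("A", "B"): {"A": 0.6, "B": 0.4},
            ("B", "B"): {"A": 0.2, "B": 0.8}}

def draw(probs):
    return "A" if random.random() < probs["A"] else "B"

def simulate(n_steps, memory=False):
    path = ["A", "A"]
    for _ in range(n_steps):
        probs = P_memory[(path[-2], path[-1])] if memory else P_markov[path[-1]]
        path.append(draw(probs))
    return path

# Under the memory model, P(next = A | current = A) differs depending on the
# previous state -- the signature that rejects the Markovian assumption.
path = simulate(20000, memory=True)
counts = Counter(zip(path, path[1:], path[2:]))
p_next_A_after_AA = counts[("A", "A", "A")] / (counts[("A", "A", "A")] + counts[("A", "A", "B")])
p_next_A_after_BA = counts[("B", "A", "A")] / (counts[("B", "A", "A")] + counts[("B", "A", "B")])
```

Comparing these two conditional frequencies (here roughly 0.8 versus 0.3) is the empirical core of the test; the paper does this formally via a conditional likelihood.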

  11. Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-01-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.

  12. Analysis of satellite servicing cost benefits

    NASA Technical Reports Server (NTRS)

    Builteman, H. O.

    1982-01-01

    Under the auspices of NASA/JSC a methodology was developed to estimate the value of satellite servicing to the user community. Time and funding precluded the development of an exhaustive computer model; instead, the concept of Design Reference Missions was employed. In this approach, three space programs were analyzed for various levels of servicing. The programs selected fall into broad categories which include 80 to 90% of the missions planned between now and the end of the century. Of necessity, the extrapolation of the three program analyses to the user community as a whole depends on an average mission model and equivalency projections. The value of the estimated cost benefits based on this approach depends largely on how well the equivalency assumptions and the mission model match the real world. A careful definition of all assumptions permits the analysis to be extended to conditions beyond the scope of this study.

  13. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
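The percentile-bootstrap idea mentioned above fits in a few lines. This sketch (sample size, seed, and distribution are illustrative) computes a 95% confidence interval for a mean by resampling with replacement:

```python
import random

random.seed(0)

# Illustrative sample: 50 draws from a Gaussian with mean 10, sd 2.
data = [random.gauss(10.0, 2.0) for _ in range(50)]

def mean(xs):
    return sum(xs) / len(xs)

# Resample WITH replacement many times and record the statistic each time.
boot_means = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(mean(resample))

# Percentile method: the CI endpoints are quantiles of the bootstrap
# distribution -- no distributional assumption about the data is needed.
boot_means.sort()
lo, hi = boot_means[int(0.025 * 2000)], boot_means[int(0.975 * 2000)]
```

The same resampling loop works for any statistic computable from the sample (a median, a summary statistic of a point pattern, etc.), which is why the method is so broadly applicable.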

  14. U.S. Nuclear Weapons Enterprise: A Strategic Past and Unknown Future

    DTIC Science & Technology

    2012-04-25

    are left to base their planning assumptions, weapons designs and capabilities on outdated models. The likelihood of a large-scale nuclear war has...conduct any testing on nuclear weapons and must rely on computer modeling. While this may provide sufficient confidence in the current nuclear...unlikely the world will be free of nuclear weapons. 24 APPENDIX A – Acronyms ACC – Air Combat Command ACM – Advanced cruise missile CSAF

  15. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  16. The Impact of Item Position Change on Item Parameters and Common Equating Results under the 3PL Model

    ERIC Educational Resources Information Center

    Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet

    2012-01-01

    Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…

  17. Trusted computation through biologically inspired processes

    NASA Astrophysics Data System (ADS)

    Anderson, Gustave W.

    2013-05-01

    Due to supply chain threats it is no longer a reasonable assumption that traditional protections alone will provide sufficient security for enterprise systems. The proposed cognitive trust model architecture extends the state-of-the-art in enterprise anti-exploitation technologies by providing collective immunity through backup and cross-checking, proactive health monitoring and adaptive/autonomic threat response, and network resource diversity.

  18. An equivalent dissipation rate model for capturing history effects in non-premixed flames

    DOE PAGES

    Kundu, Prithwish; Echekki, Tarek; Pei, Yuanjiang; ...

    2016-11-11

    The effects of strain rate history on turbulent flames have been studied in the past decades with 1D counter flow diffusion flame (CFDF) configurations subjected to oscillating strain rates. In this work, these unsteady effects are studied for complex hydrocarbon fuel surrogates at engine relevant conditions with unsteady strain rates experienced by flamelets in a typical spray flame. Tabulated combustion models are based on a steady scalar dissipation rate (SDR) assumption and hence cannot capture these unsteady strain effects, even though they can capture the unsteady chemistry. In this work, 1D CFDF with varying strain rates are simulated using two different modeling approaches: the steady SDR assumption and the unsteady flamelet model. Comparative studies show that the history effects due to unsteady SDR are directly proportional to the temporal gradient of the SDR. A new equivalent SDR model based on the history of a flamelet is proposed. An averaging procedure is constructed such that the most recent histories are given higher weights. This equivalent SDR is then used with the steady SDR assumption in 1D flamelets. Results show a good agreement between the tabulated flamelet solution and the unsteady flamelet results. This equivalent SDR concept is further implemented and compared against 3D spray flames (Engine Combustion Network Spray A). Tabulated models based on the steady SDR assumption under-predict autoignition and flame lift-off when compared with an unsteady Representative Interactive Flamelet (RIF) model. However, the equivalent SDR model coupled with the tabulated model predicted autoignition and flame lift-off very close to those reported by the RIF model. This model is further validated for a range of injection pressures for Spray A flames. As a result, the new modeling framework now enables tabulated models with significantly lower computational cost to account for unsteady history effects.
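The weighting idea behind the equivalent SDR can be sketched abstractly. The paper's actual averaging procedure is not reproduced here; this illustration simply uses an exponential kernel with an assumed memory time `tau` to show how the most recent values of a dissipation-rate history dominate the average:

```python
import math

def equivalent_sdr(chi_history, dt, tau):
    """Weighted time-average of a SDR history chi(t); the most recent
    samples carry the highest weights (exponential kernel, assumed form)."""
    n = len(chi_history)
    weights = [math.exp(-(n - 1 - i) * dt / tau) for i in range(n)]
    return sum(w * c for w, c in zip(weights, chi_history)) / sum(weights)

# A monotonically rising SDR history: the equivalent value lands between the
# plain time-average and the most recent sample, biased toward the recent.
chi = [10.0 + i for i in range(20)]          # chi rising from 10 to 29
chi_eq = equivalent_sdr(chi, dt=1.0, tau=5.0)
```

Feeding `chi_eq` into a steady-SDR flamelet table is the essence of the approach: the history effect is folded into a single equivalent value rather than requiring an unsteady flamelet solve.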

  20. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
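The common framework described above, coupled second-order phase oscillators with both forcing and damping terms, can be sketched in a few lines. All parameters here are illustrative, not drawn from any grid dataset; the check is that a generator-load pair frequency-synchronizes with a steady phase offset carrying the power flow:

```python
import math

def simulate(P, alpha=1.0, K=5.0, dt=1e-3, steps=50000):
    """Second-order Kuramoto sketch:
    theta_i'' = P_i - alpha*theta_i' + K*sum_j sin(theta_j - theta_i)."""
    n = len(P)
    theta = [0.0] * n
    omega = [0.0] * n
    for _ in range(steps):
        acc = [P[i] - alpha * omega[i]
               + K * sum(math.sin(theta[j] - theta[i]) for j in range(n))
               for i in range(n)]
        # Semi-implicit Euler: update frequencies, then phases.
        omega = [w + a * dt for w, a in zip(omega, acc)]
        theta = [t + w * dt for t, w in zip(theta, omega)]
    return theta, omega

# One generator (P > 0) and one load (P < 0): with sufficient coupling the
# frequencies lock, and sin(phase offset) = P/K carries the power transfer.
theta, omega = simulate(P=[1.0, -1.0])
```

With `P = [1.0, -1.0]` and `K = 5.0` the locked state satisfies `sin(theta[0] - theta[1]) = 0.2`; if `K` were pushed below `|P|` no synchronized fixed point would exist, the basic loss-of-synchrony scenario these models are used to study.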

  1. A Testbed for Model Development

    NASA Astrophysics Data System (ADS)

    Berry, J. A.; Van der Tol, C.; Kornfeld, A.

    2014-12-01

    Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to "connect" with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion/experience this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and which has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar-induced chlorophyll fluorescence, OCS exchange and stomatal parameterizations at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.

  2. CFD simulation of flow through heart: a perspective review.

    PubMed

    Khalafvand, S S; Ng, E Y K; Zhong, L

    2011-01-01

    The heart is an organ which pumps blood around the body by contraction of its muscular wall. The heart couples two motions, the motion of the wall and the motion of the blood; both must be computed simultaneously, which makes biological computational fluid dynamics (CFD) difficult. The wall of the heart is not rigid, and hence proper boundary conditions are essential for CFD modelling; fluid-wall interaction is very important for realistic CFD modelling. Many assumptions made in CFD simulations of the heart keep them far from a real model. A realistic fluid-structure interaction approach models the structure by the finite element method and the fluid flow by CFD, using more realistic coupling algorithms. This type of method is very powerful for resolving the complex properties of the cardiac structure and the sensitive interaction of fluid and structure. The final goal of heart modelling is to simulate total heart function by integrating cardiac anatomy, electrical activation, mechanics, metabolism and fluid mechanics together in one computational framework.

  3. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Bacon, John B.; Matney, Mark

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine one of these theoretical assumptions. This study employs empirical and theoretical information to test the assumption of a fully random decay along the argument of latitude of the final orbit, and makes recommendations on how to improve the accuracy of this calculation in the future.

  4. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.

  5. Semianalytical computation of path lines for finite-difference models

    USGS Publications Warehouse

    Pollock, D.W.

    1988-01-01

    A semianalytical particle tracking method was developed for use with velocities generated from block-centered finite-difference ground-water flow models. Based on the assumption that each directional velocity component varies linearly within a grid cell in its own coordinate direction, the method allows an analytical expression to be obtained describing the flow path within an individual grid cell. Given the initial position of a particle anywhere in a cell, the coordinates of any other point along its path line within the cell, and the time of travel between them, can be computed directly. For steady-state systems, the exit point for a particle entering a cell at any arbitrary location can be computed in a single step. By following the particle as it moves from cell to cell, this method can be used to trace the path of a particle through any multidimensional flow field generated from a block-centered finite-difference flow model.
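Because the velocity varies linearly across a cell, v(x) = v1 + A·(x - x1), the path line has a closed-form exponential solution and the exit time follows directly. A one-dimensional sketch (assuming positive velocities and exit through the downstream face; function names are illustrative) shows the single-step computation:

```python
import math

def track_through_cell(x1, x2, v1, v2, xp):
    """Return (exit position, travel time) for a particle starting at xp,
    with face velocities v1 at x1 and v2 at x2 interpolated linearly."""
    A = (v2 - v1) / (x2 - x1)            # velocity gradient within the cell
    vp = v1 + A * (xp - x1)              # velocity at the starting position
    if abs(A) < 1e-14:                   # uniform velocity: simple linear motion
        return x2, (x2 - xp) / vp
    t_exit = math.log(v2 / vp) / A       # analytic time to reach the face x2
    return x2, t_exit

def position_at(x1, v1, A, xp, t):
    """Closed-form position along the path line at an arbitrary time t."""
    vp = v1 + A * (xp - x1)
    return x1 + (vp * math.exp(A * t) - v1) / A

# Cell from x = 0 to 1 with velocity accelerating linearly from 1.0 to 2.0:
# the particle exits at t = ln(2), earlier than the constant-velocity estimate.
x_exit, t = track_through_cell(0.0, 1.0, 1.0, 2.0, xp=0.0)
```

Chaining `track_through_cell` from cell to cell, using each exit point as the next cell's entry point, traces the full path line with no time-stepping error inside cells.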

  6. Single Cell Genomics: Approaches and Utility in Immunology

    PubMed Central

    Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A

    2017-01-01

    Single cell genomics offers powerful tools for studying lymphocytes, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population-level. Advances in computer science and single cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single cell RNA-seq data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. PMID:28094102

  7. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high resolution model. The resulting simplified models have provable bounds on error compared to the high resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work which is currently underway to analyse a cloud resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.

  8. On the assumptions underlying milestoning.

    PubMed

    Vanden-Eijnden, Eric; Venturoli, Maddalena; Ciccotti, Giovanni; Elber, Ron

    2008-11-07

    Milestoning is a procedure to compute the time evolution of complicated processes such as barrier crossing events or long diffusive transitions between predefined states. Milestoning reduces the dynamics to transition events between intermediates (the milestones) and computes the local kinetic information to describe these transitions via short molecular dynamics (MD) runs between the milestones. The procedure relies on the ability to reinitialize MD trajectories on the milestones to get the right kinetic information about the transitions. It also rests on the assumptions that the transition events between successive milestones and the time lags between these transitions are statistically independent. In this paper, we analyze the validity of these assumptions. We show that sets of optimal milestones exist, i.e., sets such that successive transitions are indeed statistically independent. The proof of this claim relies on the results of transition path theory and uses the isocommittor surfaces of the reaction as milestones. For systems in the overdamped limit, we also obtain the probability distribution to reinitialize the MD trajectories on the milestones, and we discuss why this distribution is not available in closed form for systems with inertia. We explain why the time lags between transitions are not statistically independent even for optimal milestones, but we show that working with such milestones allows one to compute mean first passage times between milestones exactly. Finally, we discuss some practical implications of our results and we compare milestoning with Markov state models in view of our findings.
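The role of intermediate surfaces can be illustrated with a toy one-dimensional model. This is only an analogy for the full milestoning machinery (positions and probabilities are invented): for a nearest-neighbour random walk, every path from a to c must pass through an intermediate milestone b, so mean first passage times (MFPTs) decompose exactly as T(a→c) = T(a→b) + T(b→c).

```python
import random

random.seed(3)

def mfpt(start, target, p_right=0.6, n_walks=4000):
    """Monte Carlo estimate of the mean first passage time from start to
    target for a biased nearest-neighbour random walk on the integers."""
    total = 0
    for _ in range(n_walks):
        x, t = start, 0
        while x != target:
            x += 1 if random.random() < p_right else -1
            t += 1
        total += t
    return total / n_walks

# The direct estimate agrees (to Monte Carlo error) with the sum of the
# milestone-to-milestone estimates -- the additivity milestoning exploits.
direct = mfpt(0, 10)
via_milestone = mfpt(0, 5) + mfpt(5, 10)
```

In higher dimensions this exactness only holds for well-chosen milestones, which is the point of the paper: isocommittor surfaces make successive transitions statistically independent and MFPTs exact.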

  9. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Computational mate choice: theory and empirical evidence.

    PubMed

    Castellano, Sergio; Cadeddu, Giorgia; Cermelli, Paolo

    2012-06-01

    The present review is based on the thesis that mate choice results from information-processing mechanisms governed by computational rules and that, to understand how females choose their mates, we should identify the sources of information and how they are used to make decisions. We describe mate choice as a three-step computational process and for each step we present theories and review empirical evidence. The first step is a perceptual process. It describes the acquisition of evidence, that is, how females use multiple cues and signals to assign an attractiveness value to prospective mates (the preference-function hypothesis). The second step is a decisional process. It describes the construction of the decision variable (DV), which integrates evidence (private information by direct assessment), priors (public information), and value (perceived utility) of prospective mates into a quantity that is used by a decision rule (DR) to produce a choice. We make the assumption that females are optimal Bayesian decision makers and we derive a formal model of the DV that can explain the effects of preference functions, mate copying, social context, and females' state and condition on the patterns of mate choice. The third step of mating decision is a deliberative process that depends on the DRs. We identify two main categories of DRs (absolute and comparative rules) and review the normative models of mate-sampling tactics associated with them. We highlight the limits of the normative approach and present a class of computational models (sequential-sampling models) that are based on the assumption that DVs accumulate noisy evidence over time until a decision threshold is reached. These models force us to rethink the dichotomy between comparative and absolute decision rules, between discrimination and recognition, and even between rational and irrational choice. Since they have a robust biological basis, we think they may represent a useful theoretical tool for behavioural ecologists interested in integrating proximate and ultimate causes of mate choice. Copyright © 2012 Elsevier B.V. All rights reserved.
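
    The sequential-sampling models described above can be illustrated with a minimal random-walk sketch (all parameter values below are hypothetical, not taken from the review): a decision variable accumulates noisy evidence about a prospective mate until it crosses an acceptance or a rejection threshold.

```python
import random

def sequential_sampling(drift, threshold=1.0, noise=0.5, dt=0.01,
                        max_steps=10_000, seed=0):
    """Accumulate noisy evidence until a decision threshold is reached.

    Returns (+1, t) for acceptance, (-1, t) for rejection, or (0, t)
    if neither boundary is hit within max_steps.
    """
    rng = random.Random(seed)
    dv = 0.0  # decision variable
    for step in range(1, max_steps + 1):
        # Drift (mean attractiveness evidence) plus Gaussian sampling noise.
        dv += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        if dv >= threshold:
            return +1, step * dt   # accept the prospective mate
        if dv <= -threshold:
            return -1, step * dt   # reject and resume sampling
    return 0, max_steps * dt

choice, rt = sequential_sampling(drift=0.8)
```

    A positive drift models an attractive mate; the same machinery yields both the choice and its decision time, which is what lets these models bridge comparative and absolute decision rules.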

  11. Adaptive Modeling of the International Space Station Electrical Power System

    NASA Technical Reports Server (NTRS)

    Thomas, Justin Ray

    2007-01-01

    Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.

  12. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

    Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points (PRPs) are proposed so that unbounded rollback may be avoided while process autonomy is maintained. Probabilistic models for analyzing these three methods under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables, were developed. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance in case PRPs are used were estimated.

  13. Structure-aware depth super-resolution using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon

    2015-03-01

    This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that pixels with similar colors likely have similar depths. This assumption might induce texture transferring from the color image into the depth map and edge-blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.

  14. Analysis of partially observed clustered data using generalized estimating equations and multiple imputation

    PubMed Central

    Aloisio, Kathryn M.; Swanson, Sonja A.; Micali, Nadia; Field, Alison; Horton, Nicholas J.

    2015-01-01

    Clustered data arise in many settings, particularly within the social and biomedical sciences. As an example, multiple-source reports are commonly collected in child and adolescent psychiatric epidemiologic studies, where researchers use various informants (e.g. parent and adolescent) to provide a holistic view of a subject's symptomatology. Fitzmaurice et al. (1995) have described estimation of multiple-source models using a standard generalized estimating equation (GEE) framework. However, these studies often have missing data due to the additional stages of consent and assent required. The usual GEE is unbiased when missingness is Missing Completely at Random (MCAR) in the sense of Little and Rubin (2002). This is a strong assumption that may not be tenable. Other options, such as weighted generalized estimating equations (WGEEs), are computationally challenging when missingness is non-monotone. Multiple imputation is an attractive method to fit incomplete data models while only requiring the less restrictive Missing at Random (MAR) assumption. Previously, estimation for partially observed clustered data was computationally challenging; however, recent developments in Stata have facilitated its use in practice. We demonstrate how to utilize multiple imputation in conjunction with a GEE to investigate the prevalence of disordered eating symptoms in adolescents as reported by parents and adolescents, as well as factors associated with concordance and prevalence. The methods are motivated by the Avon Longitudinal Study of Parents and their Children (ALSPAC), a cohort study that enrolled more than 14,000 pregnant mothers in 1991-92 and has followed the health and development of their children at regular intervals. While point estimates were fairly similar to those of the GEE under MCAR, the MAR model had smaller standard errors, while requiring less stringent assumptions regarding missingness. PMID:25642154
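
    The pooling step of multiple imputation (Rubin's rules) can be sketched as follows. The data, the simple hot-deck imputation scheme, and all parameter values are illustrative stand-ins; the abstract's actual GEE fitting is done in Stata and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 200 subjects, binary outcome, ~30% missing at random.
y = rng.binomial(1, 0.25, size=200).astype(float)
y[rng.random(200) < 0.3] = np.nan

M = 20                      # number of imputations
observed = y[~np.isnan(y)]
estimates, variances = [], []
for _ in range(M):
    y_imp = y.copy()
    n_miss = int(np.isnan(y_imp).sum())
    # Simple hot-deck imputation: draw replacements from the observed values.
    y_imp[np.isnan(y_imp)] = rng.choice(observed, size=n_miss)
    p_hat = y_imp.mean()
    estimates.append(p_hat)
    variances.append(p_hat * (1 - p_hat) / len(y_imp))

# Rubin's rules: pool point estimates, combine within/between variance.
Q_bar = np.mean(estimates)             # pooled estimate
W = np.mean(variances)                 # within-imputation variance
B = np.var(estimates, ddof=1)          # between-imputation variance
T = W + (1 + 1 / M) * B                # total variance
```

    In practice each imputed data set would be analyzed with a GEE and the resulting coefficient estimates pooled the same way.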

  15. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the 'nonlinear' mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the 'curse-of-dimensionality' via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
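
    As a toy illustration of the gradient-based Langevin MCMC component (not the authors' implementation), the sketch below runs unadjusted Langevin dynamics on a two-dimensional Gaussian posterior standing in for the projected low-dimensional KPCA feature space; the target density, step size, and chain length are all assumptions.

```python
import numpy as np

def grad_log_post(z, mean, cov_inv):
    # Gradient of a Gaussian log-density; stands in for an
    # adjoint-model-computed gradient of the true log-posterior.
    return -cov_inv @ (z - mean)

def langevin_samples(n, step=0.05, seed=1):
    """Unadjusted Langevin dynamics in a 2-D 'feature space'."""
    rng = np.random.default_rng(seed)
    mean = np.array([1.0, -2.0])
    cov_inv = np.linalg.inv(np.array([[1.0, 0.3], [0.3, 0.5]]))
    z = np.zeros(2)
    out = []
    for _ in range(n):
        noise = rng.standard_normal(2)
        # Langevin update: gradient drift plus sqrt(2*step) diffusion.
        z = z + step * grad_log_post(z, mean, cov_inv) \
              + np.sqrt(2 * step) * noise
        out.append(z.copy())
    return np.array(out)

samples = langevin_samples(20_000)
```

    After burn-in the chain samples approximately from the target; in the paper's setting each gradient evaluation would come from the adjoint of the forward model, and the chain runs in the KPCA-reduced space rather than the full parameter space.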

  16. A New Computational Methodology for Structural Dynamics Problems

    DTIC Science & Technology

    2008-04-01

    …by approximating the geometry of the midsurface of the shell (as in continuum-based finite element models), are prevented from the beginning… coordinates θ^i, such that the surface θ^3 = 0 defines the midsurface M_R(t) of the region B_R(t). The coordinate θ^3 is the measure of the distance… assumption for the shell model: "the displacement field is considered as a linear expansion of the thickness coordinate around the midsurface."

  17. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD, and a distance metric based on the galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of mock observations generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
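
    The basic ABC logic (forward-simulate, measure a distance, accept within a tolerance) can be sketched with a deliberately simple rejection sampler. The Gaussian toy model, uniform prior, and tolerance below are illustrative assumptions, not the paper's HOD setup or its population Monte Carlo refinement.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" summary statistic, generated from a hidden true parameter.
theta_true = 3.0
data_obs = rng.normal(theta_true, 1.0, size=500)
s_obs = data_obs.mean()

def distance(theta):
    """Forward-simulate the generative model and compare summaries."""
    sim = rng.normal(theta, 1.0, size=500)
    return abs(sim.mean() - s_obs)

# ABC rejection: draw from the prior, keep parameters whose simulated
# summary lies within epsilon of the observed one. No likelihood is
# ever evaluated.
prior_draws = rng.uniform(0.0, 10.0, size=20_000)
epsilon = 0.1
posterior = np.array([t for t in prior_draws if distance(t) < epsilon])
```

    The accepted draws approximate the posterior; in the paper's analysis the forward model populates haloes via the HOD and the distance combines number density, the two-point function, and group multiplicity.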

  18. Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles

    NASA Technical Reports Server (NTRS)

    Morris, S. J.

    1978-01-01

    This rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design-point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance predictions with a minimum of computer execution time. The current execution time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program is only for the combustion of JP-4, methane, or hydrogen.

  19. Neurobiological roots of language in primate audition: common computational properties.

    PubMed

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L; Rauschecker, Josef P

    2015-03-01

    Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. A dynamic subgrid scale model for Large Eddy Simulations based on the Mori-Zwanzig formalism

    NASA Astrophysics Data System (ADS)

    Parish, Eric J.; Duraisamy, Karthik

    2017-11-01

    The development of reduced models for complex multiscale problems remains one of the principal challenges in computational physics. The optimal prediction framework of Chorin et al. [1], which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived reduced models of dynamical systems. Several promising models have emerged from the optimal prediction community and have found application in molecular dynamics and turbulent flows. In this work, a new M-Z-based closure model that addresses some of the deficiencies of existing methods is developed. The model is constructed by exploiting similarities between two levels of coarse-graining via the Germano identity of fluid mechanics and by assuming that memory effects have a finite temporal support. The appeal of the proposed model, which will be referred to as the 'dynamic-MZ-τ' model, is that it is parameter-free and has a structural form imposed by the mathematics of the coarse-graining process (rather than the phenomenological assumptions made by the modeler, such as in classical subgrid scale models). To promote the applicability of M-Z models in general, two procedures are presented to compute the resulting model form, helping to bypass the tedious error-prone algebra that has proven to be a hindrance to the construction of M-Z-based models for complex dynamical systems. While the new formulation is applicable to the solution of general partial differential equations, demonstrations are presented in the context of Large Eddy Simulation closures for the Burgers equation, decaying homogeneous turbulence, and turbulent channel flow. The performance of the model and validity of the underlying assumptions are investigated in detail.

  1. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point for this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model.
    For a simplified version of this model and the corresponding numerical scheme we prove, in addition to stability, convergence and existence of solutions as a by-product. Throughout this dissertation, we provide numerical experiments, not only to validate mathematical results, but also to help the reader gain a qualitative understanding of the PDE models analyzed in this dissertation (the MNSE, Rosensweig's model, and the two-phase model). In addition, we also provide computational experiments to illustrate the potential of these simple models and their ability to capture basic phenomenological features of ferrofluids, such as the Rosensweig instability in the case of the two-phase model. In this respect, we highlight the incisive numerical experiments with the two-phase model illustrating the critical role of the demagnetizing field in reproducing physically realistic behavior of ferrofluids.

  2. Data needs for X-ray astronomy satellites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, T.

    I review the current status of atomic data for X-ray astronomy satellites. This includes some of the astrophysical issues which can be addressed, current modeling and analysis techniques, computational tools, the limitations imposed by currently available atomic data, and the validity of standard assumptions. I also discuss the future: challenges associated with future missions and goals for atomic data collection.

  3. What Information Is Necessary for Speech Categorization? Harnessing Variability in the Speech Signal by Integrating Cues Computed Relative to Expectations

    ERIC Educational Resources Information Center

    McMurray, Bob; Jongman, Allard

    2011-01-01

    Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception where the signal is variable and context dependent. This study assessed the…

  4. A New Browser-based, Ontology-driven Tool for Generating Standardized, Deep Descriptions of Geoscience Models

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.; Kelbert, A.; Rudan, S.; Stoica, M.

    2016-12-01

    Standardized metadata for models is the key to reliable and greatly simplified coupling in model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System). This model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. While having this kind of standardized metadata for each model in a repository opens up a wide range of exciting possibilities, it is difficult to collect this information, and a carefully conceived "data model" or schema is needed to store it. Automated harvesting and scraping methods can provide some useful information, but they often result in metadata that is inaccurate or incomplete, and this is not sufficient to enable the desired capabilities. In order to address this problem, we have developed a browser-based tool called the MCM Tool (Model Component Metadata), which runs on notebooks, tablets and smart phones. This tool was partially inspired by the TurboTax software, which greatly simplifies the necessary task of preparing tax documents. It allows a model developer or advanced user to provide a standardized, deep description of a computational geoscience model, including hydrologic models. Under the hood, the tool uses a new ontology for models built on the CSDMS Standard Names, expressed as a collection of RDF files (Resource Description Framework).
This ontology is based on core concepts such as variables, objects, quantities, operations, processes and assumptions. The purpose of this talk is to present details of the new ontology and to then demonstrate the MCM Tool for several hydrologic models.

  5. Fair lineups are better than biased lineups and showups, but not because they increase underlying discriminability.

    PubMed

    Smith, Andrew M; Wells, Gary L; Lindsay, R C L; Penrod, Steven D

    2017-04-01

    Receiver Operating Characteristic (ROC) analysis has recently come in vogue for assessing the underlying discriminability and the applied utility of lineup procedures. Two primary assumptions underlie recommendations that ROC analysis be used to assess the applied utility of lineup procedures: (a) ROC analysis of lineups measures underlying discriminability, and (b) the procedure that produces superior underlying discriminability produces superior applied utility. These same assumptions underlie a recently derived diagnostic-feature detection theory, a theory of discriminability, intended to explain recent patterns observed in ROC comparisons of lineups. We demonstrate, however, that these assumptions are incorrect when ROC analysis is applied to lineups. We also demonstrate that a structural phenomenon of lineups, differential filler siphoning, and not the psychological phenomenon of diagnostic-feature detection, explains why lineups are superior to showups and why fair lineups are superior to biased lineups. In the process of our proofs, we show that computational simulations have assumed, unrealistically, that all witnesses share exactly the same decision criteria. When criterial variance is included in computational models, differential filler siphoning emerges. The result proves dissociation between ROC curves and underlying discriminability: Higher ROC curves for lineups than for showups and for fair than for biased lineups despite no increase in underlying discriminability. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
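
    The filler-siphoning argument can be reproduced in a few lines of simulation: on target-absent trials, adding fillers means the innocent suspect must not only exceed the witness's decision criterion but also beat the best filler, lowering false identifications even when criterial variance across witnesses is included and underlying discriminability is unchanged. All distributional parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_witnesses = 50_000

# Criterial variance: each witness has their own decision criterion.
criteria = rng.normal(0.8, 0.4, size=n_witnesses)

# Target-absent trials: the innocent suspect's memory-strength signal is
# drawn from the "innocent" distribution N(0, 1).
suspect = rng.normal(0.0, 1.0, size=n_witnesses)

# Showup: the innocent suspect is identified whenever he exceeds
# the witness's criterion.
showup_false_ids = np.mean(suspect > criteria)

# Fair lineup: the witness picks the strongest face if it exceeds the
# criterion; fillers drawn from the same distribution "siphon" picks
# away from the innocent suspect.
fillers = rng.normal(0.0, 1.0, size=(n_witnesses, 5))
best_filler = fillers.max(axis=1)
lineup_false_ids = np.mean((suspect > criteria) & (suspect > best_filler))
```

    The innocent-suspect identification rate drops in the lineup purely because of the structural competition from fillers, with no change to the underlying signal distributions.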

  6. Detecting Unsteady Blade Row Interaction in a Francis Turbine using a Phase-Lag Boundary Condition

    NASA Astrophysics Data System (ADS)

    Wouden, Alex; Cimbala, John; Lewis, Bryan

    2013-11-01

    For CFD simulations in turbomachinery, methods are typically used to reduce the computational cost. For example, the standard periodic assumption reduces the underlying mesh to a single blade passage in axisymmetric applications. If the simulation includes only a single array of blades with a uniform inlet condition, this assumption is adequate. However, to compute the interaction between successive blade rows of differing periodicity in an unsteady simulation, the periodic assumption breaks down and may produce inaccurate results. As a viable alternative, the phase-lag boundary condition assumes that the periodicity includes a temporal component which, if considered, allows a single passage to be modeled per blade row irrespective of differing periodicity. Prominently used in compressible CFD codes for the analysis of gas turbines/compressors, the phase-lag boundary condition is adapted here to analyze the interaction between the guide vanes and rotor blades in an incompressible simulation of the 1989 GAMM Workshop Francis turbine using OpenFOAM. The implementation is based on the ``direct-storage'' method proposed in 1977 by Erdos and Alzner. The phase-lag simulation is compared with available data from the GAMM workshop as well as a full-wheel simulation. Funding provided by DOE Award number DE-EE0002667.

  7. Probabilistic Fracture Mechanics Analysis of the Orbiter's LH2 Feedline Flowliner

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J. (Technical Monitor); Hudak, Stephen J., Jr.; Huyse, Luc; Chell, Graham; Lee, Yi-Der; Riha, David S.; Thacker, Ben; McClung, Craig; Gardner, Brian; Leverant, Gerald R.; hide

    2005-01-01

    Work performed by Southwest Research Institute (SwRI) as part of an Independent Technical Assessment (ITA) for the NASA Engineering and Safety Center (NESC) is summarized. The ITA goal was to establish a flight rationale in light of a history of fatigue cracking due to flow-induced vibrations in the feedline flowliners that supply liquid hydrogen to the space shuttle main engines. Prior deterministic analyses using worst-case assumptions predicted failure in a single flight. The current work formulated statistical models for dynamic loading and cryogenic fatigue crack growth properties, instead of using worst-case assumptions. Weight function solutions for bivariant stressing were developed to determine accurate crack "driving forces". Monte Carlo simulations showed that low flowliner probabilities of failure (POF = 0.001 to 0.0001) are achievable, provided pre-flight inspections for cracks are performed with an adequate probability of detection (POD); specifically, 20/75 mils with 50%/99% POD. Measurements to confirm the assumed POD curves are recommended. Since the computed POFs are very sensitive to the cyclic loads/stresses, and the analysis of strain gage data revealed inconsistencies with the previous assumption of a single dominant vibration mode, further work to reconcile this difference is recommended. It is possible that the unaccounted-for vibrational modes in the flight spectra could increase the computed POFs.

  8. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.

  9. DEVELOPMENT AND VALIDATION OF A MULTIFIELD MODEL OF CHURN-TURBULENT GAS/LIQUID FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elena A. Tselishcheva; Steven P. Antal; Michael Z. Podowski

    The accuracy of numerical predictions for gas/liquid two-phase flows using Computational Multiphase Fluid Dynamics (CMFD) methods strongly depends on the formulation of models governing the interaction between the continuous liquid field and bubbles of different sizes. The purpose of this paper is to develop, test and validate a multifield model of adiabatic gas/liquid flows at intermediate gas concentrations (e.g., churn-turbulent flow regime), in which multiple-size bubbles are divided into a specified number of groups, each representing a prescribed range of sizes. The proposed modeling concept uses transport equations for the continuous liquid field and for each bubble field. The overall model has been implemented in the NPHASE-CMFD computer code. The results of NPHASE-CMFD simulations have been validated against the experimental data from the TOPFLOW test facility. Also, a parametric analysis of the effect of various modeling assumptions has been performed.

  10. An integrated communications demand model

    NASA Astrophysics Data System (ADS)

    Doubleday, C. F.

    1980-11-01

    A computer model of communications demand is being developed to permit dynamic simulations of the long-term evolution of demand for communications media in the U.K. to be made under alternative assumptions about social, economic and technological trends in British Telecom's business environment. The context and objectives of the project and the potential uses of the model are reviewed, and four key concepts in the demand for communications media, around which the model is being structured are discussed: (1) the generation of communications demand; (2) substitution between media; (3) technological convergence; and (4) competition. Two outline perspectives on the model itself are given.

  11. What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations

    PubMed Central

    McMurray, Bob; Jongman, Allard

    2012-01-01

    Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception, where the signal is variable and context-dependent. This study assessed the informational assumptions of several models of speech categorization, in particular, the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2880 fricative productions (Jongman, Wayland & Wong, 2000) spanning many talker- and vowel-contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values, and manipulated the information in the training set to contrast (1) models based on a small number of invariant cues; (2) models using all cues without compensation; and (3) models in which cues underwent compensation for contextual factors. Compensation was modeled by Computing Cues Relative to Expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved an accuracy similar to listeners, and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed. PMID:21417542
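
    The C-CuRE idea of coding cues relative to expectations can be sketched as subtracting the context's expected cue value (here, a talker mean) and keeping the residual; the simulated fricative data and all parameter values below are hypothetical, not the corpus used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Hypothetical data: a cue (e.g. spectral mean) depends on both the
# phoneme category and a contextual factor (the talker), plus noise.
category = rng.integers(0, 2, size=n)            # two categories
talker_effect = rng.normal(0.0, 2.0, size=5)     # 5 talkers
talker = rng.integers(0, 5, size=n)
cue = 1.0 * category + talker_effect[talker] + rng.normal(0.0, 0.5, size=n)

# C-CuRE-style compensation: encode the cue relative to the expectation
# given the context, i.e. keep the residual after removing the talker mean.
talker_means = np.array([cue[talker == t].mean() for t in range(5)])
cue_relative = cue - talker_means[talker]

def separation(x):
    # Standardized distance between the category means.
    return abs(x[category == 1].mean() - x[category == 0].mean()) / x.std()

raw_sep, comp_sep = separation(cue), separation(cue_relative)
```

    The compensated cue separates the categories better than the raw cue because talker variability has been regressed out while the fine-grained category signal is preserved.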

  12. Transport of Ice on the Surface of Iapetus

    NASA Astrophysics Data System (ADS)

    Galuba, Götz G.

    2014-11-01

    The global black-and-white dichotomy as well as the dark floors and rims of equatorial craters on the Saturnian moon Iapetus can be explained by ice migration driven by a thermal feedback [1]. All icy moons in the Jovian and Saturnian systems are, with the exception of Titan, airless bodies. Yet it is unique how these two types of surface features on Iapetus look. A physical model of the processes of absorption, sublimation and deposition was developed, and a computational model that simulates ice migration of volatiles under these circumstances was derived. The model tessellates the surface of an airless body into triangles of equal size that can each have different surface properties. These properties evolve while the model simulates a long-term development. A rate network of net migration is calculated from sublimation and redeposition under the assumptions of (a) a slowly rotating body, (b) undisturbed ballistic molecular trajectories, (c) isotropic emission, (d) a Maxwellian speed distribution, and (e) high sticking coefficients of the surfaces. Assumptions (b) to (e) are equally valid for all bigger outer solar system icy moons (except Titan). The first assumption, however, is not equally valid throughout the moons of the outer solar system. Callisto, being in many regards similar to Iapetus, still has a five times higher rotation rate, so global effects depending on slow rotation are more profound on Iapetus. The computer model is complemented by a model for local ice migration from craters. First results show that the global timescale of albedo change in our model is of the same order of magnitude as in the supporting material to [1], with a tendency towards slightly faster darkening (~2 Gyr instead of ~2.4 Gyr) compared to "Model B". The local crater darkening rate lies between the global darkening rate and the rate of the opposing brightening effect, estimated in [2] to have τ between 10 and 100 Myr.
    [1] Formation of Iapetus' Extreme Albedo Dichotomy by Exogenically Triggered Thermal Ice Migration, John R. Spencer, Tilmann Denk, Science, Vol. 327, 22 January 2010.
    [2] Iapetus: Unique Surface Properties and a Global Color Dichotomy from Cassini Imaging, T. Denk et al., Science, Vol. 327, 22 January 2010.

  13. An assessment of the impact of FIA's default assumptions on the estimates of coarse woody debris volume and biomass

    Treesearch

    Vicente J. Monleon

    2009-01-01

    Currently, Forest Inventory and Analysis estimation procedures use Smalian's formula to compute coarse woody debris (CWD) volume and assume that logs lie horizontally on the ground. In this paper, the impact of those assumptions on volume and biomass estimates is assessed using 7 years of Oregon's Phase 2 data. Estimates of log volume computed using Smalian...

  14. Agent-Based Modeling in Molecular Systems Biology.

    PubMed

    Soheilypour, Mohammad; Mofrad, Mohammad R K

    2018-07-01

    Molecular systems orchestrating the biology of the cell typically involve a complex web of interactions among various components and span a vast range of spatial and temporal scales. Computational methods have advanced our understanding of the behavior of molecular systems by enabling us to test assumptions and hypotheses, explore the effect of different parameters on the outcome, and eventually guide experiments. While several different mathematical and computational methods have been developed to study molecular systems at different spatiotemporal scales, there is still a need for methods that bridge the gap between spatially-detailed and computationally-efficient approaches. In this review, we summarize the capabilities of agent-based modeling (ABM) as an emerging molecular systems biology technique that provides researchers with a new tool in exploring the dynamics of molecular systems/pathways in health and disease. © 2018 WILEY Periodicals, Inc.

  15. Characterization of physiological networks in sleep apnea patients using artificial neural networks for Granger causality computation

    NASA Astrophysics Data System (ADS)

    Cárdenas, Jhon; Orjuela-Cañón, Alvaro D.; Cerquera, Alexander; Ravelo, Antonio

    2017-11-01

    Different studies have used Transfer Entropy (TE) and Granger Causality (GC) computation to quantify interconnection between physiological systems. These methods have disadvantages in parametrization and in the availability of analytic formulas to evaluate the significance of the results. Another inconvenience is related to the distributional assumptions of the models generated from the data. In this document, the authors present a way to measure the causality that connects the Central Nervous System (CNS) and the Cardiac System (CS) in people diagnosed with obstructive sleep apnea syndrome (OSA) before and during treatment with continuous positive airway pressure (CPAP). For this purpose, artificial neural networks were used to obtain models for GC computation, based on time series of normalized powers calculated from electrocardiography (EKG) and electroencephalography (EEG) signals recorded in polysomnography (PSG) studies.
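The study above computes GC with neural-network models; as a reduced sketch under linear-model assumptions (not the authors' implementation), one-lag Granger causality can be obtained by comparing the residual variances of two nested least-squares fits:

```python
import numpy as np

def granger_causality(x, y):
    """One-lag linear Granger causality from x to y: log ratio of the
    residual variance of y[t] ~ y[t-1] (restricted model) to that of
    y[t] ~ y[t-1] + x[t-1] (full model). Positive values indicate that
    lagged x improves the prediction of y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    target, y1, x1 = y[1:], y[:-1], x[:-1]
    ones = np.ones_like(y1)
    restricted = np.column_stack([ones, y1])
    full = np.column_stack([ones, y1, x1])
    res_r = target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return float(np.log(np.mean(res_r ** 2) / np.mean(res_f ** 2)))

# Synthetic example: x drives y with one sample of delay, not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
gc_xy = granger_causality(x, y)
gc_yx = granger_causality(y, x)
print(gc_xy, gc_yx)  # gc_xy large and positive, gc_yx near zero
```

Neural-network GC, as used in the paper, replaces the two linear fits with nonlinear regressors but keeps the same restricted-versus-full comparison.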

  16. Comparing Experiment and Computation of Hypersonic Laminar Boundary Layers with Isolated Roughness

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Iyer, Prahladh S.; Mahesh, Krishnan; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Johansen, Craig T.

    2014-01-01

    Streamwise velocity profile behavior in a hypersonic laminar boundary layer in the presence of an isolated roughness element is presented for an edge Mach number of 8.2. Two different roughness element types are considered: a 2-mm tall, 4-mm diameter cylinder, and a 2-mm radius hemisphere. Measurements of the streamwise velocity behavior using nitric oxide (NO) planar laser-induced fluorescence (PLIF) molecular tagging velocimetry (MTV) have been performed on a 20-degree wedge model. The top surface of this model acts as a flat plate and is oriented at 5 degrees with respect to the freestream flow. Computations using direct numerical simulation (DNS) of these flows have been performed and are compared to the measured velocity profiles. Particular attention is given to the characteristics of velocity profiles immediately upstream and downstream of the roughness elements. In these regions, the streamwise flow can experience strong deceleration or acceleration. An analysis in which experimentally measured MTV profile displacements are compared with DNS particle displacements is performed to determine if the assumption of constant velocity over the duration of the MTV measurement is valid. This assumption is typically made when reporting MTV-measured velocity profiles, and may result in significant errors when comparing MTV measurements to computations in regions with strong deceleration or acceleration. The DNS computations with the cylindrical roughness element presented in this paper were performed with and without air injection from a rectangular slot upstream of the cylinder. This was done to determine the extent to which gas seeding in the MTV measurements perturbs the boundary layer flowfield.

  17. A theory of the n-i-p silicon solar cell

    NASA Technical Reports Server (NTRS)

    Goradia, C.; Weinberg, I.; Baraona, C.

    1981-01-01

    A computer model has been developed, based on an analytical theory of the high base resistivity BSF n(+)(pi)p(+) or p(+)(nu)n(+) silicon solar cell. The model makes very few assumptions and accounts for nonuniform optical generation, generation and recombination in the junction space charge region, and bandgap narrowing in the heavily doped regions. The paper presents calculated results based on this model and compares them to available experimental data. Also discussed is radiation damage in high base resistivity n(+)(pi)p(+) space solar cells.

  18. Effective description of general extensions of the Standard Model: the complete tree-level dictionary

    NASA Astrophysics Data System (ADS)

    de Blas, J.; Criado, J. C.; Pérez-Victoria, M.; Santiago, J.

    2018-03-01

    We compute all the tree-level contributions to the Wilson coefficients of the dimension-six Standard-Model effective theory in ultraviolet completions with general scalar, spinor and vector field content and arbitrary interactions. No assumption about the renormalizability of the high-energy theory is made. This provides a complete ultraviolet/infrared dictionary at the classical level, which can be used to study the low-energy implications of any model of interest, and also to look for explicit completions consistent with low-energy data.

  19. Evaluating quantitative and conceptual models of speech production: how does SLAM fare?

    PubMed

    Walker, Grant M; Hickok, Gregory

    2016-04-01

    In a previous publication, we presented a new computational model called SLAM (Walker & Hickok, Psychonomic Bulletin & Review, doi:10.3758/s13423-015-0903), based on the hierarchical state feedback control (HSFC) theory (Hickok, Nature Reviews Neuroscience, 13(2), 135-145, 2012). In his commentary, Goldrick (Psychonomic Bulletin & Review, doi:10.3758/s13423-015-0946-9) claims that SLAM does not represent a theoretical advancement, because it cannot be distinguished from an alternative lexical + postlexical (LPL) theory proposed by Goldrick and Rapp (Cognition, 102(2), 219-260, 2007). First, we point out that SLAM implements a portion of a conceptual model (HSFC) that encompasses LPL. Second, we show that SLAM accounts for a lexical bias present in sound-related errors that LPL does not explain. Third, we show that SLAM's explanatory advantage is not a result of approximating the architectural or computational assumptions of LPL, since an implemented version of LPL fails to provide the same fit improvements as SLAM. Finally, we show that incorporating a mechanism that violates some core theoretical assumptions of LPL, making it more like SLAM in terms of interactivity, allows the model to capture some of the same effects as SLAM. SLAM therefore provides new modeling constraints regarding interactions among processing levels, while also elaborating on the structure of the phonological level. We view this as evidence that an integration of psycholinguistic, neuroscience, and motor control approaches to speech production is feasible and may lead to substantial new insights.

  20. A comparison of three methods for estimating the requirements for medical specialists: the case of otolaryngologists.

    PubMed Central

    Anderson, G F; Han, K C; Miller, R H; Johns, M E

    1997-01-01

    OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613

  1. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
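For reference, a minimal sequential Gauss-Seidel sweep for a 1D Poisson-type system (a sketch only, not the parallelization scheme reported above) makes explicit the dependency on freshly updated neighbor values that complicates naive parallelization:

```python
def gauss_seidel_1d(rhs, n_iter=2000):
    """Sequential Gauss-Seidel sweeps for the tridiagonal system
    2*u[i] - u[i-1] - u[i+1] = rhs[i] with zero boundary values.
    Each update reads the already-updated left neighbor u[i-1] from the
    current iteration, which is the data dependency discussed above."""
    n = len(rhs)
    u = [0.0] * (n + 2)  # interior unknowns padded with boundary zeros
    for _ in range(n_iter):
        for i in range(1, n + 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + rhs[i - 1])
    return u[1:-1]

# Solve 2u[i] - u[i-1] - u[i+1] = 1 for three unknowns (exact: 1.5, 2.0, 1.5).
print(gauss_seidel_1d([1.0, 1.0, 1.0]))
```

Parallel variants typically recover independence by reordering the unknowns (e.g., red-black coloring); the paper's contribution is an exact parallelization that avoids such assumptions.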

  2. Cost Benefit Analysis Modeling Tool for Electric vs. ICE Airport Ground Support Equipment – Development and Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James Francfort; Kevin Morrow; Dimitri Hochard

    2007-02-01

    This report documents efforts to develop a computer tool for modeling the economic payback for comparative airport ground support equipment (GSE) that are propelled by either electric motors or gasoline and diesel engines. The types of GSE modeled are pushback tractors, baggage tractors, and belt loaders. The GSE modeling tool includes an emissions module that estimates the amount of tailpipe emissions saved by replacing internal combustion engine GSE with electric GSE. This report contains modeling assumptions, methodology, a user’s manual, and modeling results. The model was developed based on the operations of two airlines at four United States airports.
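The core economic comparison in such a tool reduces to a payback calculation; a minimal undiscounted sketch is shown below, with figures that are hypothetical and not taken from the report:

```python
def simple_payback_years(capital_premium, annual_savings):
    """Undiscounted payback: years for the operating savings of electric GSE
    to recover its extra purchase cost relative to an ICE unit."""
    return capital_premium / annual_savings

# Hypothetical figures, for illustration only:
print(simple_payback_years(capital_premium=20000.0, annual_savings=5000.0))  # 4.0
```

A full model like the one described would additionally discount cash flows and add the emissions module's tailpipe savings.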

  3. The generalized van der Waals theory of pure fluids and mixtures: Annual report for September 1985 to November 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandler, S.I.

    1986-01-01

    The objective of the work is to use the generalized van der Waals theory, as derived earlier ("The Generalized van der Waals Partition Function I. Basic Theory" by S.I. Sandler, Fluid Phase Equilibria 19, 233 (1985)) to: (1) understand the molecular level assumptions inherent in current thermodynamic models; (2) use theory and computer simulation studies to test these assumptions; and (3) develop new, improved thermodynamic models based on better molecular level assumptions. From such a fundamental study, thermodynamic models will be developed that will be applicable to mixtures of molecules of widely different size and functionality, as occurs in the processing of heavy oils, coal liquids and other synthetic fuels. An important aspect of our work is to reduce our fundamental theoretical developments to engineering practice through extensive testing and evaluation with experimental data on real mixtures. During the first year of this project important progress was made in the areas specified in the original proposal, as well as several subsidiary areas identified as the work progressed. Some of this work has been written up and submitted for publication. Manuscripts acknowledging DOE support, together with a very brief description, are listed herein.

  4. Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, Part II: proofs of results.

    PubMed

    Orellana, Liliana; Rotnitzky, Andrea; Robins, James M

    2010-03-03

    In this companion article to "Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content" [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

  5. A Markov chain model for reliability growth and decay

    NASA Technical Reports Server (NTRS)

    Siegrist, K.

    1982-01-01

    A mathematical model is developed to describe a complex system undergoing a sequence of trials in which there is interaction between the internal states of the system and the outcomes of the trials. For example, the model might describe a system undergoing testing that is redesigned after each failure. The basic assumptions for the model are that the state of the system after a trial depends probabilistically only on the state before the trial and on the outcome of the trial, and that the outcome of a trial depends probabilistically only on the state of the system before the trial. It is shown that under these basic assumptions, the successive states form a Markov chain and the successive states and outcomes jointly form a Markov chain. General results are obtained for the transition probabilities, steady-state distributions, etc. A special case studied in detail describes a system that has two possible states ('repaired' and 'unrepaired') undergoing trials that have three possible outcomes ('inherent failure', 'assignable-cause failure' and 'success'). For this model, the reliability function is computed explicitly and an optimal repair policy is obtained.
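A two-state chain of this kind admits a closed-form stationary distribution, from which a long-run reliability follows directly; the transition and success probabilities below are illustrative assumptions, not values from the paper:

```python
def steady_state_2x2(p):
    """Stationary distribution of a two-state Markov chain with row-stochastic
    transition matrix p, via the closed form pi = (b, a) / (a + b)."""
    a, b = p[0][1], p[1][0]  # probabilities of leaving state 0 and state 1
    return [b / (a + b), a / (a + b)]

# Hypothetical numbers: state 0 = 'unrepaired', state 1 = 'repaired'.
P = [[0.6, 0.4],   # unrepaired -> stays unrepaired / gets repaired
     [0.1, 0.9]]   # repaired   -> regresses / stays repaired
pi = steady_state_2x2(P)
success = {0: 0.7, 1: 0.95}  # P(trial success | state), also hypothetical
reliability = pi[0] * success[0] + pi[1] * success[1]
print(pi, round(reliability, 3))
```

With these numbers the chain spends 20% of trials unrepaired and 80% repaired, giving a long-run success probability of 0.9.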

  6. Model-based Utility Functions

    NASA Astrophysics Data System (ADS)

    Hibbard, Bill

    2012-05-01

    Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that if provided with the possibility to modify their utility functions, agents will not choose to do so, under some usual assumptions.

  7. Finite element techniques in computational time series analysis of turbulent flows

    NASA Astrophysics Data System (ADS)

    Horenko, I.

    2009-04-01

    In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have in general unfeasible numerical complexity in high dimensions; other standard methods (e.g., Kalman filter, MVAR, ARCH/GARCH) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of the data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example and the results will be compared with the ones obtained by standard approaches. The importance of accounting for the mathematical assumptions used in the analysis will be pointed out in this example. Finally, applications to the analysis of meteorological and climate data will be presented.

  8. Investigating Darcy-scale assumptions by means of a multiphysics algorithm

    NASA Astrophysics Data System (ADS)

    Tomin, Pavel; Lunati, Ivan

    2016-09-01

    Multiphysics (or hybrid) algorithms, which couple Darcy and pore-scale descriptions of flow through porous media in a single numerical framework, are usually employed to decrease the computational cost of full pore-scale simulations or to increase the accuracy of pure Darcy-scale simulations when a simple macroscopic description breaks down. Despite the massive increase in available computational power, the application of these techniques remains limited to core-size problems and upscaling remains crucial for practical large-scale applications. In this context, the Hybrid Multiscale Finite Volume (HMsFV) method, which constructs the macroscopic (Darcy-scale) problem directly by numerical averaging of pore-scale flow, offers not only a flexible framework to efficiently deal with multiphysics problems, but also a tool to investigate the assumptions used to derive macroscopic models and to better understand the relationship between pore-scale quantities and the corresponding macroscale variables. Indeed, by direct comparison of the multiphysics solution with a reference pore-scale simulation, we can assess the validity of the closure assumptions inherent to the multiphysics algorithm and infer the consequences for macroscopic models at the Darcy scale. We show that the definition of the scale ratio based on the geometric properties of the porous medium is well justified only for single-phase flow, whereas in case of unstable multiphase flow the nonlinear interplay between different forces creates complex fluid patterns characterized by new spatial scales, which emerge dynamically and weaken the scale-separation assumption. In general, the multiphysics solution proves very robust even when the characteristic size of the fluid-distribution patterns is comparable with the observation length, provided that all relevant physical processes affecting the fluid distribution are considered. 
This suggests that macroscopic constitutive relationships (e.g., the relative permeability) should account for the fact that they depend not only on the saturation but also on the actual characteristics of the fluid distribution.

  9. Brain shift computation using a fully nonlinear biomechanical model.

    PubMed

    Wittek, Adam; Kikinis, Ron; Warfield, Simon K; Miller, Karol

    2005-01-01

    In the present study, a fully nonlinear (i.e. accounting for both geometric and material nonlinearities) patient-specific finite element brain model was applied to predict the deformation field within the brain during craniotomy-induced brain shift. Deformation of the brain surface was used as the displacement boundary condition. Application of the computed deformation field to align (i.e. register) the preoperative images with the intraoperative ones indicated that the model very accurately predicts the displacements of the gravity centers of the lateral ventricles and tumor even for very limited information about the brain surface deformation. These results are sufficient to suggest that nonlinear biomechanical models can be regarded as one possible way of complementing medical image processing techniques when conducting nonrigid registration. An important advantage of such models over linear ones is that they do not require the unrealistic assumptions that brain deformations are infinitesimally small and that the brain tissue stress-strain relationship is linear.

  10. Application of CEDA and ASPIC computer packages to the hairtail (Trichiurus japonicus) fishery in the East China Sea

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Liu, Qun

    2013-01-01

    Surplus-production models are widely used in fish stock assessment and fisheries management due to their simplicity and lower data demands than age-structured models such as Virtual Population Analysis. The CEDA (catch-effort data analysis) and ASPIC (a surplus-production model incorporating covariates) computer packages are data-fitting or parameter-estimation tools that have been developed to analyze catch-and-effort data using non-equilibrium surplus production models. We applied CEDA and ASPIC to the hairtail (Trichiurus japonicus) fishery in the East China Sea. Both packages produced robust results and yielded similar estimates. In CEDA, the Schaefer surplus production model with a log-normal error assumption produced results close to those of ASPIC. CEDA is sensitive to the choice of initial proportion, while ASPIC is not. However, CEDA produced higher R² values than ASPIC.
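The Schaefer dynamics underlying both packages can be sketched directly; the parameters below are hypothetical and are not estimates for the hairtail stock:

```python
def schaefer_biomass(r, K, B0, catches):
    """Project stock biomass under the Schaefer surplus-production model:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t], floored at zero."""
    B = [B0]
    for c in catches:
        B.append(max(B[-1] + r * B[-1] * (1.0 - B[-1] / K) - c, 0.0))
    return B

# Hypothetical parameters: intrinsic growth rate r, carrying capacity K.
r, K = 0.5, 1000.0
msy = r * K / 4.0  # maximum sustainable yield for the Schaefer model
traj = schaefer_biomass(r, K, B0=800.0, catches=[100.0] * 30)
print(msy, round(traj[-1], 1))  # biomass settles near its sustainable equilibrium
```

CEDA and ASPIC fit r, K and related quantities to observed catch-and-effort series under an assumed error structure (e.g., log-normal), rather than taking them as given.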

  11. Single-Cell Genomics: Approaches and Utility in Immunology.

    PubMed

    Neu, Karlynn E; Tang, Qingming; Wilson, Patrick C; Khan, Aly A

    2017-02-01

    Single-cell genomics offers powerful tools for studying immune cells, which make it possible to observe rare and intermediate cell states that cannot be resolved at the population level. Advances in computer science and single-cell sequencing technology have created a data-driven revolution in immunology. The challenge for immunologists is to harness computing and turn an avalanche of quantitative data into meaningful discovery of immunological principles, predictive models, and strategies for therapeutics. Here, we review the current literature on computational analysis of single-cell RNA-sequencing data and discuss underlying assumptions, methods, and applications in immunology, and highlight important directions for future research. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Improving estimates of subsurface gas transport in unsaturated fractured media using experimental Xe diffusion data and numerical methods

    NASA Astrophysics Data System (ADS)

    Ortiz, J. P.; Ortega, A. D.; Harp, D. R.; Boukhalfa, H.; Stauffer, P. H.

    2017-12-01

    Gas transport in unsaturated fractured media plays an important role in a variety of applications, including detection of underground nuclear explosions, transport from volatile contaminant plumes, shallow CO2 leakage from carbon sequestration sites, and methane leaks from hydraulic fracturing operations. Gas breakthrough times are highly sensitive to uncertainties associated with a variety of hydrogeologic parameters, including rock type, fracture aperture, matrix permeability, porosity, and saturation. Furthermore, a couple of simplifying assumptions are typically employed when representing fracture flow and transport. Aqueous-phase transport is typically considered insignificant compared to gas-phase transport in unsaturated fracture flow regimes, and an assumption of instantaneous dissolution/volatilization of radionuclide gas is commonly used to reduce computational expense. We conduct this research using a twofold approach that combines laboratory gas experimentation and numerical modeling to verify and refine these simplifying assumptions in our current models of gas transport. Using a gas diffusion cell, we are able to measure air pressure transmission through fractured tuff core samples while also measuring Xe gas breakthrough with a mass spectrometer. We can thus create synthetic barometric fluctuations akin to those observed in field tests and measure the associated gas flow through the fracture and matrix pore space for varying degrees of fluid saturation. We then attempt to reproduce the experimental results using numerical models in the PFLOTRAN and FEHM codes to better understand the importance of different parameters and assumptions on gas transport. Our numerical approaches represent both single-phase gas flow with immobile water, as well as full multi-phase transport, in order to test the validity of assuming immobile pore water.
Our approaches also include the ability to simulate the reaction equilibrium kinetics of dissolution/volatilization in order to identify when the assumption of instantaneous equilibrium is reasonable. These efforts will aid us in our application of such models to larger, field-scale tests and improve our ability to predict gas breakthrough times.

  13. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    NASA Astrophysics Data System (ADS)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current low-computational-cost bearing models, due to their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier-series-based compliance approximation. The resulting model has a low computational cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  14. Hydroacoustic forcing function modeling using DNS database

    NASA Technical Reports Server (NTRS)

    Zawadzki, I.; Gershfield, J. L.; Na, Y.; Wang, M.

    1996-01-01

    A wall pressure frequency spectrum model (Blake 1971) has been evaluated using databases from Direct Numerical Simulations (DNS) of a turbulent boundary layer (Na & Moin 1996). Good agreement is found for moderate to strong adverse pressure gradient flows in the absence of separation. In the separated flow region, the model underpredicts the directly calculated spectra by an order of magnitude. The discrepancy is attributed to the violation of the model assumptions in that part of the flow domain. DNS-computed coherence length scales and the normalized wall pressure cross-spectra are compared with experimental data. The DNS results are consistent with experimental observations.

  15. Review of Integrated Noise Model (INM) Equations and Processes

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph

    2003-01-01

    The FAA's Integrated Noise Model (INM) relies on the methods of the SAE AIR-1845 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and 737-800 (CFM56-7 26K).

  16. Graphical tools for network meta-analysis in STATA.

    PubMed

    Chaimani, Anna; Higgins, Julian P T; Mavridis, Dimitris; Spyridonos, Panagiota; Salanti, Georgia

    2013-01-01

    Network meta-analysis synthesizes direct and indirect evidence in a network of trials that compare multiple interventions and has the potential to rank the competing treatments according to the studied outcome. Despite its usefulness network meta-analysis is often criticized for its complexity and for being accessible only to researchers with strong statistical and computational skills. The evaluation of the underlying model assumptions, the statistical technicalities and presentation of the results in a concise and understandable way are all challenging aspects in the network meta-analysis methodology. In this paper we aim to make the methodology accessible to non-statisticians by presenting and explaining a series of graphical tools via worked examples. To this end, we provide a set of STATA routines that can be easily employed to present the evidence base, evaluate the assumptions, fit the network meta-analysis model and interpret its results.

  17. Graphical Tools for Network Meta-Analysis in STATA

    PubMed Central

    Chaimani, Anna; Higgins, Julian P. T.; Mavridis, Dimitris; Spyridonos, Panagiota; Salanti, Georgia

    2013-01-01

Network meta-analysis synthesizes direct and indirect evidence in a network of trials that compare multiple interventions and has the potential to rank the competing treatments according to the studied outcome. Despite its usefulness, network meta-analysis is often criticized for its complexity and for being accessible only to researchers with strong statistical and computational skills. The evaluation of the underlying model assumptions, the statistical technicalities and the presentation of the results in a concise and understandable way are all challenging aspects of network meta-analysis methodology. In this paper we aim to make the methodology accessible to non-statisticians by presenting and explaining a series of graphical tools via worked examples. To this end, we provide a set of STATA routines that can be easily employed to present the evidence base, evaluate the assumptions, fit the network meta-analysis model and interpret its results. PMID:24098547

  18. Cost-effective computational method for radiation heat transfer in semi-crystalline polymers

    NASA Astrophysics Data System (ADS)

    Boztepe, Sinan; Gilblas, Rémi; de Almeida, Olivier; Le Maoult, Yannick; Schmidt, Fabrice

    2018-05-01

This paper introduces a cost-effective numerical model for infrared (IR) heating of semi-crystalline polymers. For the numerical and experimental studies presented here, semi-crystalline polyethylene (PE) was used. The optical properties of PE were experimentally analyzed under varying temperature, and the results were used as input to the numerical studies. The model was built on an optically homogeneous medium assumption, while the strong variation in the thermo-optical properties of semi-crystalline PE during heating was taken into account. Thus, the change in the amount of radiative energy absorbed by the PE medium, induced by its temperature-dependent thermo-optical properties, was introduced in the model. The computational study was carried out as an iterative closed loop in which the absorbed radiation was computed using an in-house radiation heat transfer algorithm, RAYHEAT, and the computed results were transferred into the commercial software COMSOL Multiphysics to solve the transient heat transfer problem and predict the temperature field. The predicted temperature field was then used to update the thermo-optical properties of PE, which vary under heating. To analyze the accuracy of the numerical model, experimental analyses were carried out by performing IR-thermographic measurements during the heating of a PE plate. The applicability of the model in terms of computational cost, number of numerical inputs and accuracy is highlighted.

  19. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    PubMed

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.

  20. Hierarchical Bayesian spatial models for predicting multiple forest variables using waveform LiDAR, hyperspectral imagery, and large inventory datasets

    USGS Publications Warehouse

    Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.

    2013-01-01

    In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.

  1. Uncertainty Quantification and Certification Prediction of Low-Boom Supersonic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.

    2014-01-01

The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used to reduce the computational cost of propagating mixed inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground level propagation model. A methodology was also introduced to quantify the plausibility that a design will pass certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.

  2. Comparative study of solar optics for paraboloidal concentrators

    NASA Technical Reports Server (NTRS)

    Wen, L.; Poon, P.; Carley, W.; Huang, L.

    1979-01-01

    Different analytical methods for computing the flux distribution on the focal plane of a paraboloidal solar concentrator are reviewed. An analytical solution in algebraic form is also derived for an idealized model. The effects resulting from using different assumptions in the definition of optical parameters used in these methodologies are compared and discussed in detail. These parameters include solar irradiance distribution (limb darkening and circumsolar), reflector surface specular spreading, surface slope error, and concentrator pointing inaccuracy. The type of computational method selected for use depends on the maturity of the design and the data available at the time the analysis is made.

  3. Evaluating the Performance of Single and Double Moment Microphysics Schemes During a Synoptic-Scale Snowfall Event

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.

    2011-01-01

    Increases in computing resources have allowed for the utilization of high-resolution weather forecast models capable of resolving cloud microphysical and precipitation processes among varying numbers of hydrometeor categories. Several microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, ranging from single-moment predictions of precipitation content to double-moment predictions that include a prediction of particle number concentrations. Each scheme incorporates several assumptions related to the size distribution, shape, and fall speed relationships of ice crystals in order to simulate cold-cloud processes and resulting precipitation. Field campaign data offer a means of evaluating the assumptions present within each scheme. The Canadian CloudSat/CALIPSO Validation Project (C3VP) represented collaboration among the CloudSat, CALIPSO, and NASA Global Precipitation Measurement mission communities, to observe cold season precipitation processes relevant to forecast model evaluation and the eventual development of satellite retrievals of cloud properties and precipitation rates. During the C3VP campaign, widespread snowfall occurred on 22 January 2007, sampled by aircraft and surface instrumentation that provided particle size distributions, ice water content, and fall speed estimations along with traditional surface measurements of temperature and precipitation. In this study, four single-moment and two double-moment microphysics schemes were utilized to generate hypothetical WRF forecasts of the event, with C3VP data used in evaluation of their varying assumptions. Schemes that incorporate flexibility in size distribution parameters and density assumptions are shown to be preferable to those using fixed constants, and a double-moment representation of the snow category may be beneficial when representing the effects of aggregation. These results may guide forecast centers in optimal configurations of their forecast models for winter weather and identify best practices present within these various schemes.

  4. In Pursuit of Improving Airburst and Ground Damage Predictions: Recent Advances in Multi-Body Aerodynamic Testing and Computational Tools Validation

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj; Gulhan, Ali; Aftosmis, Michael; Brock, Joseph; Mathias, Donovan; Need, Dominic; Rodriguez, David; Seltner, Patrick; Stern, Eric; Wiles, Sebastian

    2017-01-01

    An airburst from a large asteroid during entry can cause significant ground damage. The damage depends on the energy and the altitude of airburst. Breakup of asteroids into fragments and their lateral spread have been observed. Modeling the underlying physics of fragmented bodies interacting at hypersonic speeds and the spread of fragments is needed for a true predictive capability. Current models use heuristic arguments and assumptions such as pancaking or point source explosive energy release at pre-determined altitude or an assumed fragmentation spread rate to predict airburst damage. A multi-year collaboration between German Aerospace Center (DLR) and NASA has been established to develop validated computational tools to address the above challenge.

  5. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

The maximum likelihood estimation (MLE) for the Gaussian graphical model, also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption reduces the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets, including ones with billions of variables, show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
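The computational motivation for the low-rank-plus-diagonal assumption can be illustrated with the Woodbury identity: inverting D + UUᵀ then requires only a small k × k dense inversion rather than an n × n one. A minimal numpy sketch (illustrative only, not the authors' COP implementation; all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 5                        # dimension n, rank k << n
U = rng.standard_normal((n, k))      # low-rank factor
d = rng.uniform(1.0, 2.0, n)         # positive diagonal entries
D_inv = np.diag(1.0 / d)

# Woodbury: (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I_k + U^T D^{-1} U)^{-1} U^T D^{-1}
core = np.linalg.inv(np.eye(k) + U.T @ D_inv @ U)   # only a k x k inversion
fast_inv = D_inv - D_inv @ U @ core @ U.T @ D_inv

direct_inv = np.linalg.inv(np.diag(d) + U @ U.T)    # brute-force n x n inversion
print(np.allclose(fast_inv, direct_inv))            # True
```

For k ≪ n this drops the inversion cost from O(n³) to roughly O(nk²), which is the saving the abstract alludes to.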

  6. Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame

    NASA Astrophysics Data System (ADS)

    Lee, Myoungkyu; Oliver, Todd; Moser, Robert

    2017-11-01

Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.

  7. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  8. Combustion of hydrogen injected into a supersonic airstream (the SHIP computer program)

    NASA Technical Reports Server (NTRS)

    Markatos, N. C.; Spalding, D. B.; Tatchell, D. G.

    1977-01-01

    The mathematical and physical basis of the SHIP computer program which embodies a finite-difference, implicit numerical procedure for the computation of hydrogen injected into a supersonic airstream at an angle ranging from normal to parallel to the airstream main flow direction is described. The physical hypotheses built into the program include: a two-equation turbulence model, and a chemical equilibrium model for the hydrogen-oxygen reaction. Typical results for equilibrium combustion are presented and exhibit qualitatively plausible behavior. The computer time required for a given case is approximately 1 minute on a CDC 7600 machine. A discussion of the assumption of parabolic flow in the injection region is given which suggests that improvement in calculation in this region could be obtained by use of the partially parabolic procedure of Pratap and Spalding. It is concluded that the technique described herein provides the basis for an efficient and reliable means for predicting the effects of hydrogen injection into supersonic airstreams and of its subsequent combustion.

  9. Computational methods for diffusion-influenced biochemical reactions.

    PubMed

    Dobrzynski, Maciej; Rodríguez, Jordi Vidal; Kaandorp, Jaap A; Blom, Joke G

    2007-08-01

    We compare stochastic computational methods accounting for space and discrete nature of reactants in biochemical systems. Implementations based on Brownian dynamics (BD) and the reaction-diffusion master equation are applied to a simplified gene expression model and to a signal transduction pathway in Escherichia coli. In the regime where the number of molecules is small and reactions are diffusion-limited predicted fluctuations in the product number vary between the methods, while the average is the same. Computational approaches at the level of the reaction-diffusion master equation compute the same fluctuations as the reference result obtained from the particle-based method if the size of the sub-volumes is comparable to the diameter of reactants. Using numerical simulations of reversible binding of a pair of molecules we argue that the disagreement in predicted fluctuations is due to different modeling of inter-arrival times between reaction events. Simulations for a more complex biological study show that the different approaches lead to different results due to modeling issues. Finally, we present the physical assumptions behind the mesoscopic models for the reaction-diffusion systems. Input files for the simulations and the source code of GMP can be found under the following address: http://www.cwi.nl/projects/sic/bioinformatics2007/
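The role of inter-arrival times between reaction events, which the abstract identifies as the source of disagreement between methods, can be illustrated with a minimal well-mixed (non-spatial) Gillespie simulation of reversible binding. This is a hedged sketch far simpler than the spatial methods compared in the paper; rate constants and counts are illustrative:

```python
import random

def gillespie_reversible(a, b, c, k_on, k_off, t_end, seed=1):
    """Direct-method SSA for A + B <-> C in a single well-mixed volume."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        r_fwd = k_on * a * b          # propensity of A + B -> C
        r_rev = k_off * c             # propensity of C -> A + B
        total = r_fwd + r_rev
        if total == 0:
            break
        t += rng.expovariate(total)   # exponential inter-arrival time
        if t > t_end:
            break
        if rng.random() * total < r_fwd:
            a, b, c = a - 1, b - 1, c + 1
        else:
            a, b, c = a + 1, b + 1, c - 1
    return a, b, c

a, b, c = gillespie_reversible(100, 100, 0, k_on=0.001, k_off=0.1, t_end=50.0)
print(a, b, c)   # molecule counts; a + c and b + c remain 100
```

Spatial methods such as the reaction-diffusion master equation replace the single exponential clock above with per-sub-volume clocks plus diffusion jumps, which is precisely where the predicted fluctuations start to differ.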

  10. A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary

    NASA Astrophysics Data System (ADS)

    Gillis, Nicolas; Luce, Robert

    2018-01-01

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
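The separability assumption can be seen in a toy setting: if the conic basis columns are known, every other column's nonnegative weights follow from a small regression. A hedged numpy sketch (the paper's contribution, robustly finding those basis columns via sparse regression with a self-dictionary, is the hard part and is not shown; in the noiseless case below plain least squares recovers the weights, which are nonnegative by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
m, r, n = 30, 3, 10
W = rng.random((m, r))              # r "pure" columns
H = rng.random((r, n))
H[:, :r] = np.eye(r)                # separability: the pure columns appear in X itself
X = W @ H                           # every column lies in the cone of X[:, :r]

basis = X[:, :r]                    # conic basis columns (indices known in this toy)
# Noiseless, full-column-rank system: least squares gives the exact weights.
H_hat = np.linalg.lstsq(basis, X, rcond=None)[0]
print(np.allclose(H_hat, H))        # True
```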

  11. A Simulation Study of Methods for Selecting Subgroup-Specific Doses in Phase I Trials

    PubMed Central

    Morita, Satoshi; Thall, Peter F.; Takeda, Kentaro

    2016-01-01

Summary: Patient heterogeneity may complicate dose-finding in phase I clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method (O’Quigley, et al., 1990) based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, based on non-hierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application, and provide computer programs for trial simulation and conduct. PMID:28111916
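As a point of reference for the non-hierarchical alternatives, a one-parameter CRM with a power model can be fit by simple grid integration. The sketch below is a toy (the skeleton, prior SD, and data are illustrative; the paper's hierarchical, subgroup-specific model is substantially richer):

```python
import numpy as np

def crm_recommend(skeleton, dose_idx, tox, n_pat, target=0.25, sigma=1.34):
    """One-parameter CRM, p_i = skeleton_i ** exp(theta), theta ~ N(0, sigma^2).
    Posterior computed on a grid; returns the dose whose posterior-mean
    toxicity probability is closest to the target."""
    theta = np.linspace(-4.0, 4.0, 2001)
    prior = np.exp(-theta**2 / (2 * sigma**2))
    p = np.asarray(skeleton)[:, None] ** np.exp(theta)[None, :]  # (doses, grid)
    like = np.ones_like(theta)
    for i, t, n in zip(dose_idx, tox, n_pat):
        like *= p[i] ** t * (1 - p[i]) ** (n - t)                # binomial kernel
    post = prior * like
    post /= post.sum()                      # uniform grid: sums suffice
    p_hat = (p * post).sum(axis=1)          # posterior-mean toxicity per dose
    return int(np.argmin(np.abs(p_hat - target))), p_hat

# 0 toxicities in 3 patients at dose index 2: escalation becomes plausible
dose, p_hat = crm_recommend([0.05, 0.12, 0.25, 0.40], dose_idx=[2], tox=[0], n_pat=[3])
print(dose, p_hat.round(3))
```

The hierarchical extension studied in the paper would, roughly, give each subgroup its own theta with a shared prior, so that sparse subgroups borrow strength from the rest.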

  12. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  13. Autonomous Driver Based on an Intelligent System of Decision-Making.

    PubMed

    Czubenko, Michał; Kowalczuk, Zdzisław; Ordys, Andrew

The paper presents and discusses a system (xDriver) which uses an Intelligent System of Decision-making (ISD) for the task of car driving. The principal subject is the implementation, simulation and testing of the ISD system described earlier in our publications (Kowalczuk and Czubenko in artificial intelligence and soft computing lecture notes in computer science, lecture notes in artificial intelligence, Springer, Berlin, 2010, 2010, In Int J Appl Math Comput Sci 21(4):621-635, 2011, In Pomiary Autom Robot 2(17):60-5, 2013) for the task of autonomous driving. The design of the whole ISD system is a result of a thorough modelling of human psychology based on an extensive literature study. Concepts somewhat similar to the ISD system can be found in the literature (Muhlestein in Cognit Comput 5(1):99-105, 2012; Wiggins in Cognit Comput 4(3):306-319, 2012), but there are no reports of a system which would model human psychology for the purpose of autonomously driving a car. The paper describes assumptions for simulation, the set of needs and reactions (characterizing the ISD system), the road model and the vehicle model, and presents some results of simulation. It shows that the xDriver system may behave on the road as a very inexperienced driver.

  14. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by the Working Group (WG02). The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.

  15. Modeling of anomalous electron mobility in Hall thrusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koo, Justin W.; Boyd, Iain D.

    Accurate modeling of the anomalous electron mobility is absolutely critical for successful simulation of Hall thrusters. In this work, existing computational models for the anomalous electron mobility are used to simulate the UM/AFRL P5 Hall thruster (a 5 kW laboratory model) in a two-dimensional axisymmetric hybrid particle-in-cell Monte Carlo collision code. Comparison to experimental results indicates that, while these computational models can be tuned to reproduce the correct thrust or discharge current, it is very difficult to match all integrated performance parameters (thrust, power, discharge current, etc.) simultaneously. Furthermore, multiple configurations of these computational models can produce reasonable integrated performance parameters. A semiempirical electron mobility profile is constructed from a combination of internal experimental data and modeling assumptions. This semiempirical electron mobility profile is used in the code and results in more accurate simulation of both the integrated performance parameters and the mean potential profile of the thruster. Results indicate that the anomalous electron mobility, while absolutely necessary in the near-field region, provides a substantially smaller contribution to the total electron mobility in the high Hall current region near the thruster exit plane.

  16. Modeling spatiotemporal covariance for magnetoencephalography or electroencephalography source analysis.

    PubMed

    Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M

    2007-01-01

    We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives closer approximation, is based on the assumption that time courses are statistically independent. The accuracy of the structural approximation is compared to an existing model, based on a single Kronecker product, using both Frobenius norm of the difference between spatiotemporal sample covariance and a model, and scatter plots. Performance of ours and previous models is compared in source analysis of a large number of single dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
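The computational convenience of a single Kronecker product, which the proposed sum-of-Kronecker model is designed to retain, comes from the identity (S ⊗ T)⁻¹ = S⁻¹ ⊗ T⁻¹: only the small spatial and temporal factors are ever inverted. A quick numpy check (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(n):
    """Random symmetric positive-definite matrix (a stand-in covariance)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

S, T = spd(4), spd(6)          # spatial (4 sensors) and temporal (6 samples) factors
C = np.kron(S, T)              # 24 x 24 spatiotemporal covariance

# Invert the 4x4 and 6x6 factors instead of the 24x24 product:
fast = np.kron(np.linalg.inv(S), np.linalg.inv(T))
print(np.allclose(fast, np.linalg.inv(C)))  # True
```

A naive sum of Kronecker products loses this property, which is why the abstract stresses that its richer model was designed to keep a computationally manageable inverse.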

  17. Protein Modelling: What Happened to the “Protein Structure Gap”?

    PubMed Central

    Schwede, Torsten

    2013-01-01

    Computational modeling and prediction of three-dimensional macromolecular structures and complexes from their sequence has been a long standing vision in structural biology as it holds the promise to bypass part of the laborious process of experimental structure solution. Over the last two decades, a paradigm shift has occurred: starting from a situation where the “structure knowledge gap” between the huge number of protein sequences and small number of known structures has hampered the widespread use of structure-based approaches in life science research, today some form of structural information – either experimental or computational – is available for the majority of amino acids encoded by common model organism genomes. Template based homology modeling techniques have matured to a point where they are now routinely used to complement experimental techniques. With the scientific focus of interest moving towards larger macromolecular complexes and dynamic networks of interactions, the integration of computational modeling methods with low-resolution experimental techniques allows studying large and complex molecular machines. Computational modeling and prediction techniques are still facing a number of challenges which hamper the more widespread use by the non-expert scientist. For example, it is often difficult to convey the underlying assumptions of a computational technique, as well as the expected accuracy and structural variability of a specific model. However, these aspects are crucial to understand the limitations of a model, and to decide which interpretations and conclusions can be supported. PMID:24010712

  18. Evaluation of Marine Corps Manpower Computer Simulation Model

    DTIC Science & Technology

    2016-12-01

    merit-based promotion selection that operates in conjunction with the “up or out” manpower system. To ensure mission accomplishment within M&RA, it is...historical data the MSM pulls from an online Oracle database. Two types of database pulls occur here: acquiring historical data of the manpower pyramid...is based on the assumption that historical manpower progression is constant, and therefore controllable. This unfortunately does not marry

  19. Techniques for the computation in demographic projections of health manpower.

    PubMed

    Horbach, L

    1979-01-01

    Some basic principles and algorithms are presented which can be used for projective calculations of medical staff on the basis of demographic data. The effects of modifications of the input data, such as health policy measures concerning training capacity, can be demonstrated by repeated calculations under varied assumptions. Such models give a variety of results and may highlight the probable future balance between health manpower supply and requirements.

  20. A financial planning model for estimating hospital debt capacity.

    PubMed Central

    Hopkins, D S; Heath, D; Levin, P J

    1982-01-01

    A computer-based financial planning model was formulated to measure the impact of a major capital improvement project on the fiscal health of Stanford University Hospital. The model had to be responsive to many variables and easy to use, so as to allow for the testing of numerous alternatives. Special efforts were made to identify the key variables that needed to be presented in the model and to include all known links between capital investment, debt, and hospital operating expenses. Growth in the number of patient days of care was singled out as a major source of uncertainty that would have profound effects on the hospital's finances. Therefore this variable was subjected to special scrutiny in terms of efforts to gauge expected demographic trends and market forces. In addition, alternative base runs of the model were made under three distinct patient-demand assumptions. Use of the model enabled planners at the Stanford University Hospital (a) to determine that a proposed modernization plan was financially feasible under a reasonable (that is, not unduly optimistic) set of assumptions and (b) to examine the major sources of risk. Other than patient demand, these sources were found to be gross revenues per patient, operating costs, and future limitations on government reimbursement programs. When the likely financial consequences of these risks were estimated, both separately and in combination, it was determined that even if two or more assumptions took a somewhat more negative turn than was expected, the hospital would be able to offset adverse consequences by a relatively minor reduction in operating costs. PMID:7111658

  1. Simulating Microbial Community Patterning Using Biocellion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Seung-Hwa; Kahan, Simon H.; Momeni, Babak

    2014-04-17

    Mathematical modeling and computer simulation are important tools for understanding complex interactions between cells and their biotic and abiotic environment: similarities and differences between modeled and observed behavior provide the basis for hypothesis formation. Momeni et al. [5] investigated pattern formation in communities of yeast strains engaging in different types of ecological interactions, comparing the predictions of mathematical modeling and simulation to actual patterns observed in wet-lab experiments. However, simulations of millions of cells in a three-dimensional community are extremely time-consuming. One simulation run in MATLAB may take a week or longer, inhibiting exploration of the vast space of parameter combinations and assumptions. Improving the speed, scale, and accuracy of such simulations facilitates hypothesis formation and expedites discovery. Biocellion is a high performance software framework for accelerating discrete agent-based simulation of biological systems with millions to trillions of cells. Simulations of comparable scale and accuracy to those taking a week of computer time using MATLAB require just hours using Biocellion on a multicore workstation. Biocellion further accelerates large scale, high resolution simulations using cluster computers by partitioning the work to run on multiple compute nodes. Biocellion targets computational biologists who have mathematical modeling backgrounds and basic C++ programming skills. This chapter describes the necessary steps to adapt the original model of Momeni et al. to the Biocellion framework as a case study.

  2. Impaired associative learning in schizophrenia: behavioral and computational studies

    PubMed Central

    Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.

    2008-01-01

    Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
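    The abstract's core idea — negatively accelerated learning governed by a learning rate and a learning capacity — can be sketched as a saturating exponential. This is an illustrative toy, not the authors' actual model; the function form and all parameter values below are assumptions.

    ```python
    import math

    def learning_curve(trial, rate, capacity):
        """Negatively accelerated learning: performance rises quickly at
        first, then saturates at an asymptotic capacity."""
        return capacity * (1.0 - math.exp(-rate * trial))

    # Hypothetical parameters: reduced learning rate and capacity for patients.
    control = [learning_curve(t, rate=0.5, capacity=1.0) for t in range(1, 9)]
    patient = [learning_curve(t, rate=0.25, capacity=0.8) for t in range(1, 9)]
    ```

    Lowering either parameter flattens and lowers the curve, mirroring the reduced learning dynamics reported for patients relative to controls.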

  3. Imaging System Model Crammed Into A 32K Microcomputer

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1986-12-01

    An imaging system model, based upon linear systems theory, has been developed for a microcomputer with less than 32K of free random access memory (RAM). The model includes diffraction effects of the optics, aberrations in the optics, and atmospheric propagation transfer functions. Variables include pupil geometry, magnitude and character of the aberrations, and strength of atmospheric turbulence ("seeing"). Both coherent and incoherent image formation can be evaluated. The techniques employed for crowding the model into a very small computer will be discussed in detail. Simplifying assumptions for the diffraction and aberration phenomena will be shown along with practical considerations in modeling the optical system. Particular emphasis is placed on avoiding inaccuracies in modeling the pupil and the associated optical transfer function knowing limits on spatial frequency content and resolution. Memory and runtime constraints are analyzed stressing the efficient use of assembly language Fourier transform routines, disk input/output, and graphic displays. The compromises between computer time, limited RAM, and scientific accuracy will be given with techniques for balancing these parameters for individual needs.
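    As a minimal illustration of the linear-systems approach described above (not the article's assembly-language implementation), the incoherent optical transfer function of a circular pupil can be computed with FFTs; the grid size and pupil radius below are arbitrary assumptions.

    ```python
    import numpy as np

    # Circular pupil on a small grid (grid size and radius are illustrative).
    N = 64
    y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
    pupil = (x**2 + y**2 <= (N//8)**2).astype(float)

    # Incoherent imaging: the PSF is |FT(pupil)|^2, and the OTF is the
    # normalized Fourier transform of the PSF (autocorrelation of the pupil).
    psf = np.abs(np.fft.fft2(pupil))**2
    otf = np.fft.fft2(psf)
    otf /= otf[0, 0]            # normalize so that OTF(0) = 1
    mtf = np.abs(otf)           # modulation transfer function
    ```

    Because the PSF is non-negative, the MTF peaks at zero spatial frequency, which is the sanity check usually applied to such computations.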

  4. Critical Computer Literacy: Computers in First-Year Composition as Topic and Environment.

    ERIC Educational Resources Information Center

    Duffelmeyer, Barbara Blakely

    2000-01-01

    Addresses how first-year students' understanding of computers is shaped by cultural assumptions about technology. Presents three meaning perspectives on technology that students expressed, based on formative experiences they have had with it. Discusses implications for how computers and composition scholars incorporate computer technology into…

  5. Isolating Curvature Effects in Computing Wall-Bounded Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2001-01-01

    The flow over the zero-pressure-gradient So-Mellor convex curved wall is simulated using the Navier-Stokes equations. An inviscid effective outer wall shape, undocumented in the experiment, is obtained by using an adjoint optimization method with the desired pressure distribution on the inner wall as the cost function. Using this wall shape with a Navier-Stokes method, the abilities of various turbulence models to simulate the effects of curvature without the complicating factor of streamwise pressure gradient can be evaluated. The one-equation Spalart-Allmaras turbulence model overpredicts eddy viscosity, and its boundary layer profiles are too full. A curvature-corrected version of this model improves results, which are sensitive to the choice of a particular constant. An explicit algebraic stress model does a reasonable job predicting this flow field. However, results can be slightly improved by modifying the assumption on anisotropy equilibrium in the model's derivation. The resulting curvature-corrected explicit algebraic stress model possesses no heuristic functions or additional constants. It lowers slightly the computed skin friction coefficient and the turbulent stress levels for this case (in better agreement with experiment), but the effect on computed velocity profiles is very small.

  6. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
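    Work stealing, the technique credited above with the 50 percent improvement, can be illustrated with a toy serial scheduler: each worker owns a double-ended queue, works from its front, and steals from the back of a victim's queue when its own is empty. This is only a sketch of the scheduling idea, not the paper's HPC runtime; names and the round-robin seeding are invented.

    ```python
    from collections import deque

    def run_work_stealing(tasks, n_workers=4):
        """Toy work-stealing scheduler (serial simulation, not real threads):
        idle workers steal from the back of a non-empty victim's deque."""
        deques = [deque() for _ in range(n_workers)]
        for i, t in enumerate(tasks):          # initial static distribution
            deques[i % n_workers].append(t)
        done = []
        while any(deques):
            for w in range(n_workers):
                if deques[w]:
                    done.append(deques[w].popleft())   # own queue: take front
                else:
                    for v in range(n_workers):         # steal: take back
                        if deques[v]:
                            done.append(deques[v].pop())
                            break
        return done

    done = run_work_stealing(list(range(10)))
    ```

    Every task is executed exactly once regardless of how unevenly the initial static distribution lands, which is the property that lets work stealing outperform static scheduling under load imbalance.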

  7. Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part II: Proofs of Results*

    PubMed Central

    Orellana, Liliana; Rotnitzky, Andrea; Robins, James M.

    2010-01-01

    In this companion article to “Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content” [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption. PMID:20405047

  8. Non-driving intersegmental knee moments in cycling computed using a model that includes three-dimensional kinematics of the shank/foot and the effect of simplifying assumptions.

    PubMed

    Gregersen, Colin S; Hull, M L

    2003-06-01

    Assessing the importance of non-driving intersegmental knee moments (i.e. varus/valgus and internal/external axial moments) on over-use knee injuries in cycling requires the use of a three-dimensional (3-D) model to compute these loads. The objectives of this study were: (1) to develop a complete, 3-D model of the lower limb to calculate the 3-D knee loads during pedaling for a sample of the competitive cycling population, and (2) to examine the effects of simplifying assumptions on the calculations of the non-driving knee moments. The non-driving knee moments were computed using a complete 3-D model that allowed three rotational degrees of freedom at the knee joint, included the 3-D inertial loads of the shank/foot, and computed knee loads in a shank-fixed coordinate system. All input data, which included the 3-D segment kinematics and the six pedal load components, were collected from the right limb of 15 competitive cyclists while pedaling at 225 W and 90 rpm. On average, the peak varus and internal axial moments of 7.8 and 1.5 N m respectively occurred during the power stroke whereas the peak valgus and external axial moments of 8.1 and 2.5 N m respectively occurred during the recovery stroke. However, the non-driving knee moments were highly variable between subjects; the coefficients of variability in the peak values ranged from 38.7% to 72.6%. When it was assumed that the inertial loads of the shank/foot for motion out of the sagittal plane were zero, the root-mean-squared difference (RMSD) in the non-driving knee moments relative to those for the complete model was 12% of the peak varus/valgus moment and 25% of the peak axial moment. When it was also assumed that the knee joint was revolute with the flexion/extension axis perpendicular to the sagittal plane, the RMSD increased to 24% of the peak varus/valgus moment and 204% of the peak axial moment. 
Thus, the 3-D orientation of the shank segment has a major effect on the computation of the non-driving knee moments, while the inertial contributions to these loads for motions out of the sagittal plane are less important.

  9. Linking normative models of natural tasks to descriptive models of neural response.

    PubMed

    Jaini, Priyank; Burge, Johannes

    2017-10-01

    Understanding how nervous systems exploit task-relevant properties of sensory stimuli to perform natural tasks is fundamental to the study of perceptual systems. However, there are few formal methods for determining which stimulus properties are most useful for a given natural task. As a consequence, it is difficult to develop principled models for how to compute task-relevant latent variables from natural signals, and it is difficult to evaluate descriptive models fit to neural response. Accuracy maximization analysis (AMA) is a recently developed Bayesian method for finding the optimal task-specific filters (receptive fields). Here, we introduce AMA-Gauss, a new faster form of AMA that incorporates the assumption that the class-conditional filter responses are Gaussian distributed. Then, we use AMA-Gauss to show that its assumptions are justified for two fundamental visual tasks: retinal speed estimation and binocular disparity estimation. Next, we show that AMA-Gauss has striking formal similarities to popular quadratic models of neural response: the energy model and the generalized quadratic model (GQM). Together, these developments deepen our understanding of why the energy model of neural response has proven useful, improve our ability to evaluate results from subunit model fits to neural data, and should help accelerate psychophysics and neuroscience research with natural stimuli.
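    Under the Gaussian class-conditional assumption named above, the log-likelihood of each latent-variable class is quadratic in the filter-response vector, which is the formal link to quadratic models of neural response. A hypothetical two-class sketch (all dimensions and parameter values are invented for illustration, not taken from AMA-Gauss):

    ```python
    import numpy as np

    def gaussian_class_posterior(r, means, covs, priors):
        """Posterior over latent-variable classes given a filter-response
        vector r, assuming Gaussian class-conditional responses; note the
        log-likelihood is a quadratic form in r."""
        logps = []
        for mu, cov, pi in zip(means, covs, priors):
            d = r - mu
            _, logdet = np.linalg.slogdet(cov)
            quad = d @ np.linalg.solve(cov, d)      # quadratic form in r
            logps.append(np.log(pi) - 0.5 * (logdet + quad))
        logps = np.array(logps)
        p = np.exp(logps - logps.max())             # stable normalization
        return p / p.sum()

    # Two hypothetical classes in a 2-D filter-response space.
    means = [np.zeros(2), np.array([2.0, 2.0])]
    covs = [np.eye(2), np.eye(2)]
    post = gaussian_class_posterior(np.array([1.9, 2.1]), means, covs, [0.5, 0.5])
    ```

    A response near the second class mean yields a posterior concentrated on that class; the decision boundary between two such Gaussians is quadratic, as in the energy model and GQM.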

  10. Fractal analysis of bone structure with applications to osteoporosis and microgravity effects

    NASA Astrophysics Data System (ADS)

    Acharya, Raj S.; LeBlanc, Adrian; Shackelford, Linda; Swarnakar, Vivek; Krishnamurthy, Ram; Hausman, E.; Lin, Chin-Shoou

    1995-05-01

    We characterize the trabecular structure with the aid of fractal dimension. We use alternating sequential filters (ASF) to generate a nonlinear pyramid for fractal dimension computations. We do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of our scheme is the rudimentary definition of self-similarity. This allows us the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, we have shown that the ASF methods outperform other existing methods for fractal dimension estimation. We have shown that the fractal dimension remains the same when computed with both the x-ray images and the MRI images of the patella. We have shown that the fractal dimension of osteoporotic subjects is lower than that of the normal subjects. In animal models, we have shown that the fractal dimension of osteoporotic rats was lower than that of the normal rats. In a 17 week bedrest study, we have shown that the subject's prebedrest fractal dimension is higher than that of the postbedrest fractal dimension.
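    The authors' alternating-sequential-filter pyramid estimator is not reproduced here, but the underlying notion of fractal dimension can be illustrated with a standard box-counting sketch (not the paper's method; grid and box sizes are arbitrary):

    ```python
    import numpy as np

    def box_count_dimension(img, sizes=(1, 2, 4, 8, 16)):
        """Estimate the fractal dimension of a binary image by box counting:
        the slope of log(count) against log(box size) gives -dimension."""
        counts = []
        h, w = img.shape
        for s in sizes:
            c = 0
            for i in range(0, h, s):
                for j in range(0, w, s):
                    # count boxes containing at least one foreground pixel
                    if img[i:i+s, j:j+s].any():
                        c += 1
            counts.append(c)
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    # Sanity check: a filled square is 2-dimensional.
    square = np.ones((64, 64), dtype=bool)
    d = box_count_dimension(square)
    ```

    Sparser, more porous structures yield lower estimates, which is the direction of the osteoporosis findings reported above.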

  11. Fractal analysis of bone structure with applications to osteoporosis and microgravity effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, R.S.; Swarnarkar, V.; Krishnamurthy, R.

    1995-12-31

    The authors characterize the trabecular structure with the aid of fractal dimension. The authors use Alternating Sequential filters to generate a nonlinear pyramid for fractal dimension computations. The authors do not make any assumptions of the statistical distributions of the underlying fractal bone structure. The only assumption of the scheme is the rudimentary definition of self similarity. This allows them the freedom of not being constrained by statistical estimation schemes. With mathematical simulations, the authors have shown that the ASF methods outperform other existing methods for fractal dimension estimation. They have shown that the fractal dimension remains the same when computed with both the X-Ray images and the MRI images of the patella. They have shown that the fractal dimension of osteoporotic subjects is lower than that of the normal subjects. In animal models, the authors have shown that the fractal dimension of osteoporotic rats was lower than that of the normal rats. In a 17 week bedrest study, they have shown that the subject's prebedrest fractal dimension is higher than that of the postbedrest fractal dimension.

  12. Dynamic Network-Based Epistasis Analysis: Boolean Examples

    PubMed Central

    Azpeitia, Eugenio; Benítez, Mariana; Padilla-Longoria, Pablo; Espinosa-Soto, Carlos; Alvarez-Buylla, Elena R.

    2011-01-01

    In this article we focus on how the hierarchical and single-path assumptions of epistasis analysis can bias the inference of gene regulatory networks. Here we emphasize the critical importance of dynamic analyses, and specifically illustrate the use of Boolean network models. Epistasis in a broad sense refers to gene interactions; however, as originally proposed by Bateson, epistasis is defined as the blocking of a particular allelic effect due to the effect of another allele at a different locus (herein, classical epistasis). Classical epistasis analysis has proven powerful and useful, allowing researchers to infer and assign directionality to gene interactions. As larger data sets become available, the analysis of classical epistasis is being complemented with computer science tools and systems biology approaches. We show that when the hierarchical and single-path assumptions are not met in classical epistasis analysis, access to relevant information and the correct inference of gene interaction topologies are hindered, and it becomes necessary to consider the temporal dynamics of gene interactions. The use of dynamical networks can overcome these limitations. We particularly focus on the use of Boolean networks that, like classical epistasis analysis, rely on logical formalisms, and hence can complement classical epistasis analysis and relax its assumptions. We develop a couple of theoretical examples and analyze them from a dynamic Boolean network model perspective. Boolean networks could help to guide additional experiments and discern among alternative regulatory schemes that would be impossible or difficult to infer without eliminating these assumptions from classical epistasis analysis. We also use examples from the literature to show how a Boolean network-based approach has resolved ambiguities and guided epistasis analysis. 
Our article complements previous accounts, not only by focusing on the implications of the hierarchical and single-path assumptions, but also by demonstrating the importance of considering temporal dynamics, specifically by introducing the usefulness of Boolean network models and reviewing some key properties of network approaches. PMID:22645556
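    A minimal Boolean-network sketch of classical epistasis (the two-gene circuit and its update rules are invented for illustration, not taken from the article): synchronous updates make the temporal dynamics explicit, which is what static epistasis analysis misses.

    ```python
    def step(state, rules):
        """One synchronous update: each gene's next value is a Boolean
        function of the current network state."""
        return {gene: rule(state) for gene, rule in rules.items()}

    # Hypothetical circuit in which A is epistatic to B: when A is on,
    # B's expression is blocked regardless of B's current state.
    rules = {
        "A": lambda s: s["A"],        # A holds its current value
        "B": lambda s: not s["A"],    # B is expressed only when A is off
    }

    state = {"A": True, "B": True}
    trajectory = [state]
    for _ in range(3):
        state = step(state, rules)
        trajectory.append(state)
    ```

    Iterating the rules drives the network to a fixed point with B off; comparing such trajectories under single- and double-mutant initial conditions is the dynamic analogue of an epistasis experiment.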

  13. Towards Full-Waveform Ambient Noise Inversion

    NASA Astrophysics Data System (ADS)

    Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.

    2016-12-01

    Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method that allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: the capability of different misfit functionals to image wave speed anomalies and source distribution, and possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
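    The starting assumption — that the inter-station noise correlation carries the inter-station travel time — can be illustrated with a toy 1-D synthetic. The delay, noise level, and trace length below are arbitrary assumptions, and the sketch omits everything that makes the full-waveform problem hard (source distribution, 3D structure, wave physics).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two "stations" record the same random wavefield, offset by a travel
    # time of `delay` samples, plus weak local noise (all values invented).
    n, delay = 1024, 5
    source = rng.standard_normal(n + delay)
    sta1 = source[delay:] + 0.1 * rng.standard_normal(n)   # far station
    sta2 = source[:n] + 0.1 * rng.standard_normal(n)       # near station

    # Cross-correlate via the frequency domain (cross-spectrum).
    spec = np.fft.rfft(sta2) * np.conj(np.fft.rfft(sta1))
    xcorr = np.fft.irfft(spec, n)

    lag = int(np.argmax(xcorr))   # the peak lag recovers the 5-sample delay
    ```

    The correlation peak sits at the inter-station travel time only because the synthetic source is uncorrelated and evenly "illuminates" both receivers; skewed source distributions bias this peak, which is exactly the trade-off the abstract sets out to invert for.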

  14. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of the matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up the calculation process, which usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on graphics processing units (GPUs).
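    The kernel being parallelized is sparse matrix-vector multiplication. A serial sketch in compressed sparse row (CSR) form, using a non-square, non-symmetric matrix, shows the case that graph partitioning (as opposed to hypergraph partitioning) cannot model; this is an illustration of the kernel only, not of the paper's CUDA implementation.

    ```python
    # Sparse matrix-vector multiply, y = A x, with A in CSR form.
    def csr_matvec(values, col_idx, row_ptr, x):
        y = []
        for i in range(len(row_ptr) - 1):
            s = 0.0
            # row i's nonzeros occupy values[row_ptr[i]:row_ptr[i+1]]
            for k in range(row_ptr[i], row_ptr[i + 1]):
                s += values[k] * x[col_idx[k]]
            y.append(s)
        return y

    # 3x4 rectangular (non-square, non-symmetric) example matrix:
    # [[1, 0, 2, 0],
    #  [0, 3, 0, 0],
    #  [4, 0, 0, 5]]
    values = [1.0, 2.0, 3.0, 4.0, 5.0]
    col_idx = [0, 2, 1, 0, 3]
    row_ptr = [0, 2, 3, 5]
    y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0, 1.0])
    # y == [3.0, 3.0, 9.0]
    ```

    In the parallel setting, a partitioner assigns rows (and the vector entries they touch) to processors; the hypergraph model captures the communication cost of shared x entries even when the matrix is rectangular.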

  15. Epidemic modeling in complex realities.

    PubMed

    Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro

    2007-04-01

    In our global world, the increasing complexity of social relations and transport infrastructures is a key factor in the spread of epidemics. In recent years, increasing computer power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have exposed the limits of homogeneous assumptions and simple spatial diffusion approaches, and stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress that integrates complex systems and network analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
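    A stochastic SIR model on an explicit contact network illustrates how network structure replaces the homogeneous-mixing assumption criticized above: infection can only travel along edges. The network, rates, and seeding below are invented for illustration.

    ```python
    import random

    def sir_on_network(adj, seed_node, beta=0.3, gamma=0.1, steps=200, rng=None):
        """Stochastic SIR epidemic on a contact network given as an
        adjacency dict; infection spreads only along edges."""
        rng = rng or random.Random(42)
        state = {v: "S" for v in adj}
        state[seed_node] = "I"
        for _ in range(steps):
            new_state = dict(state)
            for v in adj:
                if state[v] == "I":
                    for u in adj[v]:            # transmission along edges
                        if state[u] == "S" and rng.random() < beta:
                            new_state[u] = "I"
                    if rng.random() < gamma:    # recovery
                        new_state[v] = "R"
            state = new_state
            if all(s != "I" for s in state.values()):
                break
        return state

    # Hypothetical small contact network: a ring of 6 nodes.
    adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    final = sir_on_network(adj, seed_node=0)
    ```

    Swapping the ring for a heterogeneous (e.g. heavy-tailed degree) network changes outbreak size and speed dramatically, which is the central point of network epidemiology.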

  16. Single-ion hydration thermodynamics from clusters to bulk solutions: Recent insights from molecular modeling

    DOE PAGES

    Vlcek, Lukas; Chialvo, Ariel A.

    2016-01-03

    The importance of single-ion hydration thermodynamic properties for understanding the driving forces of aqueous electrolyte processes, along with the impossibility of their direct experimental measurement, has prompted a large number of experimental, theoretical, and computational studies aimed at separating the cation and anion contributions. Here we provide an overview of historical approaches based on extrathermodynamic assumptions and more recent computational studies of single-ion hydration in order to evaluate the approximations involved in these methods, and to quantify their accuracy, reliability, and limitations in the light of the latest developments. Finally, we also offer new insights into the factors that influence the accuracy of ion–water interaction models and our views on possible ways to fill this substantial knowledge gap in aqueous physical chemistry.

  17. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.

  18. Computer simulation of the heavy-duty turbo-compounded diesel cycle for studies of engine efficiency and performance

    NASA Technical Reports Server (NTRS)

    Assanis, D. N.; Ekchian, J. A.; Heywood, J. B.; Replogle, K. K.

    1984-01-01

    It was shown that reductions in heat loss at appropriate points in the diesel engine result in substantially increased exhaust enthalpy, which the turbocharged, turbocompounded diesel engine cycle is intended to recover. A computer simulation of the heavy-duty turbocharged, turbocompounded diesel engine system was undertaken. This allows the definition of the tradeoffs associated with the introduction of ceramic materials in various parts of the total engine system, and the study of system optimization. The basic assumptions and the mathematical relationships used in the simulation of the model engine are described.

  19. Evaluation of COBRA III-C and SABRE-I (wire wrap version) computational results by comparison with steady-state data from a 19-pin internally guard heated sodium cooled bundle with a six-channel central blockage (THORS bundle 3C). [LMFBR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dearing, J F; Rose, S D; Nelson, W R

    The predicted computational results of two well-known sub-channel analysis codes, COBRA-III-C and SABRE-I (wire wrap version), have been evaluated by comparison with steady-state temperature data from the THORS Facility at ORNL. Both codes give good predictions of transverse and axial temperatures when compared with wire-wrap thermocouple data. The crossflow velocity profiles predicted by the two codes are similar, which is encouraging since their wire wrap models are based on different assumptions.

  20. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Pirvics, J.

    1980-01-01

    The models and associated mathematics used within the SPHERBEAN computer program for prediction of the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings are presented. The analysis allows six degrees of freedom for each roller and three for each half of an optionally split cage. Roller skew, free lubricant, inertial loads, appropriate elastic and friction forces, and flexible outer ring are considered. Roller quasidynamic equilibrium is calculated for a bearing with up to 30 rollers per row, and distinct roller and flange geometries are specifiable. The user is referred to the material contained here for formulation assumptions and algorithm detail.

  1. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, with more specific assumptions, are computationally intensive. By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
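    Once the two-component mixture's parameters are in hand, the posterior null probability follows in closed form from Bayes' rule. In this sketch the mixture parameters are invented for illustration; in practice they would be fitted to the observed z-scores (e.g. by maximum likelihood), and the component forms here are only one plausible choice.

    ```python
    import math

    def normal_pdf(z, mu=0.0, sigma=1.0):
        return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def posterior_null_prob(z, pi0, mu1, sigma1):
        """Posterior probability that a gene is null, under the mixture
        f(z) = pi0*N(0,1) + (1-pi0)*N(mu1, sigma1^2) for its z-score."""
        f0 = pi0 * normal_pdf(z)                    # null component
        f1 = (1 - pi0) * normal_pdf(z, mu1, sigma1) # non-null component
        return f0 / (f0 + f1)

    # Illustrative (not fitted) parameters: 90% of genes null, non-null
    # z-scores spread with standard deviation 3.
    p_null_small = posterior_null_prob(0.2, pi0=0.9, mu1=0.0, sigma1=3.0)
    p_null_large = posterior_null_prob(4.0, pi0=0.9, mu1=0.0, sigma1=3.0)
    ```

    A z-score near zero is assigned a high posterior probability of being null, while a large z-score is flagged as likely differentially expressed; thresholding these posteriors gives the gene ranking.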

  2. The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Kochoska, Angela; Prša, Andrej; Horvat, Martin

    2018-01-01

    Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.

  3. Dynamics of an HIV-1 infection model with cell mediated immunity

    NASA Astrophysics Data System (ADS)

    Yu, Pei; Huang, Jianing; Jiang, Jiao

    2014-10-01

    In this paper, we study the dynamics of an improved mathematical model of the HIV-1 virus with cell mediated immunity. This new 5-dimensional model is based on the combination of a basic 3-dimensional HIV-1 model and a 4-dimensional immunity response model, and more realistically describes the dynamics among the uninfected cells, infected cells, virus, CTL response cells and CTL effector cells. Our 5-dimensional model may be reduced to the 4-dimensional model by applying a quasi-steady state assumption to the virus variable. However, it is shown in this paper that the virus population must be retained as an explicit variable in the model, and that a quasi-steady state assumption should be applied carefully, as it may miss some important dynamical behavior of the system. Detailed bifurcation analysis shows that the system has three equilibrium solutions, namely the infection-free equilibrium, the infectious equilibrium without CTL, and the infectious equilibrium with CTL, and that a series of bifurcations, including two transcritical bifurcations and one or two possible Hopf bifurcations, occur from these three equilibria as the basic reproduction number is varied. The mathematical methods applied in this paper include characteristic equations, the Routh-Hurwitz condition, the fluctuation lemma, Lyapunov functions and the computation of normal forms. Numerical simulation is also presented to demonstrate the applicability of the theoretical predictions.
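    A 5-compartment system of this kind (uninfected cells, infected cells, virus, CTL response cells, CTL effectors) can be integrated numerically as sketched below; the right-hand side and every rate constant are generic placeholders, not the exact equations or parameter values analysed in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def hiv_rhs(t, y, lam=10.0, d=0.01, beta=5e-4, a=0.5, k=100.0,
            u=3.0, c=0.005, b=0.05, q=0.2, p=0.3, h=0.1):
    """Generic 5-compartment HIV-1 sketch with a CTL response."""
    T, I, V, C, E = y
    dT = lam - d * T - beta * T * V        # uninfected CD4+ cells
    dI = beta * T * V - a * I - p * E * I  # infected cells, killed by effectors
    dV = k * I - u * V                     # free virus
    dC = c * I * C - (b + q) * C           # CTL response cells
    dE = q * C - h * E                     # CTL effector cells
    return [dT, dI, dV, dC, dE]

sol = solve_ivp(hiv_rhs, (0.0, 60.0), [1000.0, 1.0, 10.0, 1.0, 0.0],
                rtol=1e-8, atol=1e-10)
```

    With these placeholder rates the virus exhibits the familiar acute peak before settling toward an equilibrium; retaining V as an explicit state (rather than slaving it to I via a quasi-steady state) is exactly the modeling choice the abstract argues for.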

  4. Development of a Twin-spool Turbofan Engine Simulation Using the Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS)

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2014-01-01

    The Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model was accurate to within 3% of the C-MAPSS model, with the inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.

  5. Artificial Intelligence: Underlying Assumptions and Basic Objectives.

    ERIC Educational Resources Information Center

    Cercone, Nick; McCalla, Gordon

    1984-01-01

    Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…

  6. CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.

    PubMed

    Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan

    2017-06-24

    The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a large body of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the characteristics of users' sequences, including the sizes of datasets and the lengths of sequences, can take arbitrary values and are generally unknown before submission, a fact unfortunately ignored by previous work. Second, the center star strategy is suited for aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given the heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling workload computation on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any such assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn²) to O(mn). The experimental results show that CMSA achieves an up to 11× speedup and outperforms the state-of-the-art software. CMSA focuses on multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPUs is a promising approach to accelerating multiple sequence alignment, and that adopting the co-run computation model can significantly improve whole-system utilization. The source code is available at https://github.com/wangvsa/CMSA.
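    The first stage of the center star strategy can be illustrated with a naive quadratic-time implementation; the sketch below is the textbook version (O(m²n²) for m sequences of length n), not CMSA's bitmap-based acceleration, and the example sequences are invented:

```python
def edit_distance(a, b):
    """Classic O(|a||b|) dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cur[j] = min(prev[j] + 1,                # deletion
                         cur[j - 1] + 1,             # insertion
                         prev[j - 1] + (ca != cb))   # substitution / match
        prev = cur
    return prev[-1]

def pick_center(seqs):
    """First stage of the center star strategy: choose the sequence
    minimising the summed distance to all the others."""
    totals = [sum(edit_distance(s, t) for t in seqs) for s in seqs]
    return min(range(len(seqs)), key=totals.__getitem__)

seqs = ["ACGTACGT", "ACGTTCGT", "ACGAACGT", "TTTTACGT"]
center = pick_center(seqs)   # index of the chosen center sequence
```

    In the full center star method the remaining sequences are then pairwise-aligned against the chosen center; it is this selection stage that CMSA reduces to O(mn).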

  7. Conceptual Design of a New Damage Assessment Capability

    DTIC Science & Technology

    1978-03-01

    The structure of the system makes it possible to evaluate the variability and uncertainty in the damage ... assumptions. The computational efficiency of the system makes it possible to use more detailed weapons-effects models and more accurate and complete

  8. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. © 2014 SETAC.

  9. Effects of inductive bias on computational evaluations of ligand-based modeling and on drug discovery

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2008-03-01

    Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.
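    2D similarity methods of the kind contrasted above typically compare substructure fingerprints with the Tanimoto coefficient; the sketch below uses hypothetical bit sets (the molecule labels and bit indices are invented for illustration):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient on sets of 'on' fingerprint
    bits, the standard 2D similarity measure."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Hypothetical substructure fingerprints (bit indices) for three molecules
drug_like = {3, 17, 42, 99, 154}
analog    = {3, 17, 42, 99, 210}
unrelated = {5, 61, 88}

sim_close = tanimoto(drug_like, analog)      # shares 4 of 6 distinct bits
sim_far   = tanimoto(drug_like, unrelated)   # no shared substructures
```

    Because such fingerprints encode substructures rather than shapes, benchmarks built from historically designed drugs will reward exactly this kind of 2D comparison, which is the inductive-bias effect the abstract warns about.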

  10. Computer simulation of earthquakes

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1976-01-01

    Two computer simulation models of earthquakes were studied for the dependence of the pattern of events on the model assumptions and input parameters. Both models represent the seismically active region by mechanical blocks which are connected to one another and to a driving plate. The blocks slide on a friction surface. The first model employed elastic forces and time-independent friction to simulate main shock events. The size, length, and time and place of event occurrence were influenced strongly by the magnitude and degree of homogeneity in the elastic and friction parameters of the fault region. Periodically recurring similar events were frequently observed in simulations with near-homogeneous parameters along the fault, whereas seismic gaps were a common feature of simulations employing large variations in the fault parameters. The second model incorporated viscoelastic forces and time-dependent friction to account for aftershock sequences. The periods between aftershock events increased with time, and the aftershock region was confined to that which moved in the main event.
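    A block-and-driving-plate model of the first kind can be sketched as a quasi-static slider-block simulation; the coupling fraction, threshold distribution, and stress-redistribution rule below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def run_slider_model(n_blocks=50, n_events=2000, alpha=0.25, seed=1):
    """Quasi-static spring-block sketch: blocks are loaded uniformly
    by a driving plate and slip when the force on them reaches a
    static friction threshold, passing a fraction alpha of their
    force to each neighbour (cascades model main shocks)."""
    rng = np.random.default_rng(seed)
    thr = rng.uniform(1.0, 2.0, n_blocks)     # heterogeneous friction thresholds
    force = rng.uniform(0.0, 1.0, n_blocks)
    event_sizes = []
    for _ in range(n_events):
        force += (thr - force).min()          # load until the weakest block fails
        slipping = force >= thr - 1e-12
        size = 0
        while slipping.any():
            for i in np.flatnonzero(slipping):
                size += 1
                f, force[i] = force[i], 0.0   # block slips and sheds its force
                if i > 0:
                    force[i - 1] += alpha * f
                if i < n_blocks - 1:
                    force[i + 1] += alpha * f
            slipping = force >= thr           # cascade to newly loaded blocks
        event_sizes.append(size)
    return np.array(event_sizes)

sizes = run_slider_model()
```

    Running the same sketch with a narrow threshold distribution produces more regular, repeating events, mirroring the homogeneous-parameter behaviour reported in the abstract.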

  11. Empirical Estimation of Local Dielectric Constants: Toward Atomistic Design of Collagen Mimetic Peptides

    PubMed Central

    Pike, Douglas H.; Nanda, Vikas

    2017-01-01

    One of the key challenges in modeling protein energetics is the treatment of solvent interactions. This is particularly important in the case of peptides, where much of the molecule is highly exposed to solvent due to its small size. In this study, we develop an empirical method for estimating the local dielectric constant based on an additive model of atomic polarizabilities. Calculated values match reported apparent dielectric constants for a series of Staphylococcus aureus nuclease mutants. Calculated constants are used to determine screening effects on Coulombic interactions and to determine solvation contributions based on a modified Generalized Born model. These terms are incorporated into the protein modeling platform protCAD, and benchmarked on a data set of collagen mimetic peptides for which experimentally determined stabilities are available. Computing local dielectric constants using atomistic protein models and the assumption of additive atomic polarizabilities is a rapid and potentially useful method for improving electrostatics and solvation calculations that can be applied in the computational design of peptides. PMID:25784456
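    The additive-polarizability idea can be sketched as follows; the atomic polarizability values, probe radius, and the use of the Clausius-Mossotti relation are illustrative assumptions, not the calibrated model of the paper:

```python
import numpy as np

# Rough, literature-style atomic polarizabilities in cubic angstroms;
# these numbers are illustrative, not the parameters fitted in the paper.
ALPHA = {"C": 1.76, "N": 1.10, "O": 0.80, "H": 0.67, "S": 2.90}

def local_dielectric(atoms, center, radius=8.0):
    """Estimate a local dielectric constant by summing the atomic
    polarizabilities inside a probe sphere and applying the
    Clausius-Mossotti relation (a sketch of the additive model)."""
    center = np.asarray(center, dtype=float)
    volume = 4.0 / 3.0 * np.pi * radius ** 3
    total_alpha = sum(
        ALPHA[element]
        for element, xyz in atoms
        if np.linalg.norm(np.asarray(xyz, dtype=float) - center) <= radius)
    x = 4.0 * np.pi * total_alpha / (3.0 * volume)  # polarizability density term
    x = min(x, 0.999)                               # guard the singularity at x = 1
    return (1.0 + 2.0 * x) / (1.0 - x)

# Toy atom cluster: (element, coordinates in angstroms)
atoms = [("C", (0.0, 0.0, 0.0)), ("O", (1.2, 0.0, 0.0)),
         ("N", (-1.3, 0.4, 0.0)), ("H", (0.0, 1.0, 0.0))]
eps = local_dielectric(atoms, (0.0, 0.0, 0.0))
```

    An empty probe sphere returns the vacuum value of 1, and packing more polarizable atoms into the sphere raises the estimate, which is the qualitative behaviour the additive model relies on.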

  12. DNS and modeling of the interaction between turbulent premixed flames and walls

    NASA Technical Reports Server (NTRS)

    Poinsot, T. J.; Haworth, D. C.

    1992-01-01

    The interaction between turbulent premixed flames and walls is studied using a two-dimensional full Navier-Stokes solver with simple chemistry. The effects of wall distance on the local and global flame structure are investigated. Quenching distances and maximum wall heat fluxes during quenching are computed in laminar cases and are found to be comparable to experimental and analytical results. For turbulent cases, it is shown that quenching distances and maximum heat fluxes remain of the same order as for laminar flames. Based on simulation results, a 'law-of-the-wall' model is derived to describe the interaction between a turbulent premixed flame and a wall. This model is constructed to provide reasonable behavior of flame surface density near a wall under the assumption that flame-wall interaction takes place at scales smaller than the computational mesh. It can be implemented in conjunction with any of several recent flamelet models based on a modeled surface density equation, with no additional constraints on mesh size or time step.

  13. Figure-ground organization and object recognition processes: an interactive account.

    PubMed

    Vecera, S P; O'Reilly, R C

    1998-04-01

    Traditional bottom-up models of visual processing assume that figure-ground organization precedes object recognition. This assumption seems logically necessary: How can object recognition occur before a region is labeled as figure? However, some behavioral studies find that familiar regions are more likely to be labeled figure than less familiar regions, a problematic finding for bottom-up models. An interactive account is proposed in which figure-ground processes receive top-down input from object representations in a hierarchical system. A graded, interactive computational model is presented that accounts for behavioral results in which familiarity effects are found. The interactive model offers an alternative conception of visual processing to bottom-up models.

  14. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  15. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solution, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).
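    The spirit of deriving a simplified model from a full one can be shown with a toy fast-slow kinetics example; this quasi-steady-state reduction is only a sketch of the idea (CSP itself is a more general, systematic procedure):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff system A -> B -> C with k2 >> k1: the fast intermediate B
# is eliminated by a quasi-steady-state assumption, B_qss = k1*A/k2,
# leaving a reduced, non-stiff model with the same slow dynamics.
k1, k2 = 1.0, 1000.0

def full_rhs(t, y):
    A, B, C = y
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

def reduced_rhs(t, y):
    A, C = y                     # B has been eliminated analytically
    return [-k1 * A, k1 * A]

full = solve_ivp(full_rhs, (0.0, 2.0), [1.0, 0.0, 0.0],
                 method="LSODA", rtol=1e-8, atol=1e-10)
red = solve_ivp(reduced_rhs, (0.0, 2.0), [1.0, 0.0],
                rtol=1e-8, atol=1e-10)
gap = abs(full.y[2, -1] - red.y[1, -1])   # product C agrees closely
```

    The reduced model tracks the full one to within the O(k1/k2) error of the quasi-steady-state assumption while avoiding the stiffness that forces tiny time steps in the full system.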

  16. Comparative analysis of ventricular assist devices (POLVAD and POLVAD_EXT) based on multiscale FEM model.

    PubMed

    Milenin, Andrzej; Kopernik, Magdalena

    2011-01-01

    The prosthesis - pulsatory ventricular assist device (VAD) - is made of polyurethane (PU) and biocompatible TiN deposited by pulsed laser deposition (PLD) method. The paper discusses the numerical modelling and computer-aided design of such an artificial organ. Two types of VADs: POLVAD and POLVAD_EXT are investigated. The main tasks and assumptions of the computer program developed are presented. The multiscale model of VAD based on finite element method (FEM) is introduced and the analysis of the stress-strain state in macroscale for the blood chamber in both versions of VAD is shown, as well as the verification of the results calculated by applying ABAQUS, a commercial FEM code. The FEM code developed is based on a new approach to the simulation of multilayer materials obtained by using PLD method. The model in microscale includes two components, i.e., model of initial stresses (residual stress) caused by the deposition process and simulation of active loadings observed in the blood chamber of POLVAD and POLVAD_EXT. The computed distributions of stresses and strains in macro- and microscales are helpful in defining precisely the regions of blood chamber, which can be defined as the failure-source areas.

  17. The ozone depletion potentials of halocarbons: Their dependence on calculation assumptions

    NASA Technical Reports Server (NTRS)

    Karol, Igor L.; Kiselev, Andrey A.

    1994-01-01

    The concept of Ozone Depletion Potential (ODP) is widely used in the evaluation of numerous halocarbons and of their replacement effects on ozone, but the methods, assumptions and conditions used in ODP calculations have not been analyzed adequately. In this paper a model study of effects on ozone of the instantaneous releases of various amounts of CH3CCl3 and of CHF2Cl (HCFC-22) for several compositions of the background atmosphere are presented, aimed at understanding connections of ODP values with the assumptions used in their calculations. To facilitate the ODP computation in numerous versions for the long time periods after their releases, the above rather short-lived gases and the one-dimensional radiative photochemical model of the global annually averaged atmospheric layer up to 50 km height are used. The variation of released gas global mass from 1 Mt to 1 Gt leads to ODP value increase with its stabilization close to the upper bound of this range in the contemporary atmosphere. The same variations are analyzed for conditions of the CFC-free atmosphere of 1960's and for the anthropogenically loaded atmosphere in the 21st century according to the known IPCC 'business as usual' scenario. Recommendations for proper ways of ODP calculations are proposed for practically important cases.
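    For orientation, ODP is defined relative to CFC-11, and a common semi-empirical scaling combines the lifetime ratio, molar-mass ratio, and relative chlorine delivery; the reference constants and gas properties below are approximate round-number values for illustration, not outputs of the paper's model:

```python
def semi_empirical_odp(tau, molar_mass, n_cl, release,
                       tau_ref=45.0, mass_ref=137.37,
                       n_cl_ref=3, release_ref=0.47):
    """Semi-empirical ODP relative to CFC-11: stratospheric lifetime
    ratio times molar-mass ratio times relative chlorine delivery.
    The CFC-11 reference values (45-yr lifetime, 3 Cl atoms,
    fractional release 0.47) are approximate."""
    return ((tau / tau_ref) * (mass_ref / molar_mass)
            * (n_cl * release) / (n_cl_ref * release_ref))

# Approximate property values for the two gases studied in the paper
odp_ch3ccl3 = semi_empirical_odp(tau=5.0, molar_mass=133.4,
                                 n_cl=3, release=0.67)
odp_hcfc22 = semi_empirical_odp(tau=12.0, molar_mass=86.47,
                                n_cl=1, release=0.35)
```

    By construction CFC-11 itself scores exactly 1, and the short-lived gases considered in the abstract score well below 1, consistent with their use as lower-impact replacements.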

  18. Cognitive architectures, rationality, and next-generation AI: a prolegomenon

    NASA Astrophysics Data System (ADS)

    Bello, Paul; Bringsjord, Selmer; Yang, Yingrui

    2004-08-01

    Computational models that give us insight into the behavior of individuals and the organizations to which they belong will be invaluable assets in our nation's war against terrorists, and state sponsorship of terror organizations. Reasoning and decision-making are essential ingredients in the formula for human cognition, yet the two have almost exclusively been studied in isolation from one another. While we have witnessed the emergence of strong traditions in both symbolic logic and decision theory, we have yet to describe an acceptable interface between the two. Mathematical formulations of decision-making and reasoning have been developed extensively, but both fields make assumptions concerning human rationality that are untenable at best. True to this tradition, artificial intelligence has developed architectures for intelligent agents under these same assumptions. While these digital models of "cognition" tend to perform superbly, given their tremendous capacity for calculation, it is hardly reasonable to develop simulacra of human performance using these techniques. We will discuss some of the challenges associated with developing integrated cognitive systems for use in modelling, simulation, and analysis, along with some ideas for the future.

  19. Structure induction in diagnostic causal reasoning.

    PubMed

    Meder, Björn; Mayrhofer, Ralf; Waldmann, Michael R

    2014-07-01

    Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner's beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in 2 studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go "beyond the information given" and use the available data to make inferences on the (unobserved) causal rather than on the (observed) data level. (c) 2014 APA, all rights reserved.
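    The key prediction can be illustrated with a small Bayesian-model-averaging sketch over two candidate structures (a cause-effect link present vs. absent); the noisy-OR parameterisation, uniform priors, and grid integration below are an illustrative reconstruction, not the authors' exact model:

```python
import numpy as np

def structure_averaged_diagnosis(n_ce, n_cne, n_nce, n_ncne, grid=41):
    """Average P(cause | effect) over two causal structures, weighted
    by their marginal likelihoods for the observed contingency counts
    (cause/effect co-occurrences), using grid integration."""
    g = np.linspace(0.005, 0.995, grid)

    def loglik(pc, pe_c, pe_nc):
        return (n_ce * np.log(pc * pe_c)
                + n_cne * np.log(pc * (1.0 - pe_c))
                + n_nce * np.log((1.0 - pc) * pe_nc)
                + n_ncne * np.log((1.0 - pc) * (1.0 - pe_nc)))

    # S1: link from cause to effect (noisy-OR strength w, background a)
    B, W, A = np.meshgrid(g, g, g, indexing="ij")
    pe_c = W + A - W * A
    ll1 = loglik(B, pe_c, A)
    diag1_grid = (B * pe_c) / (B * pe_c + (1.0 - B) * A)

    # S0: no link (same effect base rate with or without the cause)
    B0, A0 = np.meshgrid(g, g, indexing="ij")
    ll0 = loglik(B0, A0, A0)
    diag0_grid = B0                      # P(c|e) reduces to P(c) under S0

    shift = max(ll1.max(), ll0.max())    # avoid underflow in exp
    lik1, lik0 = np.exp(ll1 - shift), np.exp(ll0 - shift)
    m1, m0 = lik1.mean(), lik0.mean()    # marginal likelihoods, uniform priors
    p_s1 = m1 / (m1 + m0)
    diag1 = (lik1 * diag1_grid).sum() / lik1.sum()
    diag0 = (lik0 * diag0_grid).sum() / lik0.sum()
    return p_s1 * diag1 + (1.0 - p_s1) * diag0

diag_contingent = structure_averaged_diagnosis(12, 4, 2, 12)
diag_flat = structure_averaged_diagnosis(5, 5, 5, 5)
```

    With contingent data the averaged judgment lies between the empirical P(c|e) and the base rate P(c), while zero-contingency data pull it to the base rate: diagnostic judgments depend on the inferred structure, not on the conditional probability alone.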

  20. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    USGS Publications Warehouse

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  1. Unstructured Grid Euler Method Assessment for Longitudinal and Lateral/Directional Aerodynamic Performance Analysis of the HSR Technology Concept Airplane at Supersonic Cruise Speed

    NASA Technical Reports Server (NTRS)

    Ghaffari, Farhad

    1999-01-01

    Unstructured grid Euler computations, performed at supersonic cruise speed, are presented for a High Speed Civil Transport (HSCT) configuration, designated as the Technology Concept Airplane (TCA) within the High Speed Research (HSR) Program. The numerical results are obtained for the complete TCA cruise configuration which includes the wing, fuselage, empennage, diverters, and flow through nacelles at M (sub infinity) = 2.4 for a range of angles-of-attack and sideslip. Although all the present computations are performed for the complete TCA configuration, appropriate assumptions derived from the fundamental supersonic aerodynamic principles have been made to extract aerodynamic predictions to complement the experimental data obtained from a 1.675%-scaled truncated (aft fuselage/empennage components removed) TCA model. The validity of the computational results, derived from the latter assumptions, are thoroughly addressed and discussed in detail. The computed surface and off-surface flow characteristics are analyzed and the pressure coefficient contours on the wing lower surface are shown to correlate reasonably well with the available pressure sensitive paint results, particularly, for the complex flow structures around the nacelles. The predicted longitudinal and lateral/directional performance characteristics for the truncated TCA configuration are shown to correlate very well with the corresponding wind-tunnel data across the examined range of angles-of-attack and sideslip. The complementary computational results for the longitudinal and lateral/directional performance characteristics for the complete TCA configuration are also presented along with the aerodynamic effects due to empennage components. Results are also presented to assess the computational method performance, solution sensitivity to grid refinement, and solution convergence characteristics.

  2. An eco-hydrologic model of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.

    2012-03-01

    Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission and their consideration alongside climatic datasets. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear eco-hydrologic model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.

  3. An ecohydrological model of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.

    2012-08-01

    Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission driven by climatic time series. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear ecohydrological model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.
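    The reduction to a simple linear model can be sketched as an anomaly regression; the synthetic monthly series below merely stand in for the South African incidence and soil-water data (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(120)   # ten years of monthly data

# Synthetic stand-ins: a seasonal soil-water cycle with noise, and
# incidence that depends linearly on soil water plus observation noise.
soil_water = (0.3 + 0.1 * np.sin(2.0 * np.pi * months / 12.0)
              + 0.05 * rng.standard_normal(120))
incidence = 50.0 + 200.0 * soil_water + 10.0 * rng.standard_normal(120)

def deseasonalize(x):
    """Remove the mean seasonal cycle (monthly climatology)."""
    clim = np.array([x[m::12].mean() for m in range(12)])
    return x - clim[months % 12]

sw_anom = deseasonalize(soil_water)
inc_anom = deseasonalize(incidence)

# The reduced linear model: incidence anomaly ~ soil-water anomaly
slope, intercept = np.polyfit(sw_anom, inc_anom, 1)
r = np.corrcoef(sw_anom, inc_anom)[0, 1]
```

    Removing the shared seasonal cycle first is what lets the regression test whether soil water explains variability *beyond* the seasonal pattern, which is the claim the abstract makes.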

  4. Moisture Risk in Unvented Attics Due to Air Leakage Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prahl, D.; Shaffer, M.

    2014-11-01

    IBACOS completed an initial analysis of moisture damage potential in an unvented attic insulated with closed-cell spray polyurethane foam. To complete this analysis, the research team collected field data, used computational fluid dynamics to quantify the airflow rates through individual airflow (crack) paths, simulated hourly flow rates through the leakage paths with CONTAM software, correlated the CONTAM flow rates with indoor humidity ratios from Building Energy Optimization software, and used Wärme und Feuchte instationär Pro two-dimensional modeling to determine the moisture content of the building materials surrounding the cracks. Given the number of simplifying assumptions and numerical models associated with this analysis, the results indicate that localized damage due to high moisture content of the roof sheathing is possible under very low airflow rates. Reducing the number of assumptions and approximations through field studies and laboratory experiments would be valuable to understand the real-world moisture damage potential in unvented attics.

  6. Cost-effectiveness of 64-slice CT angiography compared to conventional coronary angiography based on a coverage with evidence development study in Ontario.

    PubMed

    Goeree, Ron; Blackhouse, Gord; Bowen, James M; O'Reilly, Daria; Sutherland, Simone; Hopkins, Robert; Chow, Benjamin; Freeman, Michael; Provost, Yves; Dennie, Carole; Cohen, Eric; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan; Paul, Narinder; Parker, John D

    2013-10-01

    Conventional coronary angiography (CCA) is the standard diagnostic test for coronary artery disease (CAD), but multi-detector computed tomography coronary angiography (CTCA) is a non-invasive alternative. A multi-center coverage with evidence development study was undertaken and combined with an economic model to estimate the cost-effectiveness of CTCA followed by CCA vs CCA alone. Alternative assumptions were tested in patient scenario and sensitivity analyses. CCA was found to dominate CTCA; however, CTCA was relatively more cost-effective in females, with advancing age, in patients with lower pre-test probabilities of CAD, with higher sensitivity of CTCA, and with lower probability of undergoing a confirmatory CCA following a positive CTCA. Results were very sensitive to alternative patient populations and modeling assumptions. Careful consideration of patient characteristics, procedures to improve the diagnostic yield of CTCA, and selective use of CCA following CTCA will determine whether CTCA is cost-effective or dominates CCA.

  7. Calculation of wall effects of flow on a perforated wall with a code of surface singularities

    NASA Astrophysics Data System (ADS)

    Piat, J. F.

    1994-07-01

    Simplifying assumptions are inherent in the analytic method previously used for the determination of wall interferences on a model in a wind tunnel. To eliminate these assumptions, a new code based on the vortex lattice method was developed. It is suitable for processing any shape of test sections with limited areas of porous wall, the characteristic of which can be nonlinear. Calculation of wall effects in S3MA wind tunnel, whose test section is rectangular 0.78 m x 0.56 m, and fitted with two or four perforated walls, have been performed. Wall porosity factors have been adjusted to obtain the best fit between measured and computed pressure distributions on the test section walls. The code was checked by measuring nearly equal drag coefficients for a model tested in S3MA wind tunnel (after wall corrections) and in S2MA wind tunnel whose test section is seven times larger (negligible wall corrections).

  8. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    NASA Astrophysics Data System (ADS)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

    Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature has typically assumed a static risk-return relationship. However, several studies found anomalies in asset pricing modelling that captured the presence of risk instability, and dynamic models have been proposed to offer a better fit. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable, so the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the misspecification that can arise from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using the B-spline method, a nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the ability to control curvature directly. Three popular asset pricing models are investigated: CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model, and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk-instability anomaly. The result is more pronounced in Carhart's 4-factor model.
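
    The general technique described — expanding a time-varying risk coefficient in a B-spline basis and fitting by least squares — can be sketched as follows. This is an illustration of the approach under assumed knot placement and degree, not the paper's exact estimator.

```python
import numpy as np
from scipy.interpolate import BSpline

def time_varying_beta(returns, factor, n_knots=8, degree=3):
    """Estimate beta_t in returns[t] = beta_t * factor[t] + noise,
    expanding beta_t in a clamped B-spline basis over rescaled time.
    (Sketch of the general technique; knot count and degree are assumptions.)"""
    T = len(returns)
    t_grid = np.linspace(0.0, 1.0, T)
    interior = np.linspace(0.0, 1.0, n_knots)[1:-1]
    knots = np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])
    n_basis = len(knots) - degree - 1
    basis = BSpline(knots, np.eye(n_basis), degree)(t_grid)  # (T, n_basis) design
    X = basis * np.asarray(factor)[:, None]                  # regressors for beta_t * f_t
    coef, *_ = np.linalg.lstsq(X, np.asarray(returns), rcond=None)
    return basis @ coef                                      # fitted beta_t series
```

    Multi-factor versions (Fama-French, Carhart) stack one such block of regressors per factor before the least-squares solve.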

  9. Documentation of Helicopter Aeroelastic Stability Analysis Computer Program (HASTA)

    DTIC Science & Technology

    1977-12-01

    of the blade phasing assumption for which all blades of the rotor are identical and equally spaced azimuthally allows the size of the T. matrices ... to be significantly reduced by the removal of the submatrices associated with blades other than the first blade. With the use of this assumption ... different program representational options such as the type of rotor system, the type of blades, and the use of the blade phasing assumption, the

  10. Using spatiotemporal statistical models to estimate animal abundance and infer ecological dynamics from survey counts

    USGS Publications Warehouse

    Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.

    2015-01-01

    Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. 
By contrast, a model that included a population closure assumption and a scale prior on total abundance produced estimates that largely conformed to our a priori expectation. Although care must be taken to tailor models to match the study population and survey data available, we argue that hierarchical spatiotemporal statistical models represent a powerful way forward for estimating abundance and explaining variation in the distribution of dynamical populations.

  11. An exactly solvable, spatial model of mutation accumulation in cancer

    NASA Astrophysics Data System (ADS)

    Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej

    2016-12-01

    One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration rates. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.

  12. Bayesian analysis of input uncertainty in hydrological modeling: 2. Application

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.

    2006-03-01

    The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) French Broad River and (2) Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging highly dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.

  13. Modelling ADHD: A review of ADHD theories through their predictions for computational models of decision-making and reinforcement learning.

    PubMed

    Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido

    2016-12-01

    Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to understanding DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes predictions for the effect of ADHD on DM and RL as described by the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumptions of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Human judgment vs. quantitative models for the management of ecological resources.

    PubMed

    Holden, Matthew H; Ellner, Stephen P

    2016-07-01

    Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. 
When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.

  15. A cortical edge-integration model of object-based lightness computation that explains effects of spatial context and individual differences

    PubMed Central

    Rudd, Michael E.

    2014-01-01

    Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4. PMID:25202253
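
    The edge-integration computation described here — summing weighted, directed log-luminance steps along a path from the common background to the target — can be sketched in a few lines. Luminance values and gain weights below are hypothetical.

```python
import math

def edge_integrated_lightness(path_luminances, gains=None):
    """Spatially integrate directed log-luminance steps along a path of
    regions running from the common background (first entry) to the
    target (last entry). `gains` are the weights applied to each edge
    (hypothetical values; unit gains by default)."""
    steps = [math.log(b / a) for a, b in zip(path_luminances, path_luminances[1:])]
    if gains is None:
        gains = [1.0] * len(steps)
    return sum(g * s for g, s in zip(gains, steps))
```

    With unit gains the sum telescopes to log(target/background); non-unit gains model the modulation by border ownership and the observer's reflectance-vs-illumination interpretation discussed in the abstract.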

  17. FRR: fair remote retrieval of outsourced private medical records in electronic health networks.

    PubMed

    Wang, Huaqun; Wu, Qianhong; Qin, Bo; Domingo-Ferrer, Josep

    2014-08-01

    Cloud computing is emerging as the next-generation IT architecture. However, cloud computing also raises security and privacy concerns since the users have no physical control over the outsourced data. This paper focuses on fairly retrieving encrypted private medical records outsourced to remote untrusted cloud servers in the case of medical accidents and disputes. Our goal is to enable an independent committee to fairly recover the original private medical records so that medical investigation can be carried out in a convincing way. We achieve this goal with a fair remote retrieval (FRR) model in which either t investigation committee members cooperatively retrieve the original medical data or none of them can get any information on the medical records. We realize the first FRR scheme by exploiting fair multi-member key exchange and homomorphic privately verifiable tags. Based on the standard computational Diffie-Hellman (CDH) assumption, our scheme is provably secure in the random oracle model (ROM). A detailed performance analysis and experimental results show that our scheme is efficient in terms of communication and computation. Copyright © 2014 Elsevier Inc. All rights reserved.
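
    The threshold property targeted by the FRR model — any t committee members jointly recover the data, while fewer learn nothing — is the guarantee provided by Shamir secret sharing over a prime field. The sketch below illustrates that t-of-n property only; it is not the paper's actual construction, which builds on fair multi-member key exchange and homomorphic privately verifiable tags under the CDH assumption.

```python
import random

P = 2**127 - 1  # Mersenne prime used as the field modulus (illustrative choice)

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the sharing polynomial at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

    Any t shares determine the degree-(t-1) polynomial and hence its constant term; t-1 shares leave the secret information-theoretically undetermined.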

  18. Statistical equilibrium calculations for silicon in early-type model stellar atmospheres

    NASA Technical Reports Server (NTRS)

    Kamp, L. W.

    1976-01-01

    Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of our range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0-B5, luminosity classes III, IV, and V.

  19. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
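
    As a minimal illustration of Kalman-filter parameter estimation (not the paper's model, which involves retarded differential-difference equations and an implicit time delay), a scalar filter can track a slowly varying pilot gain treated as a random-walk state:

```python
import numpy as np

def kalman_estimate_gain(u, y, q=1e-4, r=0.05):
    """Scalar Kalman filter estimating a slowly varying gain k_t in
    y_t = k_t * u_t + noise, with k_t modeled as a random walk.
    (Illustrative sketch; q and r are assumed noise variances.)"""
    k, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for ut, yt in zip(u, y):
        p += q               # predict: random-walk drift inflates variance
        s = ut * p * ut + r  # innovation variance
        g = p * ut / s       # Kalman gain
        k += g * (yt - k * ut)
        p *= (1.0 - g * ut)
        estimates.append(k)
    return np.array(estimates)
```

    Divergence of the residuals yt - k*ut, as observed in the paper for one measurement type, is the standard symptom of model assumption errors in such filters.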

  20. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008

  1. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
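
    Each component of the proposed series of independent contingency-table chi-square tests can be run in the usual way; the sketch below uses hypothetical capture-history counts and standard scipy, not the authors' program.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts: rows = previously marked vs. newly marked
# animals, columns = recaptured vs. not recaptured next period.
table = np.array([[30, 70],
                  [45, 55]])
chi2, pvalue, dof, expected = chi2_contingency(table)
```

    Under the Jolly-Seber assumptions the two classifications should be independent; a small p-value in any component table flags a departure such as heterogeneous capture probability.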

  2. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  3. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    NASA Astrophysics Data System (ADS)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  4. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multilayer Calorimeters

    NASA Astrophysics Data System (ADS)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theoretical modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speedup factors of up to 100 000 × . This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.

  5. Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm

    DOE PAGES

    Yang, Shang-Te; Ling, Hao

    2013-01-01

    An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.

  6. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided, along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented, based on utilization of a central experiment computer with optional auxiliary equipment. The groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented and analysed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  7. An immersed boundary method for modeling a dirty geometry data

    NASA Astrophysics Data System (ADS)

    Onishi, Keiji; Tsubokura, Makoto

    2017-11-01

    We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by dispersing the momentum via an axial linear projection and by an approximate-domain assumption that satisfies mass conservation around the cells containing the wall. The methodology has been verified against analytical theory and wind tunnel experiment data. Next, we simulate flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. This methodology shows promise as a way to obtain quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.

  8. Computer Applications in Teaching and Learning.

    ERIC Educational Resources Information Center

    Halley, Fred S.; And Others

    Some examples of the use of computers in teaching and learning are examination generation, automatic exam grading, student tracking, problem generation, computational examination generators, program packages, simulation, and programming skills for problem solving. These applications are non-trivial and do fulfill the basic assumptions necessary…

  9. The Status of Ubiquitous Computing.

    ERIC Educational Resources Information Center

    Brown, David G.; Petitto, Karen R.

    2003-01-01

    Explains the prevalence and rationale of ubiquitous computing on college campuses--teaching with the assumption or expectation that all faculty and students have access to the Internet--and offers lessons learned by pioneering institutions. Lessons learned involve planning, technology, implementation and management, adoption of computer-enhanced…

  10. CFD Analysis of Hypersonic Flowfields With Surface Thermochemistry and Ablation

    NASA Technical Reports Server (NTRS)

    Henline, W. D.

    1997-01-01

    In the past forty years much progress has been made in computational methods applied to the solution of problems in spacecraft hypervelocity flow and heat transfer. Although the basic thermochemical and physical modeling techniques have changed little in this time, several orders of magnitude increase in the speed of numerically solving the Navier-Stokes and associated energy equations have been achieved. The extent to which this computational power can be applied to the design of spacecraft heat shields depends on the proper coupling of the external flow equations to the boundary conditions and governing equations representing the thermal protection system's in-depth conduction, pyrolysis, and surface ablation phenomena. A discussion of the techniques used to do this in past problems, as well as the current state of the art, is provided. Specific examples, including past missions such as Galileo, together with the more recent case studies of the ESA/Rosetta Sample Comet Return, Mars Pathfinder, and X-33, will be discussed. Modeling assumptions, design approach, and computational methods and results are presented.

  11. Dynamic Mesh CFD Simulations of Orion Parachute Pendulum Motion During Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Halstrom, Logan D.; Schwing, Alan M.; Robinson, Stephen K.

    2016-01-01

    This paper demonstrates the use of computational fluid dynamics to study the effects of the pendulum motion of NASA's Orion Multi-Purpose Crew Vehicle parachute system on the stability of the vehicle's atmospheric entry and descent. Significant computational fluid dynamics testing has already been performed at NASA's Johnson Space Center, but this study sought to investigate the effect of bulk motion of the parachute, such as pitching, on the induced aerodynamic forces. Simulations were performed with a moving grid geometry oscillating according to the parameters observed in flight tests. As with the previous simulations, the OVERFLOW computational fluid dynamics tool is used with the assumption of rigid, non-permeable geometry. Comparison to parachute wind tunnel tests is included for a preliminary validation of the dynamic mesh model. Results show qualitative differences in the flow fields of the static and dynamic simulations and quantitative differences in the induced aerodynamic forces, suggesting that dynamic mesh modeling of the parachute pendulum motion may uncover additional dynamic effects.

  12. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    PubMed

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy or diseased musculoskeletal systems. Computational models are used to analyze such musculoskeletal systems. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.
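    The key property the abstract attributes to B-splines is that the surrogate stays smoothly differentiable, so gradient-based optimizers need no extra numerical approximation. A minimal sketch of the underlying basis evaluation (the Cox-de Boor recursion; the knot vector and degree here are illustrative, not values from the paper):

```python
def bspline_basis(i, p, t, knots):
    """Value of the i-th B-spline basis function of degree p at parameter t,
    via the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        left = (t - knots[i]) / denom * bspline_basis(i, p - 1, t, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        right = (knots[i + p + 1] - t) / denom * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# On a uniform knot vector the cubic (p = 3) bases form a partition of unity
# in the interior, a standard sanity check for a correct implementation.
knots = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
total = sum(bspline_basis(i, 3, 4.5, knots) for i in range(len(knots) - 4))
```

    A surrogate built from such bases inherits their smoothness, which is what makes exact gradients available to the optimizer.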

  13. UFO - The Universal FEYNRULES Output

    NASA Astrophysics Data System (ADS)

    Degrande, Céline; Duhr, Claude; Fuks, Benjamin; Grellscheid, David; Mattelaer, Olivier; Reiter, Thomas

    2012-06-01

    We present a new model format for automatized matrix-element generators, the so-called Universal FEYNRULES Output (UFO). The format is universal in the sense that it features compatibility with more than one single generator and is designed to be flexible, modular and agnostic of any assumption such as the number of particles or the color and Lorentz structures appearing in the interaction vertices. Unlike other model formats where text files need to be parsed, the information on the model is encoded into a PYTHON module that can easily be linked to other computer codes. We then describe an interface for the MATHEMATICA package FEYNRULES that allows for an automatic output of models in the UFO format.

  14. Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition (L)

    NASA Astrophysics Data System (ADS)

    Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis

    2003-12-01

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on ``real-life'' speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.

  15. Design Mining Interacting Wind Turbines.

    PubMed

    Preen, Richard J; Bull, Larry

    2016-01-01

    An initial study of surrogate-assisted evolutionary algorithms used to design vertical-axis wind turbines, wherein candidate prototypes are evaluated under fan-generated wind conditions after being physically instantiated by a 3D printer, has recently been presented. Unlike other approaches, such as computational fluid dynamics simulations, no mathematical formulations were used and no model assumptions were made. This paper extends that work by exploring alternative surrogate modelling and evolutionary techniques. The accuracy of various modelling algorithms used to estimate the fitness of evaluated individuals from the initial experiments is compared. The effect of temporally windowing surrogate model training samples is explored. A surrogate-assisted approach based on an enhanced local search is introduced; and alternative coevolution collaboration schemes are examined.

  16. The consideration of atmospheric stability within wind farm AEP calculations

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonas; Chang, Chi-Yao; Dörenkämper, Martin; Salimi, Milad; Teichmann, Tim; Stoevesandt, Bernhard

    2016-09-01

    The annual energy production of an existing wind farm, including thermal stratification, is calculated with two different methods and compared to the average of three years of SCADA data. The first method is based on steady state computational fluid dynamics simulations and the assumption of Reynolds similarity at hub height. The second method is a wake modelling calculation, in which a new stratification transformation model was imposed on the Jensen and Ainslie wake models. The inflow states for both approaches were obtained from one year of WRF simulation data for the site. Although all models underestimate the mean wind speed and wake effects, the results from the phenomenological wake transformation are compatible with the high-fidelity simulation results.
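    The Jensen model mentioned above is a simple top-hat wake model. A hedged sketch of its standard textbook form (the thrust coefficient and wake-decay constant below are illustrative defaults, not parameters from this study):

```python
import math

def jensen_deficit(x, D, Ct=0.8, k=0.05):
    """Fractional velocity deficit at downstream distance x behind a rotor of
    diameter D, for thrust coefficient Ct and wake-decay constant k.
    The wake radius grows linearly; the deficit decays quadratically."""
    a = 1.0 - math.sqrt(1.0 - Ct)          # initial deficit at the rotor plane
    return a / (1.0 + 2.0 * k * x / D) ** 2

# Deficit at 2, 5, and 10 rotor diameters downstream of a 100 m rotor.
deficits = [jensen_deficit(x, D=100.0) for x in (200.0, 500.0, 1000.0)]
```

    Wake-farm AEP codes evaluate expressions like this per turbine pair, then superpose deficits and integrate over the wind rose.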

  17. Numerical Simulation of the Interaction of a Vortex with Stationary Airfoil in Transonic Flow,

    DTIC Science & Technology

    1984-01-12

    [Garbled two-column OCR; only fragments are recoverable. The fragments cite AIAA Journal papers on implicit finite-difference computations of unsteady transonic flow (Goorjian, P. M., AIAA Journal, Vol. 15, No. 4, 1977, pp. 581-590), on finite-difference simulations of three-dimensional flow (AIAA Journal, Vol. 18, No. 2), and on simulation of wing-vortex interaction in transonic flow. The text notes that simplifying assumptions are made in modeling the nonlinear vortex wake structure and that the numerical algorithms are based on the Euler equations, with ρ denoting density and ρ∞ the free-stream density.]

  18. Parametric Study of Radiative Cooling of Solid Antihydrogen

    DTIC Science & Technology

    1989-03-01

    A computer model of a cryogenic system for storing solid antimatter is used to explore the radiative cooling-power requirements for long-term antimatter storage. If vacuum-chamber pressures as low as 1 torr can be reached, and the rest of the large set of assumptions is valid, milligram quantities of solid antimatter could be stored indefinitely at 1.5 K using cooling powers of less than a

  19. Modeling Advance Life Support Systems

    NASA Technical Reports Server (NTRS)

    Pitts, Marvin; Sager, John; Loader, Coleen; Drysdale, Alan

    1996-01-01

    Activities this summer consisted of two projects that involved computer simulation of bioregenerative life support systems for space habitats. Students in the Space Life Science Training Program (SLSTP) used the simulation, Space Station, to learn about relationships between humans, fish, plants, and microorganisms in a closed environment. One student completed a six-week project to modify the simulation by converting the microbes from anaerobic to aerobic and then balancing the simulation's life support system. A detailed computer simulation of a closed lunar station using bioregenerative life support was attempted, but not enough was known about system constraints and constants in plant growth, bioreactor design for space habitats, and food preparation to develop an integrated model with any confidence. Instead of a complete detailed model resting on broad assumptions about the unknown system parameters, a framework for an integrated model was outlined and work was begun on plant and bioreactor simulations. The NASA sponsors and the summer Fellow were satisfied with the progress made during the 10 weeks, and we have planned future cooperative work.

  20. Haemodynamics of giant cerebral aneurysm: A comparison between the rigid-wall, one-way and two-way FSI models

    NASA Astrophysics Data System (ADS)

    Khe, A. K.; Cherevko, A. A.; Chupakhin, A. P.; Bobkova, M. S.; Krivoshapkin, A. L.; Orlov, K. Yu

    2016-06-01

    In this paper a computer simulation of blood flow in cerebral vessels with a giant saccular aneurysm at the bifurcation of the basilar artery is performed. The modelling is based on patient-specific clinical data (both flow domain geometry and boundary conditions for the inlets and outlets). The hydrodynamic and mechanical parameters are calculated in the frameworks of three models: the rigid-wall assumption, the one-way FSI approach, and the full (two-way) hydroelastic model. A comparison of the numerical solutions shows that mutual fluid-solid interaction can result in qualitative changes in the structure of the fluid flow. Other characteristics of the flow (pressure, stress, strain and displacement) qualitatively agree with each other across the different approaches. However, the quantitative comparison shows that accounting for the flow-vessel interaction, in general, decreases the absolute values of these parameters. Solving the hydroelasticity problem gives a more detailed solution at the cost of greatly increased computational time.

  1. Comparative analysis of economic models in selected solar energy computer programs

    NASA Astrophysics Data System (ADS)

    Powell, J. W.; Barnes, K. A.

    1982-01-01

    The economic evaluation models in five computer programs widely used for analyzing solar energy systems (F-CHART 3.0, F-CHART 4.0, SOLCOST, BLAST, and DOE-2) are compared. Differences in analysis techniques and assumptions among the programs are assessed from the point of view of consistency with the Federal requirements for life cycle costing (10 CFR Part 436), effect on predicted economic performance and optimal system size, ease of use, and general applicability to diverse system types and building types. The FEDSOL program, developed by the National Bureau of Standards specifically to meet the Federal life cycle cost requirements, serves as a basis for the comparison. Results of the study are illustrated in test cases of two different types of Federally owned buildings: a single family residence and a low rise office building.
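    At the core of every life-cycle-cost model compared here is a discounted cash-flow sum. A bare-bones sketch of that calculation (all capital costs, energy prices, horizon, and discount rate below are hypothetical numbers, not figures from the compared programs):

```python
def life_cycle_cost(capital, annual_energy_cost, years, discount_rate):
    """First cost plus the present value of annual energy costs over the
    analysis horizon, discounted at a constant real rate."""
    pv_energy = sum(annual_energy_cost / (1.0 + discount_rate) ** t
                    for t in range(1, years + 1))
    return capital + pv_energy

# Hypothetical comparison: high-capital/low-energy solar system vs.
# low-capital/high-energy conventional system, 20-year horizon at 7%.
solar = life_cycle_cost(capital=12000.0, annual_energy_cost=300.0,
                        years=20, discount_rate=0.07)
conventional = life_cycle_cost(capital=2000.0, annual_energy_cost=1400.0,
                               years=20, discount_rate=0.07)
```

    Where the programs differ is in the assumptions fed into this sum (fuel escalation, tax treatment, system lifetime), which is exactly what the comparison in the abstract evaluates.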

  2. Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Klimczak, Marek; Cecot, Witold

    2018-01-01

    We present an enhancement of the multiscale finite element method (MsFEM) obtained by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with rapidly oscillating elasticity coefficients. No assumption on the periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used, and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases the efficiency of computation. In this paper, details of the modified MsFEM are presented, and a numerical test on a Fichera corner domain validates the proposed approach.

  3. Analysis and assessment of STES technologies

    NASA Astrophysics Data System (ADS)

    Brown, D. R.; Blahnik, D. E.; Huber, H. D.

    1982-12-01

    Technical and economic assessments completed in FY 1982 in support of the Seasonal Thermal Energy Storage (STES) segment of the Underground Energy Storage Program included: (1) a detailed economic investigation of the cost of heat storage in aquifers, (2) documentation for AQUASTOR, a computer model for analyzing aquifer thermal energy storage (ATES) coupled with district heating or cooling, and (3) a technical and economic evaluation of several ice storage concepts. This paper summarizes the research efforts and main results of each of these three activities. In addition, a detailed economic investigation of the cost of chill storage in aquifers is currently in progress. The work parallels that done for ATES heat storage with technical and economic assumptions being varied in a parametric analysis of the cost of ATES delivered chill. The computer model AQUASTOR is the principal analytical tool being employed.

  4. The Cost of CAI: A Matter of Assumptions.

    ERIC Educational Resources Information Center

    Kearsley, Greg P.

    Cost estimates for Computer Assisted Instruction (CAI) depend crucially upon the particular assumptions made about the components of the system to be included in the costs, the expected lifetime of the system and courseware, and the anticipated student utilization of the system/courseware. The cost estimates of three currently operational systems…

  5. Who to Blame: Irrational Decision-Makers or Stupid Modelers? (Arne Richter Award for Outstanding Young Scientists Lecture)

    NASA Astrophysics Data System (ADS)

    Madani, Kaveh

    2016-04-01

    Water management benefits from a suite of modelling tools and techniques that help simplify and understand the complexities involved in managing water resource systems. Early water management models were mainly concerned with optimizing a single objective related to the design, operations or management of water resource systems (e.g. economic cost, hydroelectricity production, reliability of water deliveries). Significant improvements in methodologies, computational capacity, and data availability over the last decades have resulted in more complex water management models that can now incorporate multiple objectives, various uncertainties, and big data. These models provide an improved understanding of complex water resource systems and opportunities for making positive impacts. Nevertheless, there remains an alarming mismatch between the optimal solutions developed by these models and the decisions made by managers and stakeholders of water resource systems. Modelers continue to treat decision makers as irrational agents who fail to implement the optimal solutions developed by sophisticated and mathematically rigorous water management models. On the other hand, decision makers and stakeholders accuse modelers of being idealists, lacking a perfect understanding of reality, and developing 'smart' solutions that are not practical (stable). In this talk I take a closer look at the mismatch between the optimality and stability of solutions and argue that conventional water resources management models suffer inherently from a full-cooperation assumption. According to this assumption, water resources management decisions are based on group rationality, whereas in practice decisions are often based on individual rationality, making the group's optimal solution unstable for individually rational decision makers.
I discuss how game theory can be used as an appropriate framework for addressing the irrational "rationality assumption" of water resources management models and for better capturing the social aspects of decision making in water management systems with multiple stakeholders.
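    The optimality-vs-stability gap described above can be shown in the smallest possible game. In this sketch (a symmetric two-player "water use" game with entirely hypothetical payoffs), the group-optimal outcome is not a Nash equilibrium, so individually rational players will not sustain it:

```python
# payoff[r][c] = (payoff to player 1, payoff to player 2)
# strategies: 0 = conserve (cooperate), 1 = overuse (defect)
payoff = [
    [(3, 3), (0, 4)],
    [(4, 0), (1, 1)],
]

def is_nash(r, c):
    """True if neither player gains by unilaterally deviating from (r, c)."""
    best_row = all(payoff[r][c][0] >= payoff[rr][c][0] for rr in (0, 1))
    best_col = all(payoff[r][c][1] >= payoff[r][cc][1] for cc in (0, 1))
    return best_row and best_col

nash = [(r, c) for r in (0, 1) for c in (0, 1) if is_nash(r, c)]

# Group rationality: the outcome maximizing total payoff.
group_opt = max(((r, c) for r in (0, 1) for c in (0, 1)),
                key=lambda s: sum(payoff[s[0]][s[1]]))
```

    Both players conserving maximizes the total payoff, yet each has an individual incentive to overuse, so the only stable (Nash) outcome is mutual overuse; this is the instability of group-optimal solutions that the talk attributes to the full-cooperation assumption.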

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, F.; Zimmerman, B.; Heard, F.

    A number of N Reactor core heatup studies have been performed using the TRUMP-BD computer code. These studies were performed to address questions concerning the dependency of results on potential variations in the material properties and/or modeling assumptions. This report describes and documents a series of 31 TRUMP-BD runs that were performed to determine the sensitivity of calculated inner-fuel temperatures to a variety of TRUMP input parameters and also to a change in the node density in a high-temperature-gradient region. The results of this study are based on the 32-in. model. 18 refs., 17 figs., 2 tabs.

  7. Alive and Kicking: Making the Case for Mainframe Education

    ERIC Educational Resources Information Center

    Murphy, Marianne C.; Sharma, Aditya; Seay, Cameron; McClelland, Marilyn K.

    2010-01-01

    As universities continually update and assess their curricula, mainframe computing is quite often overlooked, as it is commonly thought of as a "legacy computer." Mainframe computing appears to be either uninteresting or regarded as technology past its prime. However, both assumptions are leading to a shortage of IS professionals in the…

  8. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    PubMed

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also the applied filtering process itself can be a seed of this numerical instability. An investigation of the relationship between turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.

  9. A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    1999-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and the key assumptions made in deriving them are then confirmed by computing the terms using the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.

  10. Free molecular collision cross section calculation methods for nanoparticles and complex ions with energy accommodation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larriba, Carlos, E-mail: clarriba@umn.edu; Hogan, Christopher J.

    2013-10-15

    The structures of nanoparticles, macromolecules, and molecular clusters in gas phase environments are often studied via measurement of collision cross sections. To directly compare structure models to measurements, it is hence necessary to have computational techniques available to calculate the collision cross sections of structural models under conditions matching measurements. However, presently available collision cross section methods contain the underlying assumption that collisions between gas molecules and structures are completely elastic (gas molecule translational energy conserving) and specular, while experimental evidence suggests that in the most commonly used background gases for measurements, air and molecular nitrogen, gas molecule reemission is largely inelastic (with exchange of energy between vibrational, rotational, and translational modes) and should be treated as diffuse in computations with fixed structural models. In this work, we describe computational techniques to predict the free molecular collision cross sections for fixed structural models of gas phase entities where inelastic and non-specular gas molecule reemission rules can be invoked, and the long range ion-induced dipole (polarization) potential between gas molecules and a charged entity can be considered. Specifically, two calculation procedures are described in detail: a diffuse hard sphere scattering (DHSS) method, in which structures are modeled as hard spheres and collision cross sections are calculated for rectilinear trajectories of gas molecules, and a diffuse trajectory method (DTM), in which the assumption of rectilinear trajectories is relaxed and the ion-induced dipole potential is considered. Collision cross section calculations using the DHSS and DTM methods are performed on spheres, models of quasifractal aggregates of varying fractal dimension, and fullerene-like structures.
    Techniques to accelerate DTM calculations by assessing the contribution of grazing gas molecule collisions (gas molecules with trajectories altered by the potential interaction) without tracking grazing trajectories are further discussed. The presented calculation techniques should enable more accurate collision cross section predictions under experimentally relevant conditions than pre-existing approaches, and should enhance the ability of collision cross section measurement schemes to discern the structures of gas phase entities.
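    The DHSS and DTM methods generalize the simplest limit of this kind of calculation: the hard-sphere, rectilinear-trajectory (projection) cross section. This sketch is not the paper's algorithm, only a Monte Carlo estimate of that limiting case for a structure built from spheres:

```python
import random

def projected_area_mc(spheres, n=20000, seed=1):
    """Monte Carlo projected area of a union of spheres onto the x-y plane
    (projection direction along z, so z coordinates are irrelevant here).
    spheres: list of (x, y, z, r) tuples."""
    rng = random.Random(seed)
    xmin = min(x - r for x, y, z, r in spheres)
    xmax = max(x + r for x, y, z, r in spheres)
    ymin = min(y - r for x, y, z, r in spheres)
    ymax = max(y + r for x, y, z, r in spheres)
    box = (xmax - xmin) * (ymax - ymin)
    hits = 0
    for _ in range(n):
        px = rng.uniform(xmin, xmax)
        py = rng.uniform(ymin, ymax)
        # A sample point "hits" if it falls inside any projected disc.
        if any((px - x) ** 2 + (py - y) ** 2 <= r * r for x, y, z, r in spheres):
            hits += 1
    return box * hits / n

# Sanity check: a single unit sphere projects to area pi.
area = projected_area_mc([(0.0, 0.0, 0.0, 1.0)])
```

    The paper's methods replace the specular/elastic assumption implicit here with diffuse, inelastic reemission rules, and the DTM additionally bends trajectories with the ion-induced dipole potential.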

  11. SIApopr: a computational method to simulate evolutionary branching trees for analysis of tumor clonal evolution.

    PubMed

    McDonald, Thomas O; Michor, Franziska

    2017-07-15

    SIApopr (Simulating Infinite-Allele populations) is an R package to simulate time-homogeneous and inhomogeneous stochastic branching processes under a very flexible set of assumptions using the speed of C++. The software simulates clonal evolution with the emergence of driver and passenger mutations under the infinite-allele assumption. The software is an application of the Gillespie Stochastic Simulation Algorithm expanded to a large number of cell types and scenarios, with the intention of allowing users to easily modify existing models or create their own. SIApopr is available as an R library on Github ( https://github.com/olliemcdonald/siapopr ). Supplementary data are available at Bioinformatics online. Contact: michor@jimmy.harvard.edu. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
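    A hedged sketch of the kind of simulation SIApopr performs (the real package is an R/C++ library; all names and rates here are illustrative): a Gillespie birth-death process in which every mutant daughter cell founds a brand-new clone, i.e. the infinite-allele assumption.

```python
import random

def gillespie_infinite_allele(birth=1.1, death=1.0, mu=0.05,
                              t_max=3.0, seed=7):
    """Simulate clone sizes under a birth-death branching process where each
    mutation at birth (probability mu) creates a never-before-seen clone."""
    rng = random.Random(seed)
    clones = {0: 10}            # clone id -> cell count; start with 10 ancestors
    next_id, t = 1, 0.0
    while t < t_max and sum(clones.values()) > 0:
        n = sum(clones.values())
        t += rng.expovariate(n * (birth + death))   # time to next event
        pick = rng.randrange(n)                     # choose a cell uniformly
        for cid, count in clones.items():
            if pick < count:
                break
            pick -= count
        if rng.random() < birth / (birth + death):  # birth event
            if rng.random() < mu:                   # mutant daughter: new clone
                clones[next_id] = 1
                next_id += 1
            else:
                clones[cid] += 1
        else:                                       # death event
            clones[cid] -= 1
            if clones[cid] == 0:
                del clones[cid]
    return clones

clones = gillespie_infinite_allele()
```

    The package scales this idea to many cell types, time-inhomogeneous rates, and fitness-altering driver mutations; this sketch only shows the core event loop.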

  12. Explaining evolution via constrained persistent perfect phylogeny

    PubMed Central

    2014-01-01

    Background The perfect phylogeny is an often used model in phylogenetics, since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, e.g., back mutations. A special case of back mutation that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is, a character is allowed to return to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters.
Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to explain efficiently data that do not conform with the classical perfect phylogeny model. PMID:25572381

  13. Retrieve polarization aberration from image degradation: a new measurement method in DUV lithography

    NASA Astrophysics Data System (ADS)

    Xiang, Zhongbo; Li, Yanqiu

    2017-10-01

    Detailed knowledge of the polarization aberration (PA) of the projection lens in higher-NA DUV lithographic imaging is necessary because of its impact on imaging degradation, and precise measurement of PA is conducive to computational lithography techniques such as RET and OPC. Current in situ measurement methods of PA through the detection of degradations of aerial images need a linear approximation and apply the assumption of a 3-beam/2-beam interference condition. The former approximation neglects the coupling effect of the PA coefficients, which can significantly influence the accuracy of PA retrieval. The latter assumption restricts the feasible pitch of test masks in higher-NA systems, conflicts with the Kirchhoff diffraction model of the test mask used in the retrieving model, and introduces the 3D mask effect as a source of retrieval error. In this paper, a new in situ measurement method of PA is proposed. It establishes an analytical quadratic relation between the PA coefficients and the degradations of aerial images of one-dimensional dense lines under coherent illumination through vector aerial imaging, which does not rely on the assumption of 3-beam/2-beam interference or on linear approximation. In this case, the retrieval of PA from image degradation can be converted from a nonlinear system of m quadratic equations to a multi-objective quadratic optimization problem, and finally be solved by the nonlinear least-squares method. Some preliminary simulation results are given to demonstrate the correctness and accuracy of the new PA retrieving model.

  14. Hypersonic Combustor Model Inlet CFD Simulations and Experimental Comparisons

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; TokarcikPolsky, S.; Deiwert, G. S.; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    Numerous two- and three-dimensional computational simulations were performed for the inlet associated with the combustor model for the hypersonic propulsion experiment in the NASA Ames 16-Inch Shock Tunnel. The inlet was designed to produce a combustor-inlet flow that is nearly two-dimensional and of sufficient mass flow rate for large-scale combustor testing. The three-dimensional simulations demonstrated that the inlet design met all the design objectives and that the inlet produced a very nearly two-dimensional combustor inflow profile. Numerous two-dimensional simulations were performed with various levels of approximation, such as in the choice of chemical and physical models, as well as numerical approximations. Parametric studies were conducted to better understand and to characterize the inlet flow. Results from the two- and three-dimensional simulations were used to predict the mass flux entering the combustor, and a mass flux correlation as a function of facility stagnation pressure was developed. Surface heat flux and pressure measurements were compared with the computed results and good agreement was found. The computational simulations helped determine the inlet flow characteristics in the high-enthalpy environment, the important parameters that affect the combustor-inlet flow, and the sensitivity of the inlet flow to various modeling assumptions.

  15. A hybrid finite element-transfer matrix model for vibroacoustic systems with flat and homogeneous acoustic treatments.

    PubMed

    Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck

    2015-02-01

    Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such a methodology is computationally expensive when real-life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent in the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.

  16. BRICK v0.2, a simple, accessible, and transparent model framework for climate and regional sea-level projections

    NASA Astrophysics Data System (ADS)

    Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus

    2017-07-01

    Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regard to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.

  17. Assessing the skill of hydrology models at simulating the water cycle in the HJ Andrews LTER: Assumptions, strengths and weaknesses

    EPA Science Inventory

    Simulated impacts of climate on hydrology can vary greatly as a function of the scale of the input data, model assumptions, and model structure. Four models are commonly used to simulate streamflow in...

  18. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established D-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
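
The greedy selection idea above can be sketched with a toy linear observation model: repeatedly add the candidate observation that most increases the log-determinant of the Fisher information matrix (the D-optimality criterion). This is a generic illustration under assumed Gaussian errors, not the authors' implementation; the matrix sizes and ridge term are placeholders.

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedily pick k rows of the candidate sensitivity matrix X that
    maximize log det(X_S^T X_S), the D-optimality criterion."""
    n, p = X.shape
    chosen = []
    info = ridge * np.eye(p)              # regularized information matrix
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            trial = info + np.outer(X[i], X[i])   # rank-1 information update
            gain = np.linalg.slogdet(trial)[1]    # log-determinant of trial design
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        info += np.outer(X[best], X[best])
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))     # 50 candidate observations, 3 parameters
design = greedy_d_optimal(X, 5)  # indices of the 5 most informative candidates
```

The greedy heuristic is not guaranteed to be globally optimal, but the log-determinant objective is submodular-like in practice, which is why greedy selection tends to work well at a fraction of the combinatorial cost.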

  19. Real-Gas Flow Properties for NASA Langley Research Center Aerothermodynamic Facilities Complex Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.

    1996-01-01

    A computational algorithm has been developed which can be employed to determine the flow properties of an arbitrary real (virial) gas in a wind tunnel. A multiple-coefficient virial gas equation of state and the assumption of isentropic flow are used to model the gas and to compute flow properties throughout the wind tunnel. This algorithm has been used to calculate flow properties for the wind tunnels of the Aerothermodynamics Facilities Complex at the NASA Langley Research Center, in which air, CF4, He, and N2 are employed as test gases. The algorithm is detailed in this paper and sample results are presented for each of the Aerothermodynamic Facilities Complex wind tunnels.
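
The virial equation of state underlying such an algorithm can be sketched in a few lines. The truncated density-series form is p = rho * R * T * (1 + B2*rho + B3*rho**2 + ...); the coefficient values below are placeholders for illustration, not the facility's calibrated coefficients.

```python
def virial_pressure(rho, T, R=287.05, B=None):
    """Pressure from a truncated virial equation of state:
        p = rho * R * T * (1 + B2*rho + B3*rho**2 + ...)

    rho: density (kg/m^3), T: temperature (K),
    R: specific gas constant (defaults to air, J/(kg K)),
    B: list of virial coefficients [B2, B3, ...] in the density series.
    """
    B = B or []
    correction = 1.0
    for k, Bk in enumerate(B, start=1):
        correction += Bk * rho**k
    return rho * R * T * correction

p_ideal = virial_pressure(1.2, 300.0)             # no coefficients: ideal gas law
p_real = virial_pressure(1.2, 300.0, B=[1e-4])    # small illustrative real-gas correction
```

With an empty coefficient list the expression collapses to the ideal gas law, which is a useful sanity check when implementing the full multiple-coefficient model.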

  20. Integration of Geophysical Data into Structural Geological Modelling through Bayesian Networks

    NASA Astrophysics Data System (ADS)

    de la Varga, Miguel; Wellmann, Florian; Murdie, Ruth

    2016-04-01

    Structural geological models are widely used to represent the spatial distribution of relevant geological features. Several techniques exist to construct these models on the basis of different assumptions and different types of geological observations (e.g. Jessell et al., 2014). However, two problems are prevalent when constructing models: (i) observations and assumptions, and therefore also the constructed model, are subject to uncertainties, and (ii) additional information, such as geophysical data, is often available, but cannot be considered directly in the geological modelling step. In our work, we propose the integration of all available data into a Bayesian network, including the generation of the implicit geological model by means of interpolation functions (Mallet, 1992; Lajaunie et al., 1997; Mallet, 2004; Carr et al., 2001; Hillier et al., 2014). As a result, we are able to increase the certainty of the resultant models as well as potentially learn features of our regional geology through data mining and information theory techniques. MCMC methods are used in order to optimize computational time and assure the validity of the results. Here, we apply the aforementioned concepts in a 3-D model of the Sandstone Greenstone Belt in the Archean Yilgarn Craton in Western Australia. The example given defines the uncertainty in the thickness of greenstone as limited by the Bouguer anomaly and the internal structure of the greenstone as limited by the magnetic signature of a banded iron formation. The incorporation of the additional data, and especially the gravity data, provides an important reduction of the possible outcomes and therefore of the overall uncertainty. References Carr, J. C., R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans, 2001, Reconstruction and representation of 3D objects with radial basis functions: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 67-76.
Jessell, M., Aillères, L., de Kemp, E., Lindsay, M., Wellmann, F., Hillier, M., ... & Martin, R., 2014, Next Generation Three-Dimensional Geologic Modeling and Inversion. Lajaunie, C., G. Courrioux, and L. Manuel, 1997, Foliation fields and 3D cartography in geology: Principles of a method based on potential interpolation: Mathematical Geology, 29, 571-584. Mallet, J.-L., 1992, Discrete smooth interpolation in geometric modelling: Computer-Aided Design, 24, 178-191. Mallet, J.-L., 2004, Space-time mathematical framework for sedimentary geology: Mathematical Geology, 36, 1-32.

  1. Examination of various turbulence models for application in liquid rocket thrust chambers

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1991-01-01

    There is a large variety of turbulence models available. These models include direct numerical simulation, large eddy simulation, the Reynolds stress/flux model, the zero-equation model, the one-equation model, the two-equation k-epsilon model, the multiple-scale model, etc. Each turbulence model contains different physical assumptions and requirements. The nature of turbulence involves randomness, irregularity, diffusivity, and dissipation. The capabilities of the turbulence models, including physical strengths, weaknesses, and limitations, as well as numerical and computational considerations, are reviewed. Recommendations are made for the potential application of a turbulence model in thrust chamber and performance prediction programs. The full Reynolds stress model is recommended. In a workshop specifically called for the assessment of turbulence models for applications in liquid rocket thrust chambers, most of the experts present were also in favor of the Reynolds stress model.

  2. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  3. Wind laws for shockless initialization [numerical forecasting model]

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Shkoller, B.

    1976-01-01

    A system of diagnostic equations for the velocity field, or wind laws, was derived for each of a number of models of large-scale atmospheric flow. The derivation in each case is mathematically exact and does not involve any physical assumptions not already present in the prognostic equations, such as nondivergence or vanishing of derivatives of the divergence. Therefore, initial states computed by solving these diagnostic equations should be compatible with the type of motion described by the prognostic equations of the model and should not generate initialization shocks when inserted into the model. Numerical solutions of the diagnostic system corresponding to a barotropic model are exhibited. Some problems concerning the possibility of implementing such a system in operational numerical weather prediction are discussed.

  4. Computational compliance criteria in water hammer modelling

    NASA Astrophysics Data System (ADS)

    Urbanowicz, Kamil

    2017-10-01

    Among the many numerical methods (finite difference, finite element, finite volume, etc.) used to solve the system of partial differential equations describing unsteady pipe flow, the method of characteristics (MOC) is the most appreciated. With its help, it is possible to examine the effect of the numerical discretisation carried out over the pipe length. It was noticed, based on the tests performed in this study, that convergence of the calculation results occurred on a rectangular grid with the division of each pipe of the analysed system into at least 10 elements. Therefore, it is advisable to introduce computational compliance criteria (CCC), which will be responsible for optimal discretisation of the examined system. The results of this study, based on the assumption of various values of the Courant-Friedrichs-Lewy (CFL) number, indicate also that the CFL number should be equal to one for optimum computational results. Application of the CCC criterion to own-written and commercial computer programmes based on the method of characteristics will guarantee fast simulations and the necessary computational coherence.
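
The two compliance conditions above (at least 10 reaches per pipe, CFL equal to one) are easy to express directly. The sketch below assumes the standard 1-D definition CFL = a * dt / dx with wave speed a; the numeric values are illustrative, not taken from the paper.

```python
def cfl_number(wave_speed, dt, dx):
    """Courant-Friedrichs-Lewy number for a 1-D method-of-characteristics grid."""
    return wave_speed * dt / dx

def moc_time_step(wave_speed, pipe_length, n_elements):
    """Time step that yields CFL == 1 on a rectangular MOC grid with the
    pipe divided into n_elements reaches (dx = L / n)."""
    dx = pipe_length / n_elements
    return dx / wave_speed

# Illustrative case: a 100 m pipe, pressure-wave speed 1000 m/s, and the
# minimum recommended 10 elements -> dx = 10 m, dt = 0.01 s, CFL = 1.
dt = moc_time_step(1000.0, 100.0, 10)
```

Choosing dt this way ties the time step to the spatial discretisation, which is exactly why a compliance criterion on the grid also fixes the temporal resolution of the MOC solution.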

  5. Development of a Twin-Spool Turbofan Engine Simulation Using the Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS)

    NASA Technical Reports Server (NTRS)

    Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2014-01-01

    The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model was accurate within 3% of the C-MAPSS model, with inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.

  6. Do Responses to Different Anthropogenic Forcings Add Linearly in Climate Models?

    NASA Technical Reports Server (NTRS)

    Marvel, Kate; Schmidt, Gavin A.; Shindell, Drew; Bonfils, Celine; LeGrande, Allegra N.; Nazarenko, Larissa; Tsigaridis, Kostas

    2015-01-01

    Many detection and attribution and pattern scaling studies assume that the global climate response to multiple forcings is additive: that the response over the historical period is statistically indistinguishable from the sum of the responses to individual forcings. Here, we use the NASA Goddard Institute for Space Studies (GISS) and National Center for Atmospheric Research Community Climate System Model (CCSM) simulations from the CMIP5 archive to test this assumption for multi-year trends in global-average, annual-average temperature and precipitation at multiple timescales. We find that responses in models forced by pre-computed aerosol and ozone concentrations are generally additive across forcings; however, we demonstrate that there are significant nonlinearities in precipitation responses to different forcings in a configuration of the GISS model that interactively computes these concentrations from precursor emissions. We attribute these to differences in ozone forcing arising from interactions between forcing agents. Our results suggest that attribution to specific forcings may be complicated in a model with fully interactive chemistry and may provide motivation for other modeling groups to conduct further single-forcing experiments.

  7. Do responses to different anthropogenic forcings add linearly in climate models?

    DOE PAGES

    Marvel, Kate; Schmidt, Gavin A.; Shindell, Drew; ...

    2015-10-14

    Many detection and attribution and pattern scaling studies assume that the global climate response to multiple forcings is additive: that the response over the historical period is statistically indistinguishable from the sum of the responses to individual forcings. Here, we use the NASA Goddard Institute for Space Studies (GISS) and National Center for Atmospheric Research Community Climate System Model (CCSM4) simulations from the CMIP5 archive to test this assumption for multi-year trends in global-average, annual-average temperature and precipitation at multiple timescales. We find that responses in models forced by pre-computed aerosol and ozone concentrations are generally additive across forcings. However, we demonstrate that there are significant nonlinearities in precipitation responses to different forcings in a configuration of the GISS model that interactively computes these concentrations from precursor emissions. We attribute these to differences in ozone forcing arising from interactions between forcing agents. Lastly, our results suggest that attribution to specific forcings may be complicated in a model with fully interactive chemistry and may provide motivation for other modeling groups to conduct further single-forcing experiments.

  8. Simplifying the Reuse and Interoperability of Geoscience Data Sets and Models with Semantic Metadata that is Human-Readable and Machine-actionable

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2017-12-01

    Standardized, deep descriptions of digital resources (e.g. data sets, computational models, software tools and publications) make it possible to develop user-friendly software systems that assist scientists with the discovery and appropriate use of these resources. Semantic metadata makes it possible for machines to take actions on behalf of humans, such as automatically identifying the resources needed to solve a given problem, retrieving them and then automatically connecting them (despite their heterogeneity) into a functioning workflow. Standardized model metadata also helps model users to understand the important details that underpin computational models and to compare the capabilities of different models. These details include simplifying assumptions on the physics, governing equations and the numerical methods used to solve them, discretization of space (the grid) and time (the time-stepping scheme), state variables (input or output), and model configuration parameters. This kind of metadata provides a "deep description" of a computational model that goes well beyond other types of metadata (e.g. author, purpose, scientific domain, programming language, digital rights, provenance, execution) and captures the science that underpins a model. A carefully constructed, unambiguous, rules-based schema that addresses this problem, called the Geoscience Standard Names ontology, will be presented; it utilizes Semantic Web best practices and technologies and has been designed to work across science domains and to be readable by both humans and machines.

  9. Assessing Airflow Sensitivity to Healthy and Diseased Lung Conditions in a Computational Fluid Dynamics Model Validated In Vitro.

    PubMed

    Sul, Bora; Oppito, Zachary; Jayasekera, Shehan; Vanger, Brian; Zeller, Amy; Morris, Michael; Ruppert, Kai; Altes, Talissa; Rakesh, Vineet; Day, Steven; Robinson, Risa; Reifman, Jaques; Wallqvist, Anders

    2018-05-01

    Computational models are useful for understanding respiratory physiology. Crucial to such models are the boundary conditions specifying the flow conditions at truncated airway branches (terminal flow rates). However, most studies make assumptions about these values, which are difficult to obtain in vivo. We developed a computational fluid dynamics (CFD) model of airflows for steady expiration to investigate how terminal flows affect airflow patterns in respiratory airways. First, we measured in vitro airflow patterns in a physical airway model, using particle image velocimetry (PIV). The measured and computed airflow patterns agreed well, validating our CFD model. Next, we used the lobar flow fractions from a healthy or chronic obstructive pulmonary disease (COPD) subject as constraints to derive different terminal flow rates (i.e., three healthy and one COPD) and computed the corresponding airflow patterns in the same geometry. To assess airflow sensitivity to the boundary conditions, we used the correlation coefficient of the shape similarity (R) and the root-mean-square of the velocity magnitude difference (Drms) between two velocity contours. Airflow patterns in the central airways were similar across healthy conditions (minimum R, 0.80) despite variations in terminal flow rates but markedly different for COPD (minimum R, 0.26; maximum Drms, ten times that of healthy cases). In contrast, those in the upper airway were similar for all cases. Our findings quantify how variability in terminal and lobar flows contributes to airflow patterns in respiratory airways. They highlight the importance of using lobar flow fractions to examine physiologically relevant airflow characteristics.
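
The two sensitivity metrics described above can be computed directly from velocity-magnitude fields. The sketch below interprets R as a Pearson correlation between flattened fields and Drms as the root-mean-square of their pointwise difference; the toy 2 x 2 fields are illustrative, not the study's data.

```python
import numpy as np

def shape_similarity(v1, v2):
    """Pearson correlation coefficient R between two velocity-magnitude
    fields, used here as a shape-similarity score between contours."""
    a, b = np.ravel(v1), np.ravel(v2)
    return float(np.corrcoef(a, b)[0, 1])

def velocity_drms(v1, v2):
    """Root-mean-square of the pointwise velocity-magnitude difference."""
    return float(np.sqrt(np.mean((np.ravel(v1) - np.ravel(v2)) ** 2)))

v_a = np.array([[1.0, 2.0], [3.0, 4.0]])
v_b = v_a * 1.1               # same flow pattern, uniformly 10% faster
r = shape_similarity(v_a, v_b)
d = velocity_drms(v_a, v_b)
```

A uniform rescaling leaves R at 1 (identical shape) while Drms grows with the magnitude difference, which is why the two metrics together separate pattern changes from speed changes.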

  10. A new computational growth model for sea urchin skeletons.

    PubMed

    Zachos, Louis G

    2009-08-07

    A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
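
The Voronoi polygonalization criterion above can be illustrated with a discrete stand-in: assign every point of a sampling grid to its nearest plate growth center, which traces out the Voronoi regions that define plate outlines. This is a generic 2-D sketch, not the paper's spherical Delaunay-dual implementation; the center coordinates are arbitrary.

```python
import numpy as np

def voronoi_labels(centers, grid_x, grid_y):
    """Assign each grid point to its nearest plate growth center, i.e.
    discretize the Voronoi regions whose boundaries are the plate sutures."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    pts = np.stack([xx.ravel(), yy.ravel()], axis=1)
    # squared distance from every grid point to every growth center
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(xx.shape)

centers = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])  # three plate centers
labels = voronoi_labels(centers, np.linspace(0, 1, 50), np.linspace(0, 1, 50))
plate_ids = set(labels.ravel().tolist())   # every plate claims some grid points
```

In the full model the same nearest-center partition lives on a sphere, and plate growth corresponds to moving and adding centers before re-deriving the Voronoi polygons.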

  11. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic.
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
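
The basic Gaussian-process machinery behind such analyses fits in a few lines: a covariance (kernel) function plus a linear solve gives the posterior mean at new locations. The sketch below uses a squared-exponential kernel with illustrative hyperparameters; it is a minimal textbook construction, not the covariance classes developed in this work.

```python
import numpy as np

def sq_exp_kernel(x1, x2, sigma=1.0, ell=0.2):
    """Squared-exponential covariance: sigma^2 * exp(-0.5 * (d/ell)^2)."""
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of a zero-mean GP conditioned on training data.

    The small noise term doubles as numerical jitter for the solve.
    """
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = sq_exp_kernel(x_test, x_train)
    return k_star @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)                       # smooth target function
mu = gp_posterior_mean(x, y, np.array([0.25]))  # interpolates near sin(pi/2)
```

The K matrix here is the object whose size and conditioning drive the computational challenges the abstract describes: for large spatial datasets, factorizing it naively costs O(n^3), motivating grid-exploiting and exact-arithmetic methods.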

  12. Stochastic modeling indicates that aging and somatic evolution in the hematopoietic system are driven by non-cell-autonomous processes.

    PubMed

    Rozhok, Andrii I; Salstrom, Jennifer L; DeGregori, James

    2014-12-01

    Age-dependent tissue decline and increased cancer incidence are widely accepted to be rate-limited by the accumulation of somatic mutations over time. Current models of carcinogenesis are dominated by the assumption that oncogenic mutations have defined advantageous fitness effects on recipient stem and progenitor cells, promoting and rate-limiting somatic evolution. However, this assumption is markedly discrepant with evolutionary theory, whereby fitness is a dynamic property of a phenotype imposed upon and widely modulated by environment. We computationally modeled dynamic microenvironment-dependent fitness alterations in hematopoietic stem cells (HSC) within the Sprengel-Liebig system known to govern evolution at the population level. Our model for the first time integrates real data on age-dependent dynamics of HSC division rates, pool size, and accumulation of genetic changes and demonstrates that somatic evolution is not rate-limited by the occurrence of mutations, but instead results from aged microenvironment-driven alterations in the selective/fitness value of previously accumulated genetic changes. Our results are also consistent with evolutionary models of aging and thus oppose both somatic mutation-centric paradigms of carcinogenesis and tissue functional decline. In total, we demonstrate that aging directly promotes HSC fitness decline and somatic evolution via non-cell-autonomous mechanisms.

  13. A comparison of linear interpolation models for iterative CT reconstruction.

    PubMed

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. 
Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce their magnitude. In many scenarios, Joseph's method seems to offer an interesting compromise between bias and computational cost. The distance-driven method offers the possibility to reduce bias further, again at increased computational cost. The bilinear method indicated that a key assumption shared by the other two methods is highly robust. Last, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. The authors also find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interact.
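The projectors compared above all reduce to some form of linear interpolation along the ray. As a rough illustration (not the authors' implementation), a Joseph-style ray sum for a 2-D image marches along the dominant axis and linearly interpolates along the other; the path-length scaling by the ray angle, boundary weighting, and the fan-beam geometry used in the paper are omitted here:

```python
import numpy as np

def joseph_ray_sum(img, row_of_x):
    """Sum image values along a ray with Joseph-style linear interpolation.

    img: 2-D array indexed as img[y, x].
    row_of_x: callable giving the fractional y-coordinate of the ray at each
    integer column x. The ray is assumed to be closer to horizontal than
    vertical, so we march along x and interpolate linearly in y.
    """
    ny, nx = img.shape
    total = 0.0
    for x in range(nx):
        y = row_of_x(x)
        y0 = int(np.floor(y))
        if 0 <= y0 and y0 + 1 < ny:
            w = y - y0  # linear interpolation weight between rows y0 and y0+1
            total += (1.0 - w) * img[y0, x] + w * img[y0 + 1, x]
    return total
```

The distance-driven method instead overlaps projected cell and detector boundaries, and the bilinear method interpolates in both directions; all three share the linear-interpolation assumption the paper probes.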

  14. Violation of the Sphericity Assumption and Its Effect on Type-I Error Rates in Repeated Measures ANOVA and Multi-Level Linear Models (MLM).

    PubMed

    Haverkamp, Nicolas; Beauducel, André

    2017-01-01

We investigated the effects of violations of the sphericity assumption on Type I error rates for different methodological approaches to repeated measures analysis using a simulation approach. In contrast to previous simulation studies on this topic, up to nine measurement occasions were considered, and effects of the level of inter-correlation between measurement occasions on Type I error rates were considered for the first time. Two populations with no violation of the sphericity assumption, one with uncorrelated measurement occasions and one with moderately correlated measurement occasions, were generated. One population with violation of the sphericity assumption combines uncorrelated with highly correlated measurement occasions. A second population with violation of the sphericity assumption combines moderately correlated and highly correlated measurement occasions. From these four populations, without any between-group or within-subject effect, 5,000 random samples were drawn. Finally, the mean Type I error rates were computed for multilevel linear models (MLM) with an unstructured covariance matrix (MLM-UN), MLM with compound symmetry (MLM-CS), and repeated measures analysis of variance (rANOVA) models (without correction, with Greenhouse-Geisser correction, and with Huynh-Feldt correction). To examine the effect of both the sample size and the number of measurement occasions, sample sizes of n = 20, 40, 60, 80, and 100 were considered, as well as m = 3, 6, and 9 measurement occasions. With respect to rANOVA, the results argue for the use of rANOVA with the Huynh-Feldt correction, especially when the sphericity assumption is violated, the sample size is rather small, and the number of measurement occasions is large. For MLM-UN, the results show a massive progressive bias for small sample sizes (n = 20) and m = 6 or more measurement occasions. This effect could not be found in previous simulation studies with a smaller number of measurement occasions. 
The proportionality of bias to the number of measurement occasions should be considered when MLM-UN is used. The good news is that this bias can be compensated for by large sample sizes. Accordingly, MLM-UN can be recommended even for small sample sizes with about three measurement occasions, and for large sample sizes with about nine measurement occasions.
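The Greenhouse-Geisser correction discussed above rescales the ANOVA degrees of freedom by a sphericity estimate computed from the sample covariance matrix of the repeated measurements. A minimal sketch of that standard estimate (not the authors' simulation code):

```python
import numpy as np

def greenhouse_geisser_epsilon(S):
    """Greenhouse-Geisser sphericity estimate from an m x m sample
    covariance matrix S of the m measurement occasions.

    Returns a value in [1/(m-1), 1]; 1 means sphericity holds, and the
    rANOVA degrees of freedom are multiplied by this factor.
    """
    m = S.shape[0]
    C = np.eye(m) - np.ones((m, m)) / m  # centering matrix
    D = C @ S @ C                        # double-centered covariance
    return np.trace(D) ** 2 / ((m - 1) * np.trace(D @ D))
```

Under compound symmetry (equal variances, equal covariances) the estimate equals 1, i.e., no correction; strongly heterogeneous covariance structures push it toward the lower bound 1/(m-1).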

  15. 76 FR 52353 - Assumption Buster Workshop: “Current Implementations of Cloud Computing Indicate a New Approach...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

    ... explored in this series is cloud computing. The workshop on this topic will be held in Gaithersburg, MD on October 21, 2011. Assertion: ``Current implementations of cloud computing indicate a new approach to security'' Implementations of cloud computing have provided new ways of thinking about how to secure data...

  16. Radiological performance assessment for the E-Area Vaults Disposal Facility. Appendices A through M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, J.R.

    1994-04-15

This document contains appendices A-M for the performance assessment. They are A: details of models and assumptions, B: computer codes, C: data tabulation, D: geochemical interactions, E: hydrogeology of the Savannah River Site, F: software QA plans, G: completeness review guide, H: performance assessment peer review panel recommendations, I: suspect soil performance analysis, J: sensitivity/uncertainty analysis, K: vault degradation study, L: description of naval reactor waste disposal, M: porflow input file. (GHH)

  17. A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter

    NASA Technical Reports Server (NTRS)

    Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe

    1996-01-01

    Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.

  18. Automatic item generation implemented for measuring artistic judgment aptitude.

    PubMed

    Bezruczko, Nikolaus

    2014-01-01

    Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.

  19. The possibility of coexistence and co-development in language competition: ecology-society computational model and simulation.

    PubMed

    Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie

    2016-01-01

    Language is characterized by both ecological and social properties, and competition is the basic form of language evolution. The rise and decline of a language is the result of competition between languages, and this rise and decline directly influences the diversity of human culture. Mathematical and computational modeling of language competition has been a popular topic in linguistics, mathematics, computer science, ecology, and other disciplines. Current research on language competition modeling faces several problems. First, comprehensive mathematical analysis is absent in most studies of language competition models. Second, most language competition models are based on the assumption that one language in the model is stronger than the other; these studies tend to ignore cases where there is a balance of power in the competition. The competition between two well-matched languages is more practical, because it can facilitate the co-development of both languages. Third, many studies arrive at an evolution result in which the weaker language inevitably goes extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and the basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made of the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
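The Lotka-Volterra competition core that the paper builds on can be sketched directly. When both interspecific competition coefficients are below one, the two "languages" settle at a coexistence equilibrium instead of one driving the other extinct (all parameters here are illustrative; the paper's ecology-society model adds further social and diffusion terms):

```python
def simulate_competition(x0, y0, r=(0.1, 0.1), K=(1.0, 1.0),
                         a12=0.5, a21=0.5, dt=0.01, steps=200000):
    """Forward-Euler integration of two-species Lotka-Volterra competition.

    x, y are speaker fractions of two competing languages; a12, a21 are
    the cross-competition coefficients. With a12 < 1 and a21 < 1 the
    system converges to a stable coexistence equilibrium.
    """
    x, y = x0, y0
    for _ in range(steps):
        dx = r[0] * x * (1.0 - (x + a12 * y) / K[0])
        dy = r[1] * y * (1.0 - (y + a21 * x) / K[1])
        x += dt * dx
        y += dt * dy
    return x, y
```

With K = 1 and symmetric coefficients a, the coexistence equilibrium is x* = y* = (1 - a)/(1 - a**2), e.g., 2/3 for a = 0.5.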

  20. TerraFERMA: The Transparent Finite Element Rapid Model Assembler for multiphysics problems in Earth sciences

    NASA Astrophysics Data System (ADS)

    Wilson, Cian R.; Spiegelman, Marc; van Keken, Peter E.

    2017-02-01

    We introduce and describe a new software infrastructure TerraFERMA, the Transparent Finite Element Rapid Model Assembler, for the rapid and reproducible description and solution of coupled multiphysics problems. The design of TerraFERMA is driven by two computational needs in Earth sciences. The first is the need for increased flexibility in both problem description and solution strategies for coupled problems where small changes in model assumptions can lead to dramatic changes in physical behavior. The second is the need for software and models that are more transparent so that results can be verified, reproduced, and modified in a manner such that the best ideas in computation and Earth science can be more easily shared and reused. TerraFERMA leverages three advanced open-source libraries for scientific computation that provide high-level problem description (FEniCS), composable solvers for coupled multiphysics problems (PETSc), and an options handling system (SPuD) that allows the hierarchical management of all model options. TerraFERMA integrates these libraries into an interface that organizes the scientific and computational choices required in a model into a single options file from which a custom compiled application is generated and run. Because all models share the same infrastructure, models become more reusable and reproducible, while still permitting the individual researcher considerable latitude in model construction. TerraFERMA solves partial differential equations using the finite element method. It is particularly well suited for nonlinear problems with complex coupling between components. TerraFERMA is open-source and available at http://terraferma.github.io, which includes links to documentation and example input files.

  1. Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Shafer, Mary Morello

    Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…

  2. Computer Applications and Technology 105.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education and Training, Winnipeg.

    Designed to promote Manitoba students' familiarity with computer technology and their ability to interact with that technology, the Computer Applications and Technology 105 course is a one-credit course presented in 15 topical, non-sequential units that require 110-120 hours of instruction time. It has been developed with the assumption that each…

  3. The Influence of Computer Technology Learning Program on Attitudes toward Computers and Self-Esteem among Arab Dropout Youth.

    ERIC Educational Resources Information Center

    Romi, Shlomo; Zoabi, Houssien

    2003-01-01

    Describes a study that examined the attitudes of Arab dropout youth in Israel toward the use of computer technology and the influence of this use on their self-esteem. Results supported the assumptions that exposure to computer technology would change the attitudes of dropout adolescents toward computers to positive ones. (Contains 43 references.)…

  4. Robust estimators for speech enhancement in real environments

    NASA Astrophysics Data System (ADS)

    Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly

    2015-09-01

    Common statistical estimators for speech enhancement rely on several assumptions about the stationarity of speech signals and noise. These assumptions may not always be valid in real life owing to the nonstationary characteristics of speech and noise processes. We propose new estimators, based on existing ones, that incorporate rank-order statistics. The proposed estimators are better adapted to the nonstationary characteristics of speech signals and noise processes. Through computer simulations we show that the proposed estimators yield better performance in terms of objective metrics than known estimators when speech signals are contaminated with airport, babble, restaurant, and train-station noise.
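The rank-order statistics the abstract refers to can be illustrated with a running median, the simplest order-statistic filter; this is a generic sketch of the idea, not the authors' estimator:

```python
import numpy as np

def rank_order_smooth(values, window=5, rank=0.5):
    """Running rank-order filter over a 1-D signal.

    rank=0.5 selects the median of each window, which suppresses
    impulsive outliers without assuming the signal is stationary.
    Edges are handled by repeating the boundary samples.
    """
    half = window // 2
    padded = np.pad(values, half, mode='edge')
    out = np.empty(len(values), dtype=float)
    for i in range(len(values)):
        win = np.sort(padded[i:i + window])
        out[i] = win[int(rank * (window - 1))]  # order statistic of window
    return out
```

Unlike a moving average, the median filter passes slow trends through unchanged while removing isolated spikes, which is why order statistics suit nonstationary noise.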

  5. A radiosity-based model to compute the radiation transfer of soil surface

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Li, Yuguang

    2011-11-01

    A good understanding of the interactions of electromagnetic radiation with the soil surface is important for further improvement of remote sensing methods. In this paper, a radiosity-based analytical model for soil Directional Reflectance Factor (DRF) distributions was developed and evaluated. The model was specifically dedicated to the study of radiation transfer for soil surfaces under tillage practices. The soil was abstracted as two-dimensional U-shaped or V-shaped geometric structures with periodic macroscopic variations. The roughness of the simulated surfaces was expressed as the ratio of height to width for the U- and V-shaped structures. One assumption was that the shadowing of the soil surface, simulated by U- or V-shaped grooves, has a greater influence on the soil reflectance distribution than the scattering properties of basic soil particles of silt and clay. Another assumption was that the soil is a perfectly diffuse reflector at a microscopic level, which is a prerequisite for the application of the radiosity method. This radiosity-based analytical model was evaluated against a forward Monte Carlo ray-tracing model under the same structural scenes and identical spectral parameters. The statistics of the two models' BRF fitting results for several soil structures under the same conditions showed good agreement. Using the model, the physical mechanism of the soil bidirectional reflectance pattern was revealed.

  6. OpenSim Versus Human Body Model: A Comparison Study for the Lower Limbs During Gait.

    PubMed

    Falisse, Antoine; Van Rossom, Sam; Gijsbers, Johannes; Steenbrink, Frans; van Basten, Ben J H; Jonkers, Ilse; van den Bogert, Antonie J; De Groote, Friedl

    2018-05-29

    Musculoskeletal modeling and simulations have become popular tools for analyzing human movements. However, end-users are often not aware of underlying modeling and computational assumptions. This study investigates how these assumptions affect biomechanical gait analysis outcomes performed with Human Body Model and the OpenSim gait2392 model. We compared joint kinematics, kinetics, and muscle forces resulting from processing data from seven healthy adults with both models. Although outcome variables had similar patterns, there were statistically significant differences in joint kinematics (maximal difference: 9.8 ± 1.5 degrees in sagittal plane hip rotation), kinetics (maximal difference: 0.36 ± 0.10 N·m/kg in sagittal plane hip moment), and muscle forces (maximal difference: 8.51 ± 1.80 N/kg for psoas). These differences might be explained by differences in hip and knee joint center locations up to 2.4 ± 0.5 and 1.9 ± 0.2 cm in the postero-anterior and infero-superior directions, respectively, and by the offset in pelvic reference frames of about 10 degrees around the medio-lateral axis. Model choice may not influence the conclusions in clinical settings where the focus is on interpreting deviations from reference data but will affect the conclusions of mechanical analyses where the goal is to obtain accurate estimates of kinematics and loading.

  7. A Thin Lens Model for Charged-Particle RF Accelerating Gaps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Christopher K.

    Presented is a thin-lens model for an RF accelerating gap that considers general axial fields without energy dependence or other a priori assumptions. Both the cosine and sine transit time factors (i.e., Fourier transforms) are required, plus two additional functions: the Hilbert transforms of the transit-time factors. The combination yields a complex-valued Hamiltonian rotating in the complex plane with the synchronous phase. Using the Hamiltonians, the phase and energy gains are computed independently in the pre-gap and post-gap regions and then aligned using the asymptotic values of the wave number. Derivations of these results are outlined, examples are shown, and simulations with the model are presented.

  8. Advance finite element modeling of rotor blade aeroelasticity

    NASA Technical Reports Server (NTRS)

    Straub, F. K.; Sangha, K. B.; Panda, B.

    1994-01-01

    An advanced beam finite element has been developed for modeling rotor blade dynamics and aeroelasticity. This element is part of the Element Library of the Second Generation Comprehensive Helicopter Analysis System (2GCHAS). The element allows modeling of arbitrary rotor systems, including bearingless rotors. It accounts for moderately large elastic deflections, anisotropic properties, large frame motion for maneuver simulation, and allows for variable order shape functions. The effects of gravity, mechanically applied and aerodynamic loads are included. All kinematic quantities required to compute airloads are provided. In this paper, the fundamental assumptions and derivation of the element matrices are presented. Numerical results are shown to verify the formulation and illustrate several features of the element.

  9. Semi-Empirical Modeling of SLD Physics

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Potapczuk, Mark G.

    2004-01-01

    The effects of supercooled large droplets (SLD) in icing have been an area of much interest in recent years. As part of this effort, the assumptions used for ice accretion software have been reviewed. A literature search was performed to determine advances from other areas of research that could be readily incorporated. Experimental data in the SLD regime was also analyzed. A semi-empirical computational model is presented which incorporates first order physical effects of large droplet phenomena into icing software. This model has been added to the LEWICE software. Comparisons are then made to SLD experimental data that has been collected to date. Results will be presented for the comparison of water collection efficiency, ice shape and ice mass.

  10. On firework blasts and qualitative parameter dependency.

    PubMed

    Zohdi, T I

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given.

  11. On firework blasts and qualitative parameter dependency

    PubMed Central

    Zohdi, T. I.

    2016-01-01

    In this paper, a mathematical model is developed to qualitatively simulate the progressive time-evolution of a blast from a simple firework. Estimates are made for the blast radius that one can expect for a given amount of detonation energy and pyrotechnic display material. The model balances the released energy from the initial blast pulse with the subsequent kinetic energy and then computes the trajectory of the material under the influence of the drag from the surrounding air, gravity and possible buoyancy. Under certain simplifying assumptions, the model can be solved for analytically. The solution serves as a guide to identifying key parameters that control the evolving blast envelope. Three-dimensional examples are given. PMID:26997903
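The trajectory computation described above balances the initial kinetic energy against drag and gravity. A minimal sketch of one ejected piece of display material under quadratic air drag, with purely illustrative parameters (buoyancy neglected; this is not the paper's calibrated model):

```python
import numpy as np

def landing_distance(v0, angle_deg, m=0.01, Cd=0.47, A=1e-4,
                     rho=1.2, g=9.81, dt=1e-3):
    """Horizontal distance travelled by a particle launched at speed v0
    and elevation angle_deg, integrated with forward Euler under gravity
    and quadratic drag F_d = -0.5 * rho * Cd * A * |v| * v.
    """
    theta = np.radians(angle_deg)
    v = np.array([v0 * np.cos(theta), v0 * np.sin(theta)])
    p = np.array([0.0, 0.0])
    k = 0.5 * rho * Cd * A / m  # drag acceleration coefficient per unit mass
    while p[1] >= 0.0:          # integrate until the particle lands
        a = np.array([0.0, -g]) - k * np.linalg.norm(v) * v
        v = v + dt * a
        p = p + dt * v
    return p[0]
```

Because drag always removes kinetic energy, the computed range falls below the vacuum value v0**2 * sin(2*theta) / g, which is one way the blast envelope shrinks relative to the drag-free estimate.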

  12. The Use of Object-Oriented Analysis Methods in Surety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.

    1999-05-01

    Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.

  13. Modeling axisymmetric flow and transport

    USGS Publications Warehouse

    Langevin, C.D.

    2008-01-01

    Unmodified versions of common computer programs such as MODFLOW, MT3DMS, and SEAWAT that use Cartesian geometry can accurately simulate axially symmetric ground water flow and solute transport. Axisymmetric flow and transport are simulated by adjusting several input parameters to account for the increase in flow area with radial distance from the injection or extraction well. Logarithmic weighting of interblock transmissivity, a standard option in MODFLOW, can be used for axisymmetric models to represent the linear change in hydraulic conductance within a single finite-difference cell. Results from three test problems (ground water extraction, an aquifer push-pull test, and upconing of saline water into an extraction well) show good agreement with analytical solutions or with results from other numerical models designed specifically to simulate the axisymmetric geometry. Axisymmetric models are not commonly used but can offer an efficient alternative to full three-dimensional models, provided the assumption of axial symmetry can be justified. For the upconing problem, the axisymmetric model was more than 1000 times faster than an equivalent three-dimensional model. Computational gains with the axisymmetric models may be useful for quickly determining appropriate levels of grid resolution for three-dimensional models and for estimating aquifer parameters from field tests.
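The parameter adjustment the article describes amounts to multiplying cell properties by the growth of flow area with radial distance. A sketch of that scaling, assuming the common cell-center convention (the article discusses the exact weighting choices, including logarithmic interblock transmissivity):

```python
import numpy as np

def axisymmetric_scaling(r_edges, k, ss):
    """Scale hydraulic conductivity k and specific storage ss of a
    Cartesian model column so it represents rings of an axisymmetric
    aquifer.

    r_edges: radial coordinates of cell edges (length ncells + 1).
    Each cell's properties are multiplied by the circumference at the
    cell-center radius, accounting for flow area growing with r.
    """
    r_centers = 0.5 * (r_edges[:-1] + r_edges[1:])
    area_factor = 2.0 * np.pi * r_centers  # ring circumference per cell
    return k * area_factor, ss * area_factor
```

Feeding the scaled arrays to an unmodified Cartesian code then reproduces radial flow toward or away from the well on the model axis.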

  14. Unsteady wind loads for TMT: replacing parametric models with CFD

    NASA Astrophysics Data System (ADS)

    MacMartin, Douglas G.; Vogiatzis, Konstantinos

    2014-08-01

    Unsteady wind loads due to turbulence inside the telescope enclosure result in image jitter and higher-order image degradation due to M1 segment motion. Advances in computational fluid dynamics (CFD) allow unsteady simulations of the flow around realistic telescope geometry, in order to compute the unsteady forces due to wind turbulence. These simulations can then be used to understand the characteristics of the wind loads. Previous estimates used a parametric model based on a number of assumptions about the wind characteristics, such as a von Karman spectrum and frozen-flow turbulence across M1, and relied on CFD only to estimate parameters such as mean wind speed and turbulent kinetic energy. Using the CFD-computed forces avoids the need for assumptions regarding the flow. We discuss here both the loads on the telescope that lead to image jitter, and the spatially-varying force distribution across the primary mirror, using simulations with the Thirty Meter Telescope (TMT) geometry. The amplitude, temporal spectrum, and spatial distribution of wind disturbances are all estimated; these are then used to compute the resulting image motion and degradation. There are several key differences relative to our earlier parametric model. First, the TMT enclosure provides sufficient wind reduction at the top end (near M2) to render the larger cross-sectional structural areas further inside the enclosure (including M1) significant in determining the overall image jitter. Second, the temporal spectrum is not von Karman as the turbulence is not fully developed; this applies both in predicting image jitter and M1 segment motion. And third, for loads on M1, the spatial characteristics are not consistent with propagating a frozen-flow turbulence screen across the mirror: Frozen flow would result in a relationship between temporal frequency content and spatial frequency content that does not hold in the CFD predictions. 
Incorporating the new estimates of wind load characteristics into TMT response predictions leads to revised estimates of the response of TMT to wind turbulence, and validates the aerodynamic design of the enclosure.

  15. Validations of CFD against detailed velocity and pressure measurements in water turbine runner flow

    NASA Astrophysics Data System (ADS)

    Nilsson, H.; Davidson, L.

    2003-03-01

    This work compares CFD results with experimental results of the flow in two different kinds of water turbine runners. The runners studied are the GAMM Francis runner and the Hölleforsen Kaplan runner. The GAMM Francis runner was used as a test case in the 1989 GAMM Workshop on 3D Computation of Incompressible Internal Flows, where the geometry and detailed best efficiency measurements were made available. In addition to the best efficiency measurements, four off-design operating condition measurements are used for the comparisons in this work. The Hölleforsen Kaplan runner was used at the 1999 Turbine 99 and 2001 Turbine 99 - II workshops on draft tube flow, where detailed measurements made after the runner were used as inlet boundary conditions for the draft tube computations. The measurements are used here to validate computations of the flow in the runner. The computations are made in a single runner blade passage where the inlet boundary conditions are obtained from an extrapolation of detailed measurements (GAMM) or from separate guide vane computations (Hölleforsen). The steady flow in a rotating co-ordinate system is computed. The effects of turbulence are modelled by a low-Reynolds-number k- turbulence model, which removes some of the assumptions of the commonly used wall function approach and brings the computations one step further.

  16. Internal Models, Vestibular Cognition, and Mental Imagery: Conceptual Considerations.

    PubMed

    Mast, Fred W; Ellis, Andrew W

    2015-01-01

    Vestibular cognition has recently gained attention. Despite numerous experimental and clinical demonstrations, it is not yet clear what vestibular cognition really is. For future research in vestibular cognition, adopting a computational approach will make it easier to explore the underlying mechanisms. Indeed, most modeling approaches in vestibular science include a top-down or a priori component. We review recent Bayesian optimal observer models, and discuss in detail the conceptual value of prior assumptions, likelihood and posterior estimates for research in vestibular cognition. We then consider forward models in vestibular processing, which are required in order to distinguish between sensory input that is induced by active self-motion, and sensory input that is due to passive self-motion. We suggest that forward models are used not only in the service of estimating sensory states but they can also be drawn upon in an offline mode (e.g., spatial perspective transformations), in which interaction with sensory input is not desired. A computational approach to vestibular cognition will help to discover connections across studies, and it will provide a more coherent framework for investigating vestibular cognition.

  17. Excessive computer game playing: evidence for addiction and aggression?

    PubMed

    Grüsser, S M; Thalemann, R; Griffiths, M D

    2007-04-01

    Computer games have become an ever-increasing part of many adolescents' day-to-day lives. Coupled with this phenomenon, reports of excessive gaming (computer game playing), labeled "computer/video game addiction", have been discussed in the popular press as well as in recent scientific research. The aim of the present study was to investigate the addictive potential of gaming as well as the relationship between excessive gaming and aggressive attitudes and behavior. A sample comprising 7069 gamers answered two questionnaires online. Data revealed that 11.9% of participants (840 gamers) fulfilled diagnostic criteria of addiction concerning their gaming behavior, while there was only weak evidence for the assumption that aggressive behavior is interrelated with excessive gaming in general. The results of this study support the assumption that playing games even without monetary reward can meet the criteria of addiction. Hence, the addictive potential of gaming should be taken into consideration regarding prevention and intervention.

  18. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large-magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert-determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving high spatial accuracy for thoracic CT registration. PMID:22797602
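Least median of squares fitting itself is easy to sketch: fit candidate models to random minimal subsets and keep the candidate whose median squared residual is smallest, so that up to about half the points can be outliers without corrupting the fit. A generic straight-line version (illustrative only; the paper applies the idea to 3-D voxel point matches):

```python
import numpy as np

def lms_line_fit(x, y, n_trials=500, seed=0):
    """Least-median-of-squares fit of y = a*x + b by random two-point
    sampling. Returns the (a, b) minimizing the median squared residual
    over all points, which is robust to gross outliers.
    """
    rng = np.random.default_rng(seed)
    best, best_med = (0.0, 0.0), np.inf
    n = len(x)
    for _ in range(n_trials):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:
            continue  # degenerate pair, cannot define a slope
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best_med, best = med, (a, b)
    return best
```

An ordinary least-squares fit on the same data would be dragged toward the outliers; minimizing the median instead of the sum makes the erroneous matches irrelevant to the chosen model, which is the property the LFC filtering step exploits.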

  19. A stratospheric aerosol model with perturbations induced by the space shuttle particulate effluents

    NASA Technical Reports Server (NTRS)

    Rosen, J. M.; Hofmann, D. J.

    1977-01-01

    A one-dimensional steady-state stratospheric aerosol model is developed and used to assess the perturbations caused by including the expected space shuttle particulate effluents. Two approaches to the basic modeling effort were taken: in one, enough simplifying assumptions were introduced that a more or less exact solution to the descriptive equations could be obtained; in the other, very few simplifications were made and a computer technique was used to solve the equations. The most complex form of the model contains the effects of sedimentation, diffusion, particle growth, and coagulation. Results of the perturbation calculations show that there will probably be an immeasurably small increase in the stratospheric aerosol concentration for particles larger than about 0.15 micrometer radius.

  20. Modeling of Nonlinear Dynamics of a Powered Paraglider

    NASA Astrophysics Data System (ADS)

    Watanabe, Masahito; Ochi, Yoshimasa

    This paper presents a nonlinear dynamic model of a powered paraglider (PPG). The PPG is composed of a canopy and a payload with a propelling unit. The canopy is connected with the payload at two points. The model has been derived as a state vector equation under the assumption that the canopy has six degrees of freedom (DOF) and the payload has two DOF of pitching and yawing motions relative to the canopy. Friction at the connecting points between the canopy and the payload is taken into account. Time responses of the PPG without thrust have been computed using the model and the results are compared with flight experiment data. Simulation of a level flight with thrust has also been conducted.

  1. Robust Representation of Integrated Surface-subsurface Hydrology at Watershed Scales

    NASA Astrophysics Data System (ADS)

    Painter, S. L.; Tang, G.; Collier, N.; Jan, A.; Karra, S.

    2015-12-01

    A representation of integrated surface-subsurface hydrology is the central component of process-rich watershed models that are emerging as alternatives to traditional reduced-complexity models. These physically based systems are important for assessing potential impacts of climate change and human activities on groundwater-dependent ecosystems and on water supply and quality. Integrated surface-subsurface models typically couple three-dimensional solutions for variably saturated flow in the subsurface with the kinematic- or diffusion-wave equation for surface flows. The computational scheme for coupling the surface and subsurface systems is key to the robustness, computational performance, and ease of implementation of the integrated system. A new, robust approach for coupling the subsurface and surface systems is developed from the assumption that the vertical gradient in head is negligible at the surface. This tight-coupling assumption allows the surface flow system to be incorporated directly into the subsurface system; effects of surface flow and surface water accumulation are represented as modifications to the subsurface flow and accumulation terms but are not triggered until the subsurface pressure reaches a threshold value corresponding to the appearance of water on the surface. The new approach has been implemented in the highly parallel PFLOTRAN (www.pflotran.org) code. Several synthetic examples and three-dimensional examples from the Walker Branch Watershed in Oak Ridge, TN demonstrate the utility and robustness of the new approach using unstructured computational meshes. Representation of solute transport in the new approach is also discussed. Notice: This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy.
The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for the United States Government purposes.
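    The threshold-triggered coupling described above can be sketched in miniature. The function below is a hypothetical illustration (a hydrostatic excess-pressure-to-depth mapping with an assumed constant), not PFLOTRAN's actual formulation:

```python
RHO_G = 9810.0  # assumed water density times gravity, Pa per metre of depth

def ponded_depth(p_surface, p_atm=101325.0):
    """Surface water depth implied by the pressure in the top subsurface cell.

    Below the atmospheric threshold no surface water exists, so surface-flow
    terms stay switched off; above it, the excess pressure maps hydrostatically
    to a ponded depth that feeds the surface accumulation term.
    """
    return max(0.0, (p_surface - p_atm) / RHO_G)
```

The appeal of this formulation is that the surface system needs no separate unknowns: a single pressure variable per surface cell covers both the unsaturated and the ponded regime.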

  2. A simplified computer program for the prediction of the linear stability behavior of liquid propellant combustors

    NASA Technical Reports Server (NTRS)

    Mitchell, C. E.; Eckert, K.

    1979-01-01

    A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and the analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with a concentrated combustion zone at the injector, a nozzle, and finite mean flow; the combustion response is represented either by an acoustic admittance or by the sensitive time lag theory. The resulting partial differential equations are combined into two governing integral equations by use of the Green's function method. These equations are solved using a successive-approximation technique for the small-amplitude (linear) case. The computational method used, as well as the various user options available, is discussed. Finally, a flow diagram, sample input and output for a typical application, and a complete program listing for program MODULE are presented.

  3. An efficient direct method for image registration of flat objects

    NASA Astrophysics Data System (ADS)

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research with many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit pixel intensities without resorting to image features; image-based deformation models are a general direct method for aligning images of deformable objects in 3D space. Nevertheless, such methods are poorly suited to registering images of rigid 3D objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model suitable for image alignment of rigid flat objects under various illumination models. The brightness-constancy assumption is used to reconstruct the optimal geometrical transformation. Computer simulation results illustrate the performance of the proposed algorithm in computing the correspondence between the pixels of two images.
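    A minimal sketch of direct (intensity-based) alignment under the brightness-constancy assumption: a brute-force search over integer translations minimizing the sum of squared brightness differences. The function name and the translation-only motion model are illustrative assumptions; the paper's model additionally handles varying illumination:

```python
import numpy as np

def align_translation(src, dst, max_shift=3):
    """Estimate the integer translation mapping src onto dst.

    Direct method: no features are extracted; every candidate shift is scored
    by the sum of squared intensity differences (brightness constancy), and
    the minimizer is returned as (dy, dx).
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(src, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - dst) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Real direct methods replace the exhaustive search with gradient-based optimization over a richer transformation (affine, homography), but the scoring principle is the same.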

  4. The influence of fiber orientation on the equilibrium properties of neutral and charged biphasic tissues.

    PubMed

    Nagel, Thomas; Kelly, Daniel J

    2010-11-01

    Constitutive models facilitate investigation into the load-bearing mechanisms of biological tissues and may aid attempts to engineer tissue replacements. In soft tissue models, a commonly made assumption is that collagen fibers can only bear tensile loads. Previous computational studies have demonstrated that radially aligned fibers stiffen a material most in unconfined compression by limiting lateral expansion, while vertically aligned fibers buckle under the compressive loads. In this short communication, we show that in conjunction with swelling, these intuitive statements can be violated at small strains. Under such conditions, a tissue with fibers aligned parallel to the direction of load initially provides the greatest resistance to compression. The results are further put into the context of a Benninghoff architecture for articular cartilage. The predictions of this computational study demonstrate the effects of varying fiber orientations and of an initial tare strain on the apparent material parameters obtained from unconfined compression tests of charged tissues.

  5. FMRI group analysis combining effect estimates and their variances

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.

    2012-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. 
Our efficient implementation makes this approach practical. We recommend its use in lieu of the less accurate approach in the conventional group analysis. PMID:22245637
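    The core idea of weighting each subject's effect estimate by the precision from the individual-subject analysis can be sketched with a fixed-effects toy. MEMA itself also models cross-subject variance and outliers; the function below is an illustrative simplification:

```python
import math

def group_effect(betas, variances):
    """Precision-weighted group effect estimate and its z-statistic.

    Each subject's effect estimate (beta) is weighted by the inverse of its
    within-subject variance, so noisy subjects contribute less; the standard
    error of the combined estimate is 1/sqrt(sum of weights).
    """
    w = [1.0 / v for v in variances]
    b = sum(wi * bi for wi, bi in zip(w, betas)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return b, b / se
```

A conventional group analysis would instead average the betas with equal weights, discarding exactly the per-subject precision information that this weighting exploits.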

  6. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    PubMed

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothing at the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted using a digital XCAT phantom and a physical phantom. Experimental results show that the PWLS-MSTV algorithm has noticeable gains over existing algorithms in terms of noise reduction, contrast-to-noise ratio and edge preservation.
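    The PWLS-with-TV idea (without the Mumford-Shah coupling) can be sketched in 1-D with a smoothed TV penalty minimized by gradient descent. The function and its parameters are illustrative assumptions, not the paper's reconstruction on CT projection data:

```python
import numpy as np

def pwls_tv_denoise(y, weights, lam=0.5, eps=1e-2, iters=1000, step=0.05):
    """Penalized weighted least squares with a smoothed total-variation penalty:

        minimize  sum_i w_i (x_i - y_i)^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps)

    solved by plain gradient descent. The eps term smooths |d| so the gradient
    exists at d = 0; small steps keep the iteration stable.
    """
    y0 = np.asarray(y, dtype=float)
    w = np.asarray(weights, dtype=float)
    x = y0.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)        # derivative of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g                   # each difference pulls its endpoints together
        tv_grad[1:] += g
        x = x - step * (2.0 * w * (x - y0) + lam * tv_grad)
    return x
```

The data-dependent weights are what make this "weighted" least squares; in CT they would encode the noise level of each measurement, whereas here they are just per-sample confidences.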

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaurov, Alexander A., E-mail: kaurov@uchicago.edu

    The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. It then allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, while reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.

  8. SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?

    NASA Astrophysics Data System (ADS)

    Rührmair, Ulrich

    This paper discusses a new cryptographic primitive termed a SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) that possesses a binary description allowing its (slow) public simulation and prediction. Besides this public-key-like functionality, SIMPL systems have another advantage: no secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random analog features, as is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune to many known hardware and software attacks, including malware, side-channel, invasive, and modeling attacks.

  9. An approximation method for improving dynamic network model fitting.

    PubMed

    Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M

    There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
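    For the simplest Bernoulli case, the stationarity logic behind such an approximation can be sketched directly: with geometric tie durations, the per-step dissolution probability is the reciprocal of the mean duration, and the formation probability follows from equating expected formations and dissolutions. A toy sketch under those assumptions (not the authors' ERGM machinery):

```python
def bernoulli_stergm_params(prevalence, mean_duration):
    """Per-step formation/dissolution probabilities of a Bernoulli dynamic
    network from cross-sectional tie prevalence and mean tie duration.

    Geometric durations give dissolution probability q = 1 / mean_duration;
    stationarity requires prevalence * q = (1 - prevalence) * p_form.
    """
    q = 1.0 / mean_duration
    p_form = prevalence * q / (1.0 - prevalence)
    return p_form, q

def stationary_prevalence(p_form, q):
    """Long-run tie prevalence of the two-state chain, for consistency checks."""
    return p_form / (p_form + q)
```

Note the failure mode the abstract mentions: as mean_duration grows (little change per step), both probabilities approach zero and direct estimation from step-to-step changes becomes unstable, while this duration-based route stays well conditioned.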

  10. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    NASA Technical Reports Server (NTRS)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.; Vonfrese, R. R. B.

    1985-01-01

    Flat-Earth modeling is a desirable alternative to the complex spherical-Earth modeling process. The two methods were compared using 2 1/2-dimensional flat-Earth and spherical modeling to compute gravity and scalar magnetic anomalies along profiles perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Comparison was achieved with percent-error computations ((spherical - flat)/spherical) at critical anomaly points. At the peak gravity anomaly value, errors are less than + or - 5% for all prisms. At 1/2 and 1/10 of the peak, errors are generally less than 10% and 40%, respectively, increasing to these values with longer and wider prisms at higher altitudes. For magnetics, the errors at critical anomaly points are less than -10% for all prisms, attaining these magnitudes with longer and wider prisms at higher altitudes. In general, in both gravity and magnetic modeling, errors increase greatly for prisms wider than 500 km, although gravity modeling is more sensitive than magnetic modeling to spherical-Earth effects. Preliminary modeling of both satellite gravity and magnetic anomalies using flat-Earth assumptions is justified considering the errors caused by uncertainties in isolating anomalies.
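    The comparison metric used above is a simple relative discrepancy, sketched here for clarity (function name assumed):

```python
def percent_error(spherical, flat):
    """Relative discrepancy of the flat-Earth anomaly against the spherical
    reference, in percent: 100 * (spherical - flat) / spherical."""
    return 100.0 * (spherical - flat) / spherical
```

With this sign convention, a positive value means the flat-Earth model underestimates the spherical result, matching the abstract's reported bounds.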

  11. Analysis of Computational Models of Shaped Charges for Jet Formation and Penetration

    NASA Astrophysics Data System (ADS)

    Haefner, Jonah; Ferguson, Jim

    2016-11-01

    Shaped charges came into use during the Second World War, demonstrating the immense penetration power of explosively formed projectiles, and have since become a tool used by nearly every nation in the world. Penetration is critically dependent on how the metal liner is collapsed into a jet. The theory of jet formation has been studied in depth since the late 1940s, based on simple models that neglect the strength and compressibility of the metal liner. Although attempts have been made to improve these models, simplifying assumptions limit the understanding of how material properties affect jet formation. With a wide range of material and strength models available for simulation, a validation study was necessary to guide code users in choosing models for shaped-charge simulations. Using PAGOSA, a finite-volume Eulerian hydrocode developed by Los Alamos National Laboratory to model hypervelocity materials and strong shock waves, together with experimental data, we investigated the effects of various equations of state and material strength models on jet formation and penetration of a steel target. Comparing PAGOSA simulations against modern experimental data, we analyzed the strengths and weaknesses of the available computational models. LA-UR-16-25639 Los Alamos National Laboratory.

  12. The role of finite displacements in vocal fold modeling.

    PubMed

    Chang, Siyuan; Tian, Fang-Bao; Luo, Haoxiang; Doyle, James F; Rousseau, Bernard

    2013-11-01

    Human vocal folds experience flow-induced vibrations during phonation. In previous computational models, the vocal fold dynamics has been treated with linear elasticity theory in which both the strain and the displacement of the tissue are assumed to be infinitesimal (referred to as model I). The effect of the nonlinear strain, or geometric nonlinearity, caused by finite displacements is yet not clear. In this work, a two-dimensional model is used to study the effect of geometric nonlinearity (referred to as model II) on the vocal fold and the airflow. The result shows that even though the deformation is under 1 mm, i.e., less than 10% of the size of the vocal fold, the geometric nonlinear effect is still significant. Specifically, model I underpredicts the gap width, the flow rate, and the impact stress on the medial surfaces as compared to model II. The study further shows that the differences are caused by the contact mechanics and, more importantly, the fluid-structure interaction that magnifies the error from the small-displacement assumption. The results suggest that using the large-displacement formulation in a computational model would be more appropriate for accurate simulations of the vocal fold dynamics.
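    The gap between the two theories can be illustrated with a uniaxial example: for a stretch ratio lambda, linear elasticity uses the engineering strain lambda - 1, while the finite-displacement (geometrically nonlinear) formulation uses the Green-Lagrange strain (lambda^2 - 1)/2. A small sketch, illustrative only; the paper's model is two-dimensional and fluid-coupled:

```python
def axial_strains(stretch):
    """Infinitesimal (engineering) vs Green-Lagrange strain for a uniaxial
    stretch ratio lambda = L / L0 (toy comparison of model I vs model II)."""
    linear = stretch - 1.0
    green = 0.5 * (stretch * stretch - 1.0)
    return linear, green
```

At a 10% stretch the two measures already differ by 5% relative, and the abstract's point is that fluid-structure coupling and contact can magnify such seemingly small discrepancies.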

  13. Coupling fast fluid dynamics and multizone airflow models in Modelica Buildings library to simulate the dynamics of HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Wei; Sevilla, Thomas Alonso; Zuo, Wangda

    Historically, multizone models have been widely used in building airflow and energy performance simulations due to their fast computing speed. However, multizone models assume that the air in a room is well mixed, which limits their applicability. In specific rooms where this assumption fails, computational fluid dynamics (CFD) models may be an alternative option. Previous research has mainly focused on coupling CFD models and multizone models to study airflow in large spaces. While significant, most of these analyses did not consider the coupled simulation of building airflow with the building's heating, ventilation, and air-conditioning (HVAC) systems. This paper tries to fill the gap by integrating models for HVAC systems with coupled multizone and CFD simulations of airflow, using the Modelica simulation platform. To improve computational efficiency, we incorporated a simplified CFD model named fast fluid dynamics (FFD). We first introduce the data synchronization strategy and its implementation in Modelica. Then, we verify the implementation in two case studies involving an isothermal and a non-isothermal flow by comparing model simulations to experimental data. Afterward, we study three more realistic cases in which a variable air volume (VAV) terminal box and a VAV system are attached to the previous flows, to assess the capability of the models in studying the dynamic control of HVAC systems. Finally, we discuss further research needs for coupled simulation using these models.

  14. How Do Tissues Respond and Adapt to Stresses Around a Prosthesis? A Primer on Finite Element Stress Analysis for Orthopaedic Surgeons

    PubMed Central

    Brand, Richard A; Stanford, Clark M; Swan, Colby C

    2003-01-01

    Joint implant design clearly affects long-term outcome. While many implant designs have been empirically based, finite element analysis has the potential to identify beneficial and deleterious features prior to clinical trials. Finite element analysis is a powerful analytic tool allowing computation of the stress and strain distribution throughout an implant construct. Whether it is useful depends upon many assumptions and details of the model. Chief among them is whether stresses or strains computed under a limited set of loading conditions relate to outcome, since ultimate failure is related to biological factors in addition to mechanical ones, and since the mechanical causes of failure reflect the entire load history rather than a few loading conditions. Newer approaches can minimize this and the many other model limitations. If the surgeon is to critically and properly interpret the results in scientific articles and sales literature, he or she must have a fundamental understanding of finite element analysis. We outline here the major capabilities of finite element analysis, as well as its assumptions and limitations. PMID:14575244

  15. Walking through the statistical black boxes of plant breeding.

    PubMed

    Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin

    2016-10-01

    The main statistical procedures in plant breeding are based on Gaussian process and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
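    One of the "black boxes" mentioned above, mixed-model (GBLUP/ridge-style) marker effect estimation, can be sketched as follows. This is a deliberately simplified illustration: the shrinkage parameter is set from an assumed heritability rather than REML-estimated variance components, and centering stands in for fitting the intercept:

```python
import numpy as np

def gblup(Z, y, h2=0.5):
    """Ridge/GBLUP-style marker effect estimates for y = mu + Z u + e.

    lambda = (1 - h2) / h2 plays the role of the residual-to-genetic variance
    ratio; the mixed-model normal equations then reduce to a ridge solve.
    """
    lam = (1.0 - h2) / h2
    Zc = Z - Z.mean(axis=0)        # center marker columns (absorbs the intercept)
    yc = y - y.mean()
    # (Zc'Zc + lam I) u = Zc'yc  -- the mixed-model equations for this toy model
    u = np.linalg.solve(Zc.T @ Zc + lam * np.eye(Z.shape[1]), Zc.T @ yc)
    return u
```

Genomic prediction for a new line is then simply its centered marker profile times the estimated effects; real pipelines differ mainly in how the variance components and relationship matrices are built.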

  16. Design verification of SIFT

    NASA Technical Reports Server (NTRS)

    Moser, Louise; Melliar-Smith, Michael; Schwartz, Richard

    1987-01-01

    A SIFT reliable aircraft control computer system, designed to meet the ultrahigh reliability required for safety-critical flight control applications through processor replication and voting, was constructed by SRI and delivered to NASA Langley for evaluation in the AIRLAB. To increase confidence in the reliability projections for SIFT, produced by a Markov reliability model, SRI constructed a formal specification defining the meaning of reliability in the context of flight control. A further series of specifications defined, in increasing detail, the design of SIFT down to pre- and post-conditions on Pascal code procedures. Mechanically checked mathematical proofs were constructed to demonstrate that the more detailed design specifications for SIFT do indeed imply the formal reliability requirement. An additional specification defined some of the assumptions made about SIFT by the Markov model, and further proofs were constructed to show that these assumptions, as expressed by that specification, do indeed follow from the more detailed design specifications for SIFT. This report provides an outline of the methodology used for this hierarchical specification and proof and describes the various specifications and proofs performed.

  17. Evidences of trapping in tungsten and implications for plasma-facing components

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.; Anderl, R. A.; Holland, D. F.

    Trapping effects, including significant delays in permeation saturation, abrupt changes in permeation rate associated with temperature changes, and larger-than-expected inventories of hydrogen isotopes in the material, were seen in implantation-driven permeation experiments using 25- and 50-micron-thick tungsten foils at temperatures of 638 to 825 K. Computer models that simulate permeation transients reproduce the steady-state permeation and reemission behavior of these experiments with expected values of the material parameters. However, the transient time characteristics could not be successfully simulated without assuming traps of substantial binding energy and concentration. An analytical model based on the assumption of thermodynamic equilibrium between trapped hydrogen atoms and a comparatively low mobile-atom concentration successfully accounts for the observed behavior. Using steady-state and transient permeation data from experiments at different temperatures, the effective trap binding energy may be inferred. We analyze a tungsten-coated divertor plate design representative of those proposed for ITER and ARIES and consider the implications for tritium permeation and retention if the same trapping we observed were present in that tungsten. Inventory increases of several orders of magnitude may result.

  18. ASSIST user manual

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.; Boerschlein, David P.

    1995-01-01

    Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all the states and transitions in a complex system model can be devastatingly tedious and error prone. The Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST) computer program allows the user to describe the semi-Markov model in a high-level language. Instead of listing the individual model states, the user specifies the rules governing the behavior of the system, and these are used to generate the model automatically. A few statements in the abstract language can describe a very large, complex model. Because no assumptions are made about the system being modeled, ASSIST can be used to generate models describing the behavior of any system. The ASSIST program and its input language are described and illustrated by examples.
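    The rule-based generation idea can be illustrated in miniature: rather than enumerating states, supply transition rules and let the reachable state space be expanded by breadth-first search. This hypothetical sketch is not ASSIST's input language, merely the underlying principle:

```python
from collections import deque

def generate_states(initial, rules):
    """Expand a Markov state space from behavior rules instead of a state list.

    Each rule maps a state to a list of successor states; the reachable states
    and the transition list are generated automatically by breadth-first search.
    """
    states = {initial}
    transitions = []
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for rule in rules:
            for nxt in rule(s):
                transitions.append((s, nxt))
                if nxt not in states:
                    states.add(nxt)
                    queue.append(nxt)
    return states, transitions

# Example rule: state = (working, failed) processors; any working one may fail.
def fail_rule(state):
    working, failed = state
    return [(working - 1, failed + 1)] if working > 0 else []
```

A few such rules can expand into a very large model, which is exactly the leverage the abstract describes: the user maintains the rules, not the enumeration.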

  19. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  1. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
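    The iterative scheme can be sketched in a scalar linear-Gaussian toy problem: the accurate forward map A is replaced by a cheaper surrogate B, the modeling error (A - B)x is re-estimated from samples of the current posterior, and the Gaussian inversion is repeated. All names and the scalar setting are illustrative assumptions, not the paper's algorithm in full:

```python
import numpy as np

def iterate_model_error(y, A, B, noise_var, prior_var, n_iter=10, n_samp=2000, seed=0):
    """Iterative modeling-error update for a scalar linear Gaussian toy problem.

    Accurate model: y = A x + e. Cheap surrogate: y = B x + (m + e), where
    m = (A - B) x is the modeling error. Starting from the prior N(0, prior_var),
    we alternately (i) estimate the mean/variance of m by sampling x from the
    current posterior, and (ii) redo the Gaussian inversion with the surrogate
    B plus that error model.
    """
    rng = np.random.default_rng(seed)
    mu, var = 0.0, prior_var            # current Gaussian estimate of x
    for _ in range(n_iter):
        xs = rng.normal(mu, np.sqrt(var), n_samp)
        m = (A - B) * xs                # samples of the modeling error
        m_mean, m_var = m.mean(), m.var()
        tot = noise_var + m_var         # total variance of (m + e) around m_mean
        var = 1.0 / (1.0 / prior_var + B * B / tot)
        mu = var * B * (y - m_mean) / tot
    return mu, var
```

In this toy, inverting with the surrogate alone would give x = y / B, a badly biased answer; folding the re-estimated error statistics back in pulls the estimate toward the truth at the cost of a few thousand cheap samples rather than repeated accurate-model solves.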

  2. Role of suprathermal electrons during nanosecond laser energy deposit in fused silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grua, P.; Hébert, D.; Lamaignère, L.

    2014-08-25

An accurate description of the interaction between a nanosecond laser pulse and a wide band gap dielectric, such as fused silica, requires an understanding of the energy deposit and of the temperature changes it induces in the material. In order to identify the fundamental processes involved in laser-matter interaction, we have used a 1D computational model that allows us to describe a wide set of physical mechanisms and is intended for comparison with specially designed “1D experiments.” We have pointed out that suprathermal electrons are very likely implicated in heat conduction, and this assumption has allowed the model to reproduce the experiments.

  3. Modeling the thickness dependence of the magnetic phase transition temperature in thin FeRh films

    NASA Astrophysics Data System (ADS)

    Ostler, Thomas Andrew; Barton, Craig; Thomson, Thomas; Hrkac, Gino

    2017-02-01

    FeRh and its first-order phase transition can open new routes for magnetic hybrid materials and devices under the assumption that it can be exploited in ultra-thin-film structures. Motivated by experimental measurements showing an unexpected increase in the phase transition temperature with decreasing thickness of FeRh on top of MgO, we develop a computational model to investigate strain effects of FeRh in such magnetic structures. Our theoretical results show that the presence of the MgO interface results in a strain that changes the magnetic configuration which drives the anomalous behavior.

  4. Signalling and obfuscation for congestion control

    NASA Astrophysics Data System (ADS)

    Mareček, Jakub; Shorten, Robert; Yu, Jia Yuan

    2015-10-01

    We aim to reduce the social cost of congestion in many smart city applications. In our model of congestion, agents interact over limited resources after receiving signals from a central agent that observes the state of congestion in real time. Under natural models of agent populations, we develop new signalling schemes and show that by introducing a non-trivial amount of uncertainty in the signals, we reduce the social cost of congestion, i.e., improve social welfare. The signalling schemes are efficient in terms of both communication and computation, and are consistent with past observations of the congestion. Moreover, the resulting population dynamics converge under reasonable assumptions.

  5. Photodarkening kinetics in a high-power YDFA versus CW or short-pulse seed conditions

    NASA Astrophysics Data System (ADS)

    Jolly, Alain; Vinçont, Cyril; Boullet, Johan

    2017-02-01

We propose an innovative model to describe the kinetics of the competing photo-darkening and photo-bleaching phenomena in high-power, ytterbium-doped fibre amplifiers. The model makes use of aggregated species of trivalent ytterbium and divalent ions, which act primarily as efficient colour centres. This ensures multi-photon excitation, partly from the pump and partly from the signal. The fit of numerical computations to dedicated experiments helps to validate our theoretical assumptions and the definition of the physics involved. Potential applications of this study include further discussions with fibre manufacturers on the selection of processing options, and the optimization of operating conditions.

  6. Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca

    2017-06-01

Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the modeling and simulation of engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, and errors induced by modeling assumptions. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate uncertainty through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely only on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance is demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
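The multilevel MC idea (a plain estimate on the coarsest level plus cheap corrections between successive resolutions) can be illustrated on a toy problem; the ODE, level range, and sample counts below are assumptions for illustration only, not the PSAAP II solver:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def payoff(x0, level):
    """Euler discretisation of dx/dt = -x on [0, 1] with 2**level steps."""
    n_steps = 2 ** level
    return x0 * (1.0 - 1.0 / n_steps) ** n_steps

L0, L = 2, 6
# More samples on coarse (cheap) levels, fewer on fine (expensive) ones.
samples = {lvl: 4000 // 2 ** (lvl - L0) for lvl in range(L0, L + 1)}

# Coarsest level: plain MC estimate of E[P_L0] over a random initial condition.
x0 = 1.0 + 0.1 * rng.normal(size=samples[L0])
estimate = payoff(x0, L0).mean()

# Correction levels: E[P_l - P_{l-1}], each from its own independent samples.
for lvl in range(L0 + 1, L + 1):
    x0 = 1.0 + 0.1 * rng.normal(size=samples[lvl])
    estimate += (payoff(x0, lvl) - payoff(x0, lvl - 1)).mean()

print(estimate)
```

Since E[x0] = 1, the exact answer is exp(-1); the telescoping sum reaches the fine-level accuracy while concentrating most samples on the cheap coarse level, which is the source of the speed-up.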

  7. Development of a New Methodology for Computing Surface Sensible Heat Fluxes using Thermal Imagery

    NASA Astrophysics Data System (ADS)

    Morrison, T. J.; Calaf, M.; Fernando, H. J.; Price, T. A.; Pardyjak, E.

    2017-12-01

Current numerical weather prediction models use similarity theory to characterize momentum, moisture, and heat fluxes. Such formulations are only valid under the ideal assumptions of spatial homogeneity, statistical stationarity, and zero subsidence. However, recent surface temperature measurements from the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program on the Salt Flats of Utah's West Desert show that, even under the most a priori ideal conditions, heterogeneity of the aforementioned variables exists. We present a new method to extract spatially distributed measurements of surface sensible heat flux from thermal imagery. The approach uses a surface energy budget in which the ground heat flux is computed from limited measurements using a force-restore-type methodology, the latent heat fluxes are neglected, and the energy storage is computed using a lumped-capacitance model. Preliminary validation of the method is presented using experimental data acquired from a nearby sonic anemometer during the MATERHORN campaign. Additional evaluation is required to confirm the method's validity. Further decomposition analysis of on-site instrumentation (thermal camera, cold-hotwire probes, and sonic anemometers) using proper orthogonal decomposition (POD) and wavelet analysis reveals time-scale similarity between the flow and surface fluctuations.
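A minimal sketch of the surface-energy-budget bookkeeping described above, with entirely synthetic inputs (the force-restore ground heat flux is replaced by a crude fraction of net radiation, and the areal heat capacity is an assumed constant, not a MATERHORN value):

```python
import numpy as np

# Illustrative inputs; none of these values come from the MATERHORN data.
dt = 60.0                                            # s between thermal images
t = np.arange(0.0, 3600.0 + dt, dt)
Ts = 300.0 + 2.0 * np.sin(2 * np.pi * t / 3600.0)    # surface temperature (K)
Rn = 400.0 + 50.0 * np.sin(2 * np.pi * t / 3600.0)   # net radiation (W m^-2)

C = 2.0e4       # assumed lumped areal heat capacity (J m^-2 K^-1)
G = 0.3 * Rn    # crude stand-in for a force-restore ground heat flux

dTs_dt = np.gradient(Ts, dt)       # finite-difference surface warming rate
storage = C * dTs_dt               # dS/dt from the lumped-capacitance model
H = Rn - G - storage               # sensible heat flux; latent flux neglected
print(H[:3])
```

By construction the budget closes exactly (Rn = H + G + dS/dt); in the real method each term would come from imagery and ancillary measurements rather than analytic curves.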

  8. Computer Majors' Education as Moral Enterprise: A Durkheimian Analysis.

    ERIC Educational Resources Information Center

    Rigoni, David P.; Lamagdeleine, Donald R.

    1998-01-01

    Building on Durkheim's (Emile) emphasis on the moral dimensions of social reality and using it to explore contemporary computer education, contends that many of his claims are justified. Argues that the college computer department has created a set of images, maxims, and operating assumptions that frames its curriculum, courses, and student…

  9. The Relationship between Computational Fluency and Student Success in General Studies Mathematics

    ERIC Educational Resources Information Center

    Hegeman, Jennifer; Waters, Gavin

    2012-01-01

    Many developmental mathematics programs emphasize computational fluency with the assumption that this is a necessary contributor to student success in general studies mathematics. In an effort to determine which skills are most essential, scores on a computational fluency test were correlated with student success in general studies mathematics at…

  10. Learning Styles and Computers.

    ERIC Educational Resources Information Center

    Geisert, Gene; Dunn, Rita

    Although the use of computers in the classroom has been heralded as a major breakthrough in education, many educators have yet to use computers to their fullest advantage. This is perhaps due to the traditional assumption that students differed only in their speed of learning. However, new research indicates that students differ in their style of…

  11. The impact of individual-level heterogeneity on estimated infectious disease burden: a simulation study.

    PubMed

    McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco

    2016-12-08

    Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
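A stripped-down illustration of the bias mechanism: when a multiplicative frailty acts on two sequential stage transitions, the expected number of severe outcomes scales with E[f²] > 1, so ignoring heterogeneity mis-estimates per-capita DALYs. All parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
p = 0.1                   # assumed mean per-stage progression probability
daly_per_case = 30.0      # assumed DALYs per fatal outcome

def total_dalys(risk):
    """Two sequential stage transitions; only completing both costs DALYs."""
    stage1 = rng.random(n) < risk
    stage2 = stage1 & (rng.random(n) < risk)
    return daly_per_case * stage2.sum() / n   # per-capita DALYs

# Homogeneous population vs gamma-distributed frailty with the same mean risk.
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)     # mean 1, variance 0.5
daly_homog = total_dalys(np.full(n, p))
daly_frail = total_dalys(np.clip(frailty * p, 0.0, 1.0))
print(daly_homog, daly_frail)
```

Note that the direction of the bias depends on how frailty interacts with the natural history: in this toy the heterogeneous population accrues more DALYs, whereas in the paper's age-dependent frailty scenario the heterogeneous model predicted fewer DALYs than the homogeneous assumption.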

  12. Integrated cosmological probes: concordance quantified

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Nicola, Andrina; Amara, Adam; Refregier, Alexandre

    2017-10-01

Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.

  13. A toolbox and record for scientific models

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.

  14. Model-free and model-based reward prediction errors in EEG.

    PubMed

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops

    NASA Astrophysics Data System (ADS)

    Sharma, Vikrant

    2017-01-01

The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper a hypothetical method to verify the simulation hypothesis is discussed using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where dimension signifies the hierarchical level in nested simulations our reality exists in. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely will create convergence, implying that verification of whether local reality is a grand simulation is feasible with adequate compute capability. The action of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and their implications are discussed.

  16. Mathematical Modeling: Are Prior Experiences Important?

    ERIC Educational Resources Information Center

    Czocher, Jennifer A.; Moss, Diana L.

    2017-01-01

    Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…

  17. Assumptions to the Annual Energy Outlook

    EIA Publications

    2017-01-01

    This report presents the major assumptions of the National Energy Modeling System (NEMS) used to generate the projections in the Annual Energy Outlook, including general features of the model structure, assumptions concerning energy markets, and the key input data and parameters that are the most significant in formulating the model results.

  18. Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration

    NASA Technical Reports Server (NTRS)

    Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. V.; Yerazunis, S. W.

    1973-01-01

    Problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars are reported. Problem areas include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis, terrain modeling and path selection; and chemical analysis of specimens. These tasks are summarized: vehicle model design, mathematical model of vehicle dynamics, experimental vehicle dynamics, obstacle negotiation, electrochemical controls, remote control, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, and chromatograph model evaluation and improvement.

  19. Disease Extinction Versus Persistence in Discrete-Time Epidemic Models.

    PubMed

    van den Driessche, P; Yakubu, Abdul-Aziz

    2018-04-12

We focus on discrete-time infectious disease models in populations that are governed by constant, geometric, Beverton-Holt or Ricker demographic equations, and give a method for computing the basic reproduction number, R0. When R0 < 1 and the demographic population dynamics are asymptotically constant or under geometric growth (non-oscillatory), we prove global asymptotic stability of the disease-free equilibrium of the disease models. Under the same demographic assumption, when R0 > 1, we prove uniform persistence of the disease. We apply our theoretical results to specific discrete-time epidemic models that are formulated for SEIR infections, cholera in humans and anthrax in animals. Our simulations show that a unique endemic equilibrium of each of the three specific disease models is asymptotically stable whenever R0 > 1.
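As a toy counterpart to these results, a deterministic discrete-time SIS model (an assumed simple structure, not one of the paper's SEIR, cholera or anthrax formulations) exhibits extinction for R0 < 1 and persistence for R0 > 1:

```python
import numpy as np

def simulate_sis(beta, sigma, n_pop=10_000.0, i0=100.0, steps=500):
    """Deterministic discrete-time SIS: surviving infecteds plus new infections."""
    i = i0
    for _ in range(steps):
        s = n_pop - i
        i = sigma * i + s * (1.0 - np.exp(-beta * i / n_pop))
    return i

def r0(beta, sigma):
    # Linearising about the disease-free equilibrium gives I_{t+1} ~ (sigma + beta) I_t,
    # so the infection grows iff sigma + beta > 1, i.e. beta / (1 - sigma) > 1.
    return beta / (1.0 - sigma)

print(r0(0.1, 0.8), simulate_sis(0.1, 0.8))   # R0 = 0.5: infection dies out
print(r0(0.4, 0.8), simulate_sis(0.4, 0.8))   # R0 = 2.0: endemic level persists
```

Here sigma is the per-step survival fraction of infecteds and beta the transmission parameter; both values are illustrative.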

  20. Protocols for efficient simulations of long-time protein dynamics using coarse-grained CABS model.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

    2014-01-01

Coarse-grained (CG) modeling is a well-acknowledged simulation approach for gaining insight into long-time-scale protein folding events at reasonable computational cost. Depending on the design of a CG model, the simulation protocols vary from highly case-specific, requiring user-defined assumptions about the folding scenario, to more sophisticated blind-prediction methods for which only a protein sequence is required. Here we describe the framework protocol for simulations of the long-term dynamics of globular proteins, with the use of the CABS CG protein model and sequence data. The simulations can start from a random or a selected (e.g., native) structure. The described protocol has been validated using experimental data for protein-folding model systems; the prediction results agreed well with the experimental results.

  1. Model documentation report: Residential sector demand module of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code. This reference document provides a detailed description for energy analysts, other users, and the public. The NEMS Residential Sector Demand Module is currently used for mid-term forecasting purposes and energy policy analysis over the forecast horizon of 1993 through 2020. The model generates forecasts of energy demand for the residential sector by service, fuel, and Census Division. Policy impacts resulting from new technologies, market incentives, and regulatory changes can be estimated using the module. 26 refs., 6 figs., 5 tabs.

  2. Critical frontier of the Potts and percolation models on triangular-type and kagome-type lattices. II. Numerical analysis

    NASA Astrophysics Data System (ADS)

    Ding, Chengxiang; Fu, Zhe; Guo, Wenan; Wu, F. Y.

    2010-06-01

In the preceding paper, one of us (F. Y. Wu) considered the Potts model and bond and site percolation on two general classes of two-dimensional lattices, the triangular-type and kagome-type lattices, and obtained closed-form expressions for the critical frontier with applications to various lattice models. For the triangular-type lattices Wu's result is exact, and for the kagome-type lattices Wu's expression is under a homogeneity assumption. The purpose of the present paper is twofold: First, an essential step in Wu's analysis is the derivation of lattice-dependent constants A, B, C for various lattice models, a process which can be tedious. We present here a derivation of these constants for subnet networks using a computer algorithm. Second, by means of a finite-size scaling analysis based on numerical transfer matrix calculations, we deduce critical properties and critical thresholds of various models and assess the accuracy of the homogeneity assumption. Specifically, we analyze the q-state Potts model and bond percolation on the 3-12 and kagome-type subnet lattices (n×n):(n×n), n ≤ 4, for which the exact solution is not known. Our numerical determination of critical properties such as the conformal anomaly and magnetic correlation length verifies that the universality principle holds. To calibrate the accuracy of the finite-size procedure, we apply the same numerical analysis to models for which the exact critical frontiers are known. The comparison of numerical and exact results shows that our numerical values are correct within the errors of our finite-size analysis, which correspond to 7 or 8 significant digits. This in turn implies that the homogeneity assumption determines critical frontiers with an accuracy of 5 decimal places or higher. Finally, we also obtained the exact percolation thresholds for site percolation on kagome-type subnet lattices (1×1):(n×n) for 1 ≤ n ≤ 6.

  3. The influence of patellofemoral joint contact geometry on the modeling of three dimensional patellofemoral joint forces.

    PubMed

    Powers, Christopher M; Chen, Yu-Jen; Scher, Irving; Lee, Thay Q

    2006-01-01

The purpose of this study was to determine the influence of patellofemoral joint contact geometry on the modeling of three-dimensional patellofemoral joint forces. To achieve this goal, patellofemoral joint reaction forces (PFJRFs) measured from an in vitro cadaveric set-up were compared with PFJRFs estimated from a computer model that did not consider patellofemoral joint contact geometry. Ten cadaver knees were used in this study. Each was mounted on a custom jig fixed to an Instron frame. Quadriceps muscle loads were applied using a pulley system and weights. The force in the patellar ligament was obtained using a buckle transducer. To quantify the magnitude and direction of the PFJRF, a six-axis load cell was incorporated into the femoral fixation system so that a rigid-body assumption could be made. PFJRF data were obtained at 0°, 20°, 40°, and 60° of knee flexion. Following in vitro testing, SIMM modeling software was used to develop computational models based on the three-dimensional coordinates (Microscribe digitizer) of the individual muscle and patellar ligament force vectors obtained from the cadaver knees. The overall magnitude of the PFJRF estimated from the computer-generated models closely matched the direct measurements from the in vitro set-up (Pearson's correlation coefficient, R² = 0.91, p < 0.001). Although the computational model accurately estimated the posteriorly directed forces acting on the joint, some discrepancies were noted in the forces acting in the superior and lateral directions. These differences, however, were relatively small when expressed as a fraction of the overall PFJRF magnitude.

  4. Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik

    2003-01-01

The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, resonance is a concern, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so there are contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not difficult to yield the material. Friction is another non-linearity, and the blade is made of a nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem, and computer models using contact elements have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required: ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems, determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions, developed by Hertz, for non-conforming bodies made of isotropic materials are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian solutions be applied, or superimposed, to more complicated problems, such as those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include effects such as friction and plasticity.
The models in this report use no contact elements and are essentially an applied-load problem using Hertzian assumptions to determine the contact patch dimensions.
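For reference, the classical Hertz line-contact relations underlying such an approach can be stated in a few lines; the load, radii, and elastic constants below are illustrative, not the HPFTP/AT blade values:

```python
import math

def hertz_line_contact(load_per_length, r1, r2, e1, nu1, e2, nu2):
    """Contact half-width b and peak pressure p0 for two parallel cylinders."""
    e_star = 1.0 / ((1.0 - nu1**2) / e1 + (1.0 - nu2**2) / e2)  # effective modulus
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)                          # effective radius
    b = math.sqrt(4.0 * load_per_length * r_eff / (math.pi * e_star))
    p0 = 2.0 * load_per_length / (math.pi * b)                   # peak contact pressure
    return b, p0

# Steel-on-steel cylinders, illustrative numbers in SI units.
b, p0 = hertz_line_contact(1.0e5, 0.02, 0.05, 210e9, 0.3, 210e9, 0.3)
print(b, p0)  # half-width ~0.13 mm, peak pressure ~0.5 GPa
```

The semi-elliptical pressure distribution integrates back to the applied load per unit length (p0·π·b/2 = P'), which is a handy internal consistency check.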

  5. Area, length and thickness conservation: Dogma or reality?

    NASA Astrophysics Data System (ADS)

    Moretti, Isabelle; Callot, Jean Paul

    2012-08-01

The basic assumption of quantitative structural geology is the preservation of material during deformation. However, the hypothesis of volume conservation alone does not help to predict past or future geometries, and so this assumption is usually translated into conservation of bed length in 2D (or of area in 3D) and of thickness. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess-area approach is shown to be valid on average. However, neither the lengths nor the thicknesses are preserved. We propose that in real cases the length-preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess-area method can be used to compute bed-length changes.
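Chamberlin's excess-area relation mentioned above is simple enough to state directly: under area conservation, the section area uplifted above regional level equals shortening times depth to the detachment. The fold geometry here is hypothetical:

```python
def chamberlin_shortening(excess_area, depth_to_detachment):
    """Excess-area balance: shortening = uplifted area / detachment depth."""
    return excess_area / depth_to_detachment

# Hypothetical fold: 6 km^2 of section uplifted above regional level,
# with the decollement imaged at 4 km depth.
print(chamberlin_shortening(6.0, 4.0))  # -> 1.5 km of shortening
```

As the abstract notes, this area-based estimate can hold on average even when bed length and thickness are not individually conserved.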

  6. Predicting solar radiation based on available weather indicators

    NASA Astrophysics Data System (ADS)

    Sauer, Frank Joseph

Solar radiation prediction models are complex and require software that is not available to the household investor, even though the processing power of a normal desktop or laptop computer is sufficient to calculate similar models. This barrier to entry for the average consumer can be removed by a model simple enough to be calculated by hand if necessary. Solar radiation has historically been difficult to predict, and accurate models carry significant assumptions and restrictions on their use. Previous methods have been limited to linear relationships, location restrictions, or input data limited to one atmospheric condition. This research takes a novel approach by combining two techniques within the computational limits of a household computer: clustering and hidden Markov models (HMMs). Clustering limits the large observation space that otherwise restricts the use of HMMs. Instead of using continuous data, which would require significantly more computation, the cluster can be used as a qualitative descriptor of each observation. HMMs incorporate a level of uncertainty and account for the indirect relationship between meteorological indicators and solar radiation. This reduces the complexity of the model enough for it to be simply understood and accessible to the average household investor. The solar radiation is treated as an unobservable state that each household is unable to measure directly. The high temperature and the sky coverage are already available through the local or preferred source of weather information. Using the next day's predicted high temperature and sky coverage, the model groups the data and then predicts the most likely range of radiation. The model uses simple techniques and calculations to give a broad estimate of the solar radiation where no other universal model exists for the average household.
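A sketch of the cluster-plus-HMM idea: daily weather observations are reduced to discrete cluster labels, and forward filtering over a small HMM yields the most likely radiation band for the next day. The transition and emission matrices below are assumed for illustration, not fitted to any data:

```python
import numpy as np

# Hidden states: daily solar radiation bands; observations: weather clusters.
states = ["low", "medium", "high"]
A = np.array([[0.6, 0.3, 0.1],      # transitions between radiation bands
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
B = np.array([[0.7, 0.2, 0.1],      # P(weather cluster | radiation band)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([1 / 3, 1 / 3, 1 / 3])  # uniform initial belief

def filter_and_predict(obs):
    """Forward filtering; returns the most likely radiation band for the next day."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    next_probs = alpha @ A          # one-step-ahead state distribution
    return states[int(np.argmax(next_probs))]

# Clusters 0/1/2 = overcast-cool / mixed / clear-hot days (hypothetical labels).
print(filter_and_predict([2, 2, 1, 2]))
```

The per-day cost is just two small matrix-vector products, well within the "household computer" budget the abstract emphasizes.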

  7. BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction

    NASA Astrophysics Data System (ADS)

    Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert

    2017-04-01

    We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.

  8. Predicting field-scale dispersion under realistic conditions with the polar Markovian velocity process model

    NASA Astrophysics Data System (ADS)

    Dünser, Simon; Meyer, Daniel W.

    2016-06-01

    In most groundwater aquifers, dispersion of tracers is dominated by flow-field inhomogeneities resulting from the underlying heterogeneous conductivity or transmissivity field, an effect referred to as macrodispersion. Since in practice the complete conductivity field is virtually never available beyond a few point measurements, a probabilistic treatment is needed. To quantify the uncertainty in tracer concentrations arising from a given geostatistical model for the conductivity, Monte Carlo (MC) simulation is typically used. To avoid the excessive computational cost of MC, the polar Markovian velocity process (PMVP) model was recently introduced, delivering predictions at computing times about three orders of magnitude smaller. In artificial test cases, the PMVP model has given good results in comparison with MC. In this study, we further validate the model in a more challenging and realistic setup, derived from the well-known benchmark macrodispersion experiment (MADE), which is highly heterogeneous and non-stationary with a large number of unevenly scattered conductivity measurements. Validations against reference MC simulations show good overall agreement. Moreover, simulations of a simplified setup with a single measurement were conducted to reassess the model's most fundamental assumptions and to provide guidance for model improvements.

  9. Analytical modelling of temperature effects on an AMPA-type synapse.

    PubMed

    Kufel, Dominik S; Wojcik, Grzegorz M

    2018-05-11

    It was previously reported that temperature may significantly influence neural dynamics at different levels of brain function. Thus, in computational neuroscience, it would be useful to make models scalable over a wide range of brain temperatures. However, a lack of experimental data and the absence of temperature-dependent analytical models of synaptic conductance do not allow temperature effects to be included at the multi-neuron modeling level. In this paper, we propose a first step towards addressing this problem: a new analytical model of AMPA-type synaptic conductance that is able to incorporate temperature effects in low-frequency stimulations. It was constructed from a Markov-model description of AMPA receptor kinetics using a set of coupled ODEs. The closed-form solution for the set of differential equations was found using an uncoupling assumption (introduced in the paper) with a few simplifications motivated both by experimental data and by Monte Carlo simulation of synaptic transmission. The model may be used for computationally efficient and biologically accurate implementation of temperature effects on AMPA receptor conductance in large-scale neural network simulations. As a result, it may open a wide range of new possibilities for researching the influence of temperature on certain aspects of brain function.
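    To illustrate how a Q10-style temperature scaling enters receptor kinetics, here is a deliberately minimal two-state (closed/open) sketch. The paper's model has more states and a closed-form solution; the rate constants and Q10 value below are placeholders, not measured AMPA parameters.

```python
# Illustrative two-state (closed <-> open) kinetic sketch of a ligand-gated
# channel with temperature-scaled rates. Rates and Q10 are placeholders.

Q10 = 2.5          # assumed temperature sensitivity of both rate constants
T_REF = 23.0       # reference temperature, degrees Celsius

def rate(k_ref, temp_c):
    """Scale a reference rate constant to temperature temp_c via Q10."""
    return k_ref * Q10 ** ((temp_c - T_REF) / 10.0)

def open_fraction(temp_c, t_end=2e-4, dt=1e-6):
    """Euler-integrate d(open)/dt = alpha*(1 - open) - beta*open to t_end."""
    alpha = rate(2000.0, temp_c)   # opening rate, 1/s (placeholder)
    beta = rate(500.0, temp_c)     # closing rate, 1/s (placeholder)
    o = 0.0
    for _ in range(int(t_end / dt)):
        o += dt * (alpha * (1.0 - o) - beta * o)
    return o

# Warming speeds both rates; the steady state alpha/(alpha+beta) is
# unchanged here, but the transient approach to it is markedly faster.
print(open_fraction(23.0), open_fraction(37.0))
```

    Scaling every rate by the same Q10 is the simplest possible choice; the abstract's point is precisely that real receptors need a more careful analytical treatment than this.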

  10. Role of mechanical factors in cortical folding development

    NASA Astrophysics Data System (ADS)

    Razavi, Mir Jalil; Zhang, Tuo; Li, Xiao; Liu, Tianming; Wang, Xianqiao

    2015-09-01

    Deciphering the mysteries of the structure-function relationship in cortical folding has emerged as the cynosure of recent research on the brain. Understanding the mechanism behind convolution patterns can provide useful insight into normal and pathological brain function. However, despite decades of speculation and endeavor, the underlying mechanism of the brain folding process remains poorly understood. This paper focuses on the three-dimensional morphological patterns of a developing brain under different tissue specification assumptions via theoretical analyses, computational modeling, and experimental verification. The living human brain is modeled as a soft structure with an outer cortex and an inner core to investigate brain development. Analytical interpretation of differential growth in the brain model provides preliminary insight into the critical growth ratio for instability and crease formation of the developing brain, followed by computational modeling that offers clues about the brain's postbuckling morphology. In particular, tissue geometry, growth ratio, and the material properties of the cortex are explored as the parameters that most strongly control the morphogenesis of a growing brain model. As the results indicate, compressive residual stresses caused by sufficient growth trigger instability, and the brain forms highly convoluted patterns whose degree of gyrification is set by the cortex thickness. Morphological patterns of the developing brain predicted by the computational modeling are consistent with our neuroimaging observations, thereby clarifying, in part, the origin of some classical malformations of the developing brain.
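    As a rough illustration of the critical-growth-ratio idea, the classical stiff-film-on-soft-substrate wrinkling estimate can be evaluated directly. This textbook formula is a stand-in for the paper's own analysis, and the modulus ratios below are arbitrary, not measured cortical values.

```python
# Back-of-envelope sketch of a buckling threshold: the classical estimate
# for a stiff film on a soft substrate says the film wrinkles once its
# compressive strain exceeds eps_c = 0.25 * (3 * mu_s / mu_f)^(2/3).
# This is a generic continuum-mechanics result, not the paper's model.

def critical_strain(mu_film, mu_substrate):
    """Critical compressive strain for film-on-substrate wrinkling."""
    return 0.25 * (3.0 * mu_substrate / mu_film) ** (2.0 / 3.0)

# A cortex (film) that is stiffer relative to the core buckles at a smaller
# strain, i.e. it folds earlier in development under the same growth rate.
for ratio in [1.0, 3.0, 10.0]:          # assumed mu_film / mu_substrate
    print(ratio, critical_strain(ratio, 1.0))
```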

  11. Efficient Computation of Info-Gap Robustness for Finite Element Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
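    The computational saving described here rests on the adjoint trick for linear models: the sensitivity of a scalar output of Ax = b to a model parameter costs one extra linear solve rather than one solve per parameter sample. Below is a toy 2x2 version with an invented parameter dependence, checked against a finite difference; a real finite element model would simply be a much larger A.

```python
# Adjoint sensitivity for q = c.x where A(theta) x = b: solve A^T lam = c
# once, then dq/dtheta = -lam^T (dA/dtheta) x. The 2x2 system is a toy
# stand-in for a finite element model; the theta-dependence is invented.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def A_of(theta):
    # theta perturbs the (0, 0) stiffness entry (illustrative choice).
    return [[4.0 + theta, 1.0], [1.0, 3.0]]

b = [1.0, 2.0]
c = [1.0, 0.0]          # output of interest is the first solution component
theta = 0.0

x = solve2(A_of(theta), b)
At = [[A_of(theta)[j][i] for j in range(2)] for i in range(2)]
lam = solve2(At, c)                        # adjoint solve: A^T lam = c
dA = [[1.0, 0.0], [0.0, 0.0]]              # dA/dtheta
dq = -sum(lam[i] * dA[i][j] * x[j] for i in range(2) for j in range(2))

# Finite-difference check of the adjoint sensitivity.
h = 1e-6
xp = solve2(A_of(h), b)
fd = (sum(ci * xi for ci, xi in zip(c, xp))
      - sum(ci * xi for ci, xi in zip(c, x))) / h
print(dq, fd)   # the two estimates should agree closely
```

    For a robustness function evaluated over many assumption perturbations, reusing one adjoint solve in place of repeated forward solves is where the "small fraction of the computational cost" comes from.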

  12. Application of Bayesian model averaging to measurements of the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Parkinson, David; Liddle, Andrew R.

    2010-11-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
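    The averaging step itself is simple once the evidences are in hand: each model's posterior is mixed with a weight proportional to its evidence. The grid, posteriors, and evidence values below are invented for illustration and bear no relation to the actual WMAP-era constraints.

```python
# Toy sketch of Bayesian model averaging: two models give different
# posteriors for the same parameter (here on a coarse grid), and their
# evidences set the mixture weights. All numbers are illustrative.

grid = [0.94, 0.96, 0.98, 1.00]   # candidate spectral-index values

post_m1 = [0.1, 0.5, 0.3, 0.1]    # posterior under model 1 (e.g. no tensors)
post_m2 = [0.3, 0.4, 0.2, 0.1]    # posterior under model 2 (e.g. with tensors)
evid_m1, evid_m2 = 1.0, 0.25      # model evidences (unnormalised, assumed)

w1 = evid_m1 / (evid_m1 + evid_m2)
w2 = evid_m2 / (evid_m1 + evid_m2)
averaged = [w1 * p1 + w2 * p2 for p1, p2 in zip(post_m1, post_m2)]

mean = sum(g * p for g, p in zip(grid, averaged))
print(averaged, mean)
```

    The model-averaged posterior is broader than the best single model's whenever the runner-up model disagrees, which is exactly the extra uncertainty the abstract argues should not be neglected.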

  13. Spore: Spawning Evolutionary Misconceptions?

    NASA Astrophysics Data System (ADS)

    Bean, Thomas E.; Sinatra, Gale M.; Schrader, P. G.

    2010-10-01

    The use of computer simulations as educational tools may afford the means to develop understanding of evolution as a natural, emergent, and decentralized process. However, special consideration of developmental constraints on learning may be necessary when using these technologies. Specifically, the essentialist (biological forms possess an immutable essence), teleological (assignment of purpose to living things and/or parts of living things that may not be purposeful), and intentionality (assumption that events are caused by an intelligent agent) biases may be reinforced through the use of computer simulations, rather than addressed with instruction. We examine the video game Spore for its depiction of evolutionary content and its potential to reinforce these cognitive biases. In particular, we discuss three pedagogical strategies to mitigate weaknesses of Spore and other computer simulations: directly targeting misconceptions through refutational approaches, targeting specific principles of scientific inquiry, and directly addressing issues related to models as cognitive tools.

  14. Density functional computational studies on the glucose and glycine Maillard reaction: Formation of the Amadori rearrangement products

    NASA Astrophysics Data System (ADS)

    Jalbout, Abraham F.; Roy, Amlan K.; Shipar, Abul Haider; Ahmed, M. Samsuddin

    Theoretical energy changes of various intermediates leading to the formation of the Amadori rearrangement products (ARPs) under different mechanistic assumptions have been calculated, using open-chain glucose (O-Glu), closed-chain glucose (A-Glu and B-Glu), and glycine (Gly) as a model for the Maillard reaction. Density functional theory (DFT) computations have been applied to the proposed mechanisms under different pH conditions. Thus, the possibility of forming different compounds and the electronic energy changes for different steps in the proposed mechanisms have been evaluated. B-Glu has been found to be more efficient than A-Glu, and A-Glu more efficient than O-Glu, in the reaction. The reaction under basic conditions is the most favorable for the formation of ARPs. Other reaction pathways have been computed and discussed in this work.

  15. Modeling the fusion of cylindrical bioink particles in post bioprinting structure formation

    NASA Astrophysics Data System (ADS)

    McCune, Matt; Shafiee, Ashkan; Forgacs, Gabor; Kosztin, Ioan

    2015-03-01

    Cellular Particle Dynamics (CPD) is an effective computational method to describe the shape evolution and biomechanical relaxation processes in multicellular systems. Thus, CPD is a useful tool to predict the outcome of post-printing structure formation in bioprinting. The predictive power of CPD has been demonstrated for multicellular systems composed of spherical bioink units. Experiments and computer simulations were related through an independently developed theoretical formalism based on continuum mechanics. Here we generalize the CPD formalism to (i) include cylindrical bioink particles often used in specific bioprinting applications, (ii) describe the more realistic experimental situation in which both the length and the volume of the cylindrical bioink units decrease during post-printing structure formation, and (iii) directly connect CPD simulations to the corresponding experiments without the need of the intermediate continuum theory inherently based on simplifying assumptions. Work supported by NSF [PHY-0957914]. Computer time provided by the University of Missouri Bioinformatics Consortium.

  16. A priori and a posteriori analyses of the flamelet/progress variable approach for supersonic combustion

    NASA Astrophysics Data System (ADS)

    Saghafian, Amirreza; Pitsch, Heinz

    2012-11-01

    A compressible flamelet/progress variable (CFPV) approach has been devised for high-speed flows. Temperature is computed from the transported total energy and the tabulated species mass fractions, and the source term of the progress variable is rescaled with pressure and temperature. Combustion is thus modeled by three additional scalar equations and a chemistry table computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of a reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on presumed subgrid PDFs of the mixture fraction and progress variable (a beta function and a delta function, respectively), which are assessed using the DNS databases. The flamelet equation budget is also computed to verify the validity of the CFPV method for high-speed flows.
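    The presumed beta-PDF closure for the mixture fraction can be sketched as follows. The mean and variance values are arbitrary, and a real solver integrates tabulated chemistry quantities against this PDF; here the identity function is integrated purely as a consistency check.

```python
import math

# Minimal sketch of a presumed-PDF closure: the subgrid distribution of
# mixture fraction Z is assumed to be a beta function parameterised by the
# resolved mean and variance. Mean/variance values are illustrative.

def beta_params(z_mean, z_var):
    """Shape parameters of a beta PDF with the given mean and variance."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0   # must be positive
    return z_mean * gamma, (1.0 - z_mean) * gamma

def beta_pdf(x, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1.0) * (1.0 - x) ** (b - 1.0) / B

a, b = beta_params(0.3, 0.02)

# Filtered value of a quantity f(Z): integrate f against the PDF. Using
# f(Z) = Z with the midpoint rule should recover the prescribed mean.
n = 1000
mean_check = sum(beta_pdf((i + 0.5) / n, a, b) * ((i + 0.5) / n)
                 for i in range(n)) / n
print(a, b, mean_check)
```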

  17. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music transcription is considered. The recorded music is modeled as a superposition of known sounds from a library, weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning, and many methods for estimating the weights are available. These methods differ in the assumptions imposed on the weights; in the Bayesian paradigm, these assumptions are typically expressed as a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density. The validity of the model is tested in simulation using synthetic data.
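    A Gaussian prior on the weights turns the superposition model into ridge regression, which is one concrete instance of the prior-on-weights idea discussed above. The library "spectra", observation, and penalty below are synthetic, and the paper's priors are more elaborate than this.

```python
# Sketch of the observation model: a recorded frame y is a weighted
# superposition of known library sounds, y ~ S w, and a Gaussian prior on
# the weights makes the MAP estimate a ridge regression. Data are synthetic.

# Two library "sounds" described over four frequency bins (illustrative).
S = [[1.0, 0.0],
     [1.0, 1.0],
     [0.0, 1.0],
     [0.0, 1.0]]
true_w = [2.0, 1.0]
y = [sum(S[i][j] * true_w[j] for j in range(2)) for i in range(4)]

lam = 0.1   # prior precision (ridge penalty); assumed value
# Normal equations (S^T S + lam I) w = S^T y, solved here for the 2x2 case.
StS = [[sum(S[i][r] * S[i][c] for i in range(4)) + (lam if r == c else 0.0)
        for c in range(2)] for r in range(2)]
Sty = [sum(S[i][r] * y[i] for i in range(4)) for r in range(2)]
det = StS[0][0] * StS[1][1] - StS[0][1] * StS[1][0]
w = [(Sty[0] * StS[1][1] - Sty[1] * StS[0][1]) / det,
     (StS[0][0] * Sty[1] - StS[1][0] * Sty[0]) / det]
print(w)   # estimates shrink slightly toward zero relative to [2.0, 1.0]
```

    A stronger prior (larger lam) shrinks the weights harder; swapping the Gaussian prior for a sparsity-inducing one changes the estimator entirely, which is why the choice of prior is the paper's central modelling question.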

  18. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  19. Development of state and transition model assumptions used in National Forest Plan revision

    Treesearch

    Eric B. Henderson

    2008-01-01

    State and transition models are being utilized in forest management analysis processes to evaluate assumptions about disturbances and succession. These models assume valid information about seral class successional pathways and timing. The Forest Vegetation Simulator (FVS) was used to evaluate seral class succession assumptions for the Hiawatha National Forest in...

  20. Model-Based Clustering of Regression Time Series Data via APECM -- An AECM Algorithm Sung to an Even Faster Beat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wei-Chen; Maitra, Ranjan

    2011-01-01

    We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
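    For orientation, here is plain EM for a two-component one-dimensional Gaussian mixture, the baseline whose E- and M-steps AECM and APECM reorganize for speed. This is not the authors' APECM variant, and the data are synthetic and non-autoregressive.

```python
import math
import random

# Baseline EM for a two-component 1D Gaussian mixture, illustrating the
# E-step (responsibilities) and M-step (parameter updates) that faster
# variants such as AECM/APECM restructure. Data are synthetic.

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

pi, mu, sigma = [0.5, 0.5], [1.0, 4.0], [1.0, 1.0]   # initial guesses
for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        p = [pi[k] * norm_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = p[0] + p[1]
        resp.append([p[0] / s, p[1] / s])
    # M-step: re-estimate weights, means, and standard deviations.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, data)) / nk)

print(sorted(mu))   # estimated means should land near the true 0 and 5
```

    Every EM iteration touches all of the data in both steps; the computational appeal of AECM-style algorithms is precisely that they break the M-step into cheaper conditional pieces.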
