Sample records for simple additive model

  1. Simple model of inhibition of chain-branching combustion processes

    NASA Astrophysics Data System (ADS)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction combined with a one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and changes the reaction mode from chain-branching to a thermal mode of flame propagation. As the inhibitor level increases, a transition from the chain-branching mode of reaction to a straight-chain (non-branching) reaction is observed. The inhibition part of the model includes a block of three reactions describing the influence of the inhibitor. Heat losses are incorporated into the model via Newton cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to falling temperature and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to a thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with inhibitor addition observed using detailed kinetic models.
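    The suppression of radical overshoot by an inhibitor can be illustrated with a deliberately minimal sketch. This is not the authors' model: the rate constants, the single fuel/radical pair, and the functional forms below are all invented for illustration.

```python
# Toy chain-branching sketch (illustrative only): radical pool Y grows by
# branching on fuel F (rate k_b), and is removed by termination (k_t) and by
# an inhibitor at fixed concentration X (k_i). All parameter values invented.
def radical_peak(x_inhibitor, k_b=2.0, k_t=0.5, k_i=1.0, fuel0=1.0,
                 dt=1e-3, steps=60000):
    """Euler-integrate dY/dt = k_b*F*Y - k_t*Y - k_i*X*Y, dF/dt = -k_b*F*Y.
    Returns the peak radical concentration (the 'overshoot')."""
    F, Y = fuel0, 1e-6
    peak = Y
    for _ in range(steps):
        dY = (k_b * F - k_t - k_i * x_inhibitor) * Y
        dF = -k_b * F * Y
        Y += dt * dY
        F += dt * dF
        peak = max(peak, Y)
    return peak

# More inhibitor -> smaller radical overshoot, as the abstract describes.
assert radical_peak(0.0) > radical_peak(0.5) > radical_peak(1.0)
```

    In this toy system the peak occurs where branching balances removal (k_b·F = k_t + k_i·X), so raising X forces the peak earlier and lower, mimicking the transition away from the chain-branching mode.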

  2. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  3. Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2006-04-01

    We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that provide both (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.

  4. A simple dynamic engine model for use in a real-time aircraft simulation with thrust vectoring

    NASA Technical Reports Server (NTRS)

    Johnson, Steven A.

    1990-01-01

    A simple dynamic engine model was developed at the NASA Ames Research Center, Dryden Flight Research Facility, for use in thrust vectoring control law development and real-time aircraft simulation. The simple dynamic engine model of the F404-GE-400 engine (General Electric, Lynn, Massachusetts) operates within the aircraft simulator. It was developed using tabular data generated from a complete nonlinear dynamic engine model supplied by the manufacturer. Engine dynamics were simulated using a throttle rate limiter and low-pass filter. Included is a description of a method to account for axial thrust loss resulting from thrust vectoring. In addition, the development of the simple dynamic engine model and its incorporation into the F-18 high alpha research vehicle (HARV) thrust vectoring simulation are described. The simple dynamic engine model was evaluated at Mach 0.2, 35,000 ft altitude and at Mach 0.7, 35,000 ft altitude. The simple dynamic engine model matches the steady-state response of the complete nonlinear dynamic engine model to within 3 percent, and its transient response to within 25 percent.
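    The two dynamic elements named in the abstract, a throttle rate limiter followed by a low-pass filter, can be sketched directly. The structure follows the abstract; the numeric values (rate limit, time constant, 20 ms time step) are illustrative, not the F404 values.

```python
# Hedged sketch of the engine dynamics described: a throttle rate limiter
# feeding a discrete first-order low-pass filter. Parameters are invented.
def rate_limit(cmd, prev, max_rate, dt):
    step = max(-max_rate * dt, min(max_rate * dt, cmd - prev))
    return prev + step

def low_pass(x, prev, tau, dt):
    a = dt / (tau + dt)  # first-order filter coefficient
    return prev + a * (x - prev)

def engine_response(throttle_cmds, dt=0.02, max_rate=40.0, tau=0.5):
    """Propagate throttle commands (percent) through the rate limiter and
    low-pass filter; return the filtered throttle history."""
    limited, filtered = 0.0, 0.0
    out = []
    for cmd in throttle_cmds:
        limited = rate_limit(cmd, limited, max_rate, dt)
        filtered = low_pass(limited, filtered, tau, dt)
        out.append(filtered)
    return out

# A step command rises gradually rather than instantaneously.
resp = engine_response([100.0] * 200)  # 4 s of a full-throttle step
assert resp[0] < 5.0 and resp[-1] > 90.0
```

    The rate limiter bounds how fast the commanded throttle can change, and the filter smooths the limited command, which is the usual way a slow spool-up is approximated without a full engine model.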

  5. 40 CFR 80.42 - Simple emissions model.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (as measured under § 80.46) ETOH = Oxygen content of the fuel in question in the form of ethanol, in...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.42 Simple emissions model. (a) VOC... VOC emissions from the fuel in question, in grams per mile, for VOC control region 1 during the summer...

  6. 40 CFR 80.42 - Simple emissions model.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (as measured under § 80.46) ETOH = Oxygen content of the fuel in question in the form of ethanol, in...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.42 Simple emissions model. (a) VOC... VOC emissions from the fuel in question, in grams per mile, for VOC control region 1 during the summer...

  7. 40 CFR 80.42 - Simple emissions model.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (as measured under § 80.46) ETOH = Oxygen content of the fuel in question in the form of ethanol, in...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.42 Simple emissions model. (a) VOC... VOC emissions from the fuel in question, in grams per mile, for VOC control region 1 during the summer...

  8. Modelling Nitrogen Oxides in Los Angeles Using a Hybrid Dispersion/Land Use Regression Model

    NASA Astrophysics Data System (ADS)

    Wilton, Darren C.

    The goal of this dissertation is to develop models capable of predicting long term annual average NOx concentrations in urban areas. Predictions from simple meteorological dispersion models and seasonal proxies for NO2 oxidation were included as covariates in a land use regression (LUR) model for NOx in Los Angeles, CA. The NOx measurements were obtained from a comprehensive measurement campaign that is part of the Multi-Ethnic Study of Atherosclerosis Air Pollution Study (MESA Air). Simple land use regression models were initially developed using a suite of GIS-derived land use variables computed over various buffer sizes (R²=0.15). Caline3, a simple steady-state Gaussian line source model, was then incorporated into the land-use regression framework. The addition of this spatio-temporally varying Caline3 covariate improved the simple LUR model predictions. The extent of improvement was much more pronounced for models based solely on the summer measurements (simple LUR: R²=0.45; Caline3/LUR: R²=0.70), than it was for models based on all seasons (R²=0.20). We then used a Lagrangian dispersion model to convert static land use covariates for population density and commercial/industrial area into spatially and temporally varying covariates. The inclusion of these covariates resulted in significant improvement in model prediction (R²=0.57). In addition to the dispersion model covariates described above, a two-week average value of daily peak-hour ozone was included as a surrogate of the oxidation of NO2 during the different sampling periods. This additional covariate further improved overall model performance for all models. The best model by 10-fold cross validation (R²=0.73) contained the Caline3 prediction, a static covariate for length of A3 roads within 50 meters, the Calpuff-adjusted covariates derived from both population density and industrial/commercial land area, and the ozone covariate.
This model was tested against annual average NOx concentrations from an independent data set from the EPA's Air Quality System (AQS) and MESA Air fixed site monitors, and performed very well (R²=0.82).

  9. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory, that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA) to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
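    The PBR algorithm the abstract critiques is commonly written as PBR = N_min × ½R_max × F_r (Wade, 1998). The seabird-like values below are illustrative, not taken from the study.

```python
# Potential Biological Removal as commonly applied: N_min is a minimum
# population estimate, R_max the maximum population growth rate, and F_r a
# recovery factor in (0, 1]. Input values here are illustrative only.
def pbr(n_min, r_max, f_r):
    return n_min * 0.5 * r_max * f_r

# A generic seabird-like example: 10,000 birds, 10% max growth, F_r = 0.5.
allowable = pbr(n_min=10_000, r_max=0.10, f_r=0.5)
assert abs(allowable - 250.0) < 1e-6  # ~250 additional mortalities per year
```

    The simplicity is the appeal: three numbers yield a mortality limit. The abstract's point is that the density dependence and trajectory assumptions hidden behind those three numbers can make the limit unsafe.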

  10. "Compacted" procedures for adults' simple addition: A review and critique of the evidence.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2018-04-01

    We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting), a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: These items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤4. Furthermore, we show that a well-known associative retrieval model of addition facts, the network interference theory (Campbell, 1995), predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy.
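    The sum-counting signature at issue is a near-perfectly linear rise of RT with the operands' sum. A small self-contained correlation check illustrates the kind of test involved; the mean RTs below are fabricated for illustration, not the study's data.

```python
# Illustrative only (fabricated RTs): the counting account predicts RT rising
# linearly with the problem sum, so a near-unity correlation is its signature.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical mean RTs (ms) for problem sums 3..7:
sums = [3, 4, 5, 6, 7]
rts = [640, 660, 685, 700, 725]
assert pearson_r(sums, rts) > 0.99  # near-perfect linear trend
```

    The review's counter-argument is that such a correlation alone cannot discriminate counting from retrieval, since a retrieval model with sum-correlated interference produces the same linear trend.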

  11. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The vegetation compartments and the local scale, which caused undesirably high model complexity, were removed to improve the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
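    A multimedia fate model of this family is, at bottom, a linear mass balance over coupled compartments solved at steady state. The two-compartment sketch below is not SimpleBox, and every rate constant is invented; it only shows the kind of balance such models solve.

```python
# Toy two-compartment (air/water) steady-state fate sketch. E is emission to
# air (kg/h); k_deg_* are degradation rates and k_aw/k_wa intermedia transfer
# rates (1/h). All numbers are illustrative, not SimpleBox parameters.
def steady_state(E_air, k_deg_a, k_deg_w, k_aw, k_wa):
    """Solve  0 = E - (k_deg_a + k_aw)*A + k_wa*W
              0 =     k_aw*A - (k_deg_w + k_wa)*W   for masses A, W."""
    a11 = k_deg_a + k_aw
    a22 = k_deg_w + k_wa
    # From the second equation W = k_aw*A / a22; substitute into the first:
    A = E_air / (a11 - k_aw * k_wa / a22)
    W = k_aw * A / a22
    return A, W

A, W = steady_state(E_air=1.0, k_deg_a=0.1, k_aw=0.05, k_deg_w=0.02, k_wa=0.01)
# At steady state the balance closes: emission equals total degradation.
assert abs(1.0 - (0.1 * A + 0.02 * W)) < 1e-9
```

    Adding compartments (lakes, layered oceans) simply enlarges this linear system, which is why the layered-ocean change can shift predicted steady-state concentrations substantially.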

  12. Energy economy in the actomyosin interaction: lessons from simple models.

    PubMed

    Lehman, Steven L

    2010-01-01

    The energy economy of the actomyosin interaction in skeletal muscle is both scientifically fascinating and practically important. This chapter demonstrates how simple cross-bridge models have guided research regarding the energy economy of skeletal muscle. Parameter variation on a very simple two-state strain-dependent model shows that early events in the actomyosin interaction strongly influence energy efficiency, and late events determine maximum shortening velocity. Addition of a weakly-bound state preceding force production allows weak coupling of cross-bridge mechanics and ATP turnover, so that a simple three-state model can simulate the velocity-dependence of ATP turnover. Consideration of the limitations of this model leads to a review of recent evidence regarding the relationship between ligand binding states, conformational states, and macromolecular structures of myosin cross-bridges. Investigation of the fine structure of the actomyosin interaction during the working stroke continues to inform fundamental research regarding the energy economy of striated muscle.

  13. Including resonances in the multiperipheral model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsky, S.S.; Snider, D.R.; Thomas, G.H.

    1973-10-01

    A simple generalization of the multiperipheral model (MPM) and the Mueller-Regge model (MRM) is given which has improved phenomenological capabilities by explicitly incorporating resonance phenomena, and still is simple enough to be an important theoretical laboratory. The model is discussed both with and without charge. In addition, the one channel, two channel, three channel and N channel cases are explicitly treated. Particular attention is paid to the constraints of charge conservation and positivity in the MRM. The recently proven equivalence between the MRM and MPM is extended to this model, and is used extensively. (auth)

  14. Initiation and Modification of Reaction by Energy Addition: Kinetic and Transport Phenomena

    DTIC Science & Technology

    1990-10-01

    ignition-delay time ranges from about 2 to 100 ps. The results of a computer-modeling calculation of the chemical kinetics suggest that the... APPENDIX I. Evaluating a Simple Model for Laminar-Flame-Propagation Rates. I. Planar Geometry. APPENDIX II. Evaluating a Simple Model for Laminar-Flame-Propagation Rates. II. Spherical

  15. Simple mental addition in children with and without mild mental retardation.

    PubMed

    Janssen, R; De Boeck, P; Viaene, M; Vallaeys, L

    1999-11-01

    The speeded performance on simple mental addition problems of 6- and 7-year-old children with and without mild mental retardation is modeled from a person perspective and an item perspective. On the person side, it was found that a single cognitive dimension spanned the performance differences between the two ability groups. However, a discontinuity, or "jump," was observed in the performance of the normal ability group on the easier items. On the item side, the addition problems were almost perfectly ordered in difficulty according to their problem size. Differences in difficulty were explained by factors related to the difficulty of executing nonretrieval strategies. All findings were interpreted within the framework of Siegler's (e.g., R. S. Siegler & C. Shipley, 1995) model of children's strategy choices in arithmetic. Models from item response theory were used to test the hypotheses. Copyright 1999 Academic Press.

  16. A Simple Relativistic Bohr Atom

    ERIC Educational Resources Information Center

    Terzis, Andreas F.

    2008-01-01

    A simple concise relativistic modification of the standard Bohr model for hydrogen-like atoms with circular orbits is presented. As the derivation requires basic knowledge of classical and relativistic mechanics, it can be taught in standard courses in modern physics and introductory quantum mechanics. In addition, it can be shown in a class that…

  17. Inexpensive Laboratory Model with Many Applications.

    ERIC Educational Resources Information Center

    Archbold, Norbert L.; Johnson, Robert E.

    1987-01-01

    Presents a simple, inexpensive and realistic model which allows introductory geology students to obtain subsurface information through a simulated drilling experience. Offers ideas on additional applications to a variety of geologic situations. (ML)

  18. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks

    PubMed Central

    Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.

    2011-01-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or whether kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of adjusted R²), followed by position (28 ± 24% of adjusted R²) and speed (11 ± 19% of adjusted R²). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower adjusted R² values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach.
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
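    The lead/lag constant τ in the regressions above amounts to scanning candidate time shifts between firing and kinematics and keeping the best-fitting one. The sketch below illustrates that idea on synthetic data; the signals and the correlation-scan method are invented for illustration, not the study's analysis.

```python
# Toy τ-style analysis on synthetic data: scan candidate lags and keep the
# one maximizing the firing/velocity correlation. A positive best lag here
# means firing leads the kinematics.
def best_lag(firing, velocity, max_lag=10):
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5 if va and vb else 0.0
    scores = {}
    for lag in range(0, max_lag + 1):
        # firing at time t compared against velocity at time t + lag
        a = firing[: len(firing) - lag]
        b = velocity[lag:]
        scores[lag] = corr(a, b)
    return max(scores, key=scores.get)

# Synthetic example: firing is velocity shifted 5 samples into the future.
velocity = [float(i % 20) for i in range(100)]
firing = velocity[5:] + [0.0] * 5
assert best_lag(firing, velocity) == 5
```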

  19. Algebraic Turbulence-Chemistry Interaction Model

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.

    2012-01-01

    The results of a series of Perfectly Stirred Reactor (PSR) and Partially Stirred Reactor (PaSR) simulations are compared to each other over a wide range of operating conditions. It is found that the PaSR results can be simulated by a PSR solution with just an adjusted chemical reaction rate. A simple expression has been developed that gives the required change in reaction rate for a PSR solution to simulate the PaSR results. This expression is the basis of a simple turbulence-chemistry interaction model. The interaction model that has been developed is intended for use with simple one-step global reaction mechanisms and for steady-state flow simulations. Due to the simplicity of the model there is very little additional computational cost in adding it to existing CFD codes.

  20. Speededness and Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Xiong, Xinhui

    2013-01-01

    Two simple constraints on the item parameters in a response--time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…

  1. Analysis of bacterial migration. 2: Studies with multiple attractant gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, I.; Frymier, P.D.; Hahn, C.M.

    1995-02-01

    Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances which benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus to develop predictive mathematical models describing bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single stimulus responses. Theoretical predictions with the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and α-methylaspartate in the stopped-flow diffusion chamber assay. Results eliminate the high-sensitivity model and favor the simple-additivity model over the maximum-signal model. None of the simple models, however, accurately predicts the observed behavior, suggesting that a more complex model with more steps in the signal-processing mechanism is required to predict responses to multiple stimuli.
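    The three signal-integration hypotheses can be rendered as three combination rules. The functional forms below, especially the saturating "high sensitivity" rule, are toy stand-ins invented for illustration, not the paper's expressions.

```python
# Toy rendering of the three signal-integration hypotheses, combining two
# attractant signals s1, s2 into one tumbling-suppression signal. Forms are
# hypothetical stand-ins, not the paper's tumbling-probability expressions.
def integrate(s1, s2, model):
    if model == "simple_additivity":
        return s1 + s2
    if model == "maximum_signal":
        return max(s1, s2)
    if model == "high_sensitivity":
        # saturates on any nonzero signal, however weak (toy form)
        return 1.0 if (s1 > 0 or s2 > 0) else 0.0
    raise ValueError(model)

# Codirectional gradients: additivity predicts a stronger combined response
# than either stimulus alone; maximum-signal predicts no enhancement.
assert integrate(0.3, 0.4, "simple_additivity") > max(0.3, 0.4)
assert integrate(0.3, 0.4, "maximum_signal") == 0.4
```

    Comparing measured co- and antidirectional responses against these rules is what lets the experiment discriminate among the hypotheses.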

  2. Chemical Defects, Electronic Structure, and Transport in N-type and P-type Organic Semiconductors: First Principles Theory

    DTIC Science & Technology

    2012-11-29

    of localized states extending into the gap. We also introduced a simple model allowing estimates of the upper limit of the intra-grain mobility in...well as to pentacene, and DATT. This research will be described below. In addition to our work on the electronic structure and charge mobility, we have...stacking distance gives rise to a tail of localized states which act as traps for electrons and holes. We introduced a simple effective Hamiltonian model

  3. Collaborative Research: failure of RockMasses from Nucleation and Growth of Microscopic Defects and Disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, William

    Over the 21 years of funding, we have pursued several projects related to earthquakes, damage and nucleation. We developed simple models of earthquake faults which we studied to understand Gutenberg-Richter scaling, foreshocks and aftershocks, the effect of spatial structure of the faults and its interaction with underlying self-organization and phase transitions. In addition, we studied the formation of amorphous solids via the glass transition. We have also studied nucleation with a particular concentration on transitions in systems with a spatial symmetry change. In addition, we investigated the nucleation process in models that mimic rock masses. We obtained the structure of the droplet in both homogeneous and heterogeneous nucleation. We also investigated the effect of defects or asperities on the nucleation of failure in simple models of earthquake faults.

  4. The time-dependent response of 3- and 5-layer sandwich beams

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.

    1992-01-01

    Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.

  5. Operator priming and generalization of practice in adults' simple arithmetic.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2016-04-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training (engineering and computer science students) would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again showed generalization, whereas no nonzero problem type did; however, all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved.

  6. A simple spatiotemporal rabies model for skunk and bat interaction in northeast Texas.

    PubMed

    Borchering, Rebecca K; Liu, Hao; Steinhaus, Mara C; Gardner, Carl L; Kuang, Yang

    2012-12-07

    We formulate a simple partial differential equation model in an effort to qualitatively reproduce the spread dynamics and spatial pattern of rabies in northeast Texas with overlapping reservoir species (skunks and bats). Most existing models ignore reservoir species or treat them with patch models based on ordinary differential equations. In our model, we incorporate interspecies rabies infection in addition to rabid population random movement. We apply this model to the confirmed case data from northeast Texas with most parameter values obtained or computed from the literature. Results of simulations using both our skunk-only model and our skunk and bat model demonstrate that the model with overlapping reservoir species more accurately reproduces the progression of rabies spread in northeast Texas. Copyright © 2012 Elsevier Ltd. All rights reserved.
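    A PDE model of this kind couples local infection to random movement of rabid animals. The one-species, 1-D discretization below is only a sketch of that structure; the equation form, parameters, and scales are invented, not the paper's two-reservoir model.

```python
# Illustrative 1-D reaction-diffusion sketch (invented parameters), with
# susceptibles S and diffusing rabid animals I:
#   dS/dt = -beta*S*I
#   dI/dt =  beta*S*I - mu*I + D * d2I/dx2
def step(S, I, beta, mu, D, dx, dt):
    n = len(S)
    newS, newI = S[:], I[:]
    for i in range(n):
        # periodic boundary via modular indexing
        lap = (I[(i - 1) % n] - 2 * I[i] + I[(i + 1) % n]) / dx**2
        inf = beta * S[i] * I[i]
        newS[i] = S[i] - dt * inf
        newI[i] = I[i] + dt * (inf - mu * I[i] + D * lap)
    return newS, newI

S = [1.0] * 50
I = [0.0] * 50
I[25] = 0.1  # localized introduction of rabid animals
for _ in range(200):
    S, I = step(S, I, beta=0.5, mu=0.1, D=0.5, dx=1.0, dt=0.1)
# The infection spreads outward from the introduction point.
assert I[20] > 0 and I[30] > 0 and S[25] < 1.0
```

    The explicit scheme is stable here because dt·D/dx² = 0.05 ≤ 0.5; adding a second reservoir species means a second (S, I) pair with cross-infection terms.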

  7. A simple-source model of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.

    2010-10-01

    The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to the measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since developed into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured data from the F-22 significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and lower engine conditions.
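    The simplest model stage described, a monopole above a rigid plane, can be sketched with an image source: the reflection is in phase, so interference nulls fall where the direct and reflected paths differ by an odd number of half wavelengths. The geometry values below are illustrative, not the measurement geometry.

```python
# Hedged sketch: monopole at height h_src above a rigid plane, microphone at
# height h_mic and horizontal distance dist. Geometry values are invented.
from math import hypot

def path_difference(h_src, h_mic, dist):
    """Ground-reflected minus direct path length (image source at -h_src)."""
    direct = hypot(dist, h_mic - h_src)
    reflected = hypot(dist, h_mic + h_src)
    return reflected - direct

def null_frequencies(h_src, h_mic, dist, c=343.0, count=3):
    """First few interference-null frequencies, where the path difference
    equals (n + 1/2) wavelengths (in-phase reflection from a rigid plane)."""
    dpath = path_difference(h_src, h_mic, dist)
    return [(n + 0.5) * c / dpath for n in range(count)]

nulls = null_frequencies(h_src=2.0, h_mic=1.5, dist=10.0)
assert nulls[0] < nulls[1] < nulls[2]
```

    Matching predicted null locations to the measured spectrum is what lets the source origin be inferred by adjusting h_src, the procedure the abstract describes.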

  8. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks.

    PubMed

    Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J

    2011-11-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks, or whether kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides extensive coverage of the kinematic workspace. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of adjusted R²), followed by position (28 ± 24% of adjusted R²) and speed (11 ± 19% of adjusted R²). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower adjusted R² values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach.
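    The lead/lag regression described here can be illustrated by scanning candidate lags and keeping the one that maximizes R²; the synthetic signal, the 5-sample lead, and the noise level below are invented for the sketch and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: firing "leads" hand velocity by 5 samples.  The sine
# period (200) divides the record length (1000), so np.roll wraps cleanly.
t = np.arange(1000)
velocity = np.sin(2 * np.pi * t / 200) + 0.1 * rng.standard_normal(t.size)
firing = 2.0 * np.roll(velocity, -5) + 0.1 * rng.standard_normal(t.size)

def r_squared(lag):
    """R^2 of firing regressed on velocity shifted by `lag` samples
    (plain R^2; the adjustment matters little with one regressor)."""
    x = np.roll(velocity, -lag)
    A = np.column_stack([x, np.ones_like(x)])     # slope + intercept
    coef, *_ = np.linalg.lstsq(A, firing, rcond=None)
    resid = firing - A @ coef
    return 1.0 - resid.var() / firing.var()

lags = range(-20, 21)
tau = max(lags, key=r_squared)    # lag with the best fit
```

    The study's τ plays the role of this best-fitting lag, estimated jointly with position, velocity, and speed terms rather than velocity alone.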
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.

  9. Detonation product EOS studies: Using ISLS to refine CHEETAH

    NASA Astrophysics Data System (ADS)

    Zaug, Joseph; Fried, Larry; Hansen, Donald

    2001-06-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  10. Calibration of a simple and a complex model of global marine biogeochemistry

    NASA Astrophysics Data System (ADS)

    Kriest, Iris

    2017-11-01

    The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.

  11. Identification and Correction of Additive and Multiplicative Spatial Biases in Experimental High-Throughput Screening.

    PubMed

    Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir

    2018-06-01

    Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
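    For context, the simple additive row/column model that these new interaction-aware methods generalize can be corrected with Tukey's median polish; the plate size and bias values below are arbitrary, and this sketch is not the paper's AssayCorrector procedure:

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove additive row/column effects from a multiwell plate.

    Assumes measurement = true value + row effect + column effect (the
    simple additive spatial bias model).  Returns the corrected plate.
    """
    resid = plate.astype(float).copy()
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)   # row effects
        resid -= np.median(resid, axis=0, keepdims=True)   # column effects
    # Restore the overall plate level (adequate for this toy example).
    return resid + np.median(plate)

# Toy 96-well plate: flat signal of 100 plus one biased row and column.
plate = np.full((8, 12), 100.0)
plate[2, :] += 15.0     # additive row bias
plate[:, 5] += -10.0    # additive column bias
corrected = median_polish(plate)
```

    At the intersection well (2, 5) the two biases simply sum here; the paper's point is that real interactions need not be additive, which is what its extended models address.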

  12. Research on Capacity Addition using Market Model with Transmission Congestion under Competitive Environment

    NASA Astrophysics Data System (ADS)

    Katsura, Yasufumi; Attaviriyanupap, Pathom; Kataoka, Yoshihiko

    In this research, the fundamental premises for deregulation of the electric power industry are reevaluated. The authors develop a simple model to represent a wholesale electricity market with a highly congested network. The model is developed by simplifying the power system and market of the New York ISO, based on available New York ISO data from 2004 with some estimation. Based on the developed model and historical construction cost data, the economic impact of transmission line addition on market participants and the impact of deregulation on power plant additions in a market with transmission congestion are studied. Simulation results show that market signals may fail to facilitate proper capacity additions and may result in an undesirable cycle of over-construction and insufficient construction of capacity.

  13. Numerical model of solar dynamic radiator for parametric analysis

    NASA Technical Reports Server (NTRS)

    Rhatigan, Jennifer L.

    1989-01-01

    Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations.

  14. Simple model of foam drainage

    NASA Astrophysics Data System (ADS)

    Fortes, M. A.; Coughlan, S.

    1994-10-01

    A simple model of foam drainage is introduced in which the Plateau borders and quadruple junctions are identified with pools that discharge through channels to pools underneath. The flow is driven by gravity and there are friction losses in the exhausting channels. Bernoulli's equation combined with the Hagen-Poiseuille equation is applied to describe the flow. The area of the cross section of the exhausting channels can be taken as a constant or may vary during drainage. The predictions of the model are compared with standard drainage curves and with the results of a recently reported experiment in which additional liquid is supplied at the top of the froth.

  15. A simple branching model that reproduces language family and language population distributions

    NASA Astrophysics Data System (ADS)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution, and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data of real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data by additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used to define the distance between two languages, similar to the linguistics tradition since Swadesh.

  16. Oblique Impact Ejecta Flow Fields: An Application of Maxwell's Z Model

    NASA Technical Reports Server (NTRS)

    Anderson, J. L. B.; Schultz, P. H.; Heineck, J. T.

    2001-01-01

    Oblique impact flow fields show an evolution from asymmetric to symmetric ejecta flow. This evolution can be captured by a simple analytical description of the evolving flow-field origin using the Maxwell Z model. Additional information is contained in the original extended abstract.

  17. SIMPL: A Simplified Model-Based Program for the Analysis and Visualization of Groundwater Rebound in Abandoned Mines to Prevent Contamination of Water and Soils by Acid Mine Drainage

    PubMed Central

    Kim, Sung-Min

    2018-01-01

    Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480

  18. Detonation Product EOS Studies: Using ISLS to Refine Cheetah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaug, J M; Howard, W M; Fried, L E

    2001-08-08

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Computational models are systematically improved with each addition of experimental data.

  19. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

    In optical flow for motion allocation, obtaining a reliable Motion Vector (MV) is an important issue. Noisy conditions can make the results of optical flow algorithms unreliable. We find that many classical optical flow algorithms perform better under noisy conditions when combined with a modern optimization model. This paper introduces robust optical flow models that apply an adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments on the proposed models confirm better noise tolerance in the optical flow's MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of our models in experiments on several typical sequences with different foreground and background movement speeds, where the sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The results, measured by peak signal-to-noise ratio (PSNR), show that the models are highly noise tolerant.
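    A sketch of the Lorentzian robust norm and its influence function, as used in the robust estimation literature (Black and Anandan style); the scale parameter σ is an assumed constant here, whereas the paper's variant adapts it:

```python
import numpy as np

def lorentzian_rho(x, sigma=1.0):
    """Lorentzian robust error norm: rho(x) = log(1 + x^2 / (2 sigma^2))."""
    return np.log1p(0.5 * (x / sigma) ** 2)

def lorentzian_psi(x, sigma=1.0):
    """Influence function (derivative of the norm).  It is bounded, so
    large residuals such as AWGN outliers receive limited influence,
    unlike the quadratic norm whose influence grows without bound."""
    return 2.0 * x / (2.0 * sigma ** 2 + x ** 2)

# IRLS-style per-residual weights psi(x)/x: small residuals get weight
# near 1, gross outliers are effectively down-weighted toward zero.
residuals = np.array([0.1, 1.0, 10.0, 100.0])
weights = lorentzian_psi(residuals) / residuals
```

    Plugging such a weight into a frame-to-frame correlation or spatial-temporal constraint is the general mechanism by which robust norms improve MV estimates under noise.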

  20. Simple, distance-dependent formulation of the Watts-Strogatz model for directed and undirected small-world networks.

    PubMed

    Song, H Francis; Wang, Xiao-Jing

    2014-12-01

    Small-world networks-complex networks characterized by a combination of high clustering and short path lengths-are widely studied using the paradigmatic model of Watts and Strogatz (WS). Although the WS model is already quite minimal and intuitive, we describe an alternative formulation of the WS model in terms of a distance-dependent probability of connection that further simplifies, both practically and theoretically, the generation of directed and undirected WS-type small-world networks. In addition to highlighting an essential feature of the WS model that has previously been overlooked, namely the equivalence to a simple distance-dependent model, this alternative formulation makes it possible to derive exact expressions for quantities such as the degree and motif distributions and global clustering coefficient for both directed and undirected networks in terms of model parameters.
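    The distance-dependent formulation can be sketched as follows; the specific probabilities ((1 − β) + βK/(N − 1) for near pairs on the ring, βK/(N − 1) for far pairs) are our reading of the stated equivalence and should be checked against the paper:

```python
import numpy as np

def ws_distance_model(n, k, beta, rng):
    """Undirected WS-type small-world graph from a distance-dependent
    connection probability: pairs within ring distance k/2 connect with
    high probability, all other pairs with a small uniform probability."""
    adj = np.zeros((n, n), dtype=bool)
    p_far = beta * k / (n - 1)            # long-range "rewired" links
    p_near = (1 - beta) + p_far           # retained lattice links
    for i in range(n):
        for j in range(i + 1, n):
            d = min(j - i, n - (j - i))   # distance on the ring
            p = p_near if d <= k // 2 else p_far
            if rng.random() < p:
                adj[i, j] = adj[j, i] = True
    return adj

rng = np.random.default_rng(1)
adj = ws_distance_model(200, 6, 0.1, rng)
mean_degree = adj.sum() / 200             # close to k on average
```

    Because each edge is an independent Bernoulli draw, quantities such as the degree distribution follow directly, which is the analytical convenience the abstract highlights.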

  2. A practical model for pressure probe system response estimation (with review of existing models)

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.

  3. Three-Space Interaction in Doubly Sinusoidal Periodic Media

    NASA Astrophysics Data System (ADS)

    Tian-Lin, Dong; Ping, Chen

    2006-06-01

    Three-space-harmonic (3SH) interaction in a doubly sinusoidal periodic (DSP) medium is investigated. Associated physical effects, such as an additional gap, defect states, and indirect gaps, are revealed theoretically and numerically. This simple DSP model can facilitate the understanding and utilization of a series of effects in rather complicated periodic structures with additional defects or modulation.

  4. Simple stochastic model for El Niño with westerly wind bursts

    PubMed Central

    Thual, Sulian; Majda, Andrew J.; Chen, Nan; Stechmann, Samuel N.

    2016-01-01

    Atmospheric wind bursts in the tropics play a key role in the dynamics of the El Niño Southern Oscillation (ENSO). A simple modeling framework is proposed that summarizes this relationship and captures major features of the observational record while remaining physically consistent and amenable to detailed analysis. Within this simple framework, wind burst activity evolves according to a stochastic two-state Markov switching–diffusion process that depends on the strength of the western Pacific warm pool, and is coupled to simple ocean–atmosphere processes that are otherwise deterministic, stable, and linear. A simple model with this parameterization and no additional nonlinearities reproduces a realistic ENSO cycle with intermittent El Niño and La Niña events of varying intensity and strength as well as realistic buildup and shutdown of wind burst activity in the western Pacific. The wind burst activity has a direct causal effect on the ENSO variability: in particular, it intermittently triggers regular El Niño or La Niña events, super El Niño events, or no events at all, which enables the model to capture observed ENSO statistics such as the probability density function and power spectrum of eastern Pacific sea surface temperatures. The present framework provides further theoretical and practical insight on the relationship between wind burst activity and the ENSO. PMID:27573821
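    The switching-diffusion idea can be sketched with a two-state Markov jump process modulating an Ornstein-Uhlenbeck relaxation. All rates and amplitudes below are illustrative stand-ins, and the constant switching rates are a simplification: in the paper the rates depend on the warm pool state.

```python
import numpy as np

def simulate_wind_bursts(T=5000, dt=0.1, lam_on=0.05, lam_off=0.05,
                         damping=0.5, sigma_active=1.0, sigma_quiet=0.05,
                         rng=None):
    """Two-state Markov switching-diffusion sketch: burst amplitude
    relaxes toward zero while its noise level is switched by a
    quiet/active Markov jump process."""
    rng = rng or np.random.default_rng(0)
    a = np.zeros(T)                    # wind burst amplitude
    states = np.zeros(T, dtype=int)    # 0 = quiet, 1 = active
    state = 0
    for t in range(1, T):
        switch_rate = lam_on if state == 0 else lam_off
        if rng.random() < switch_rate * dt:
            state = 1 - state          # Markov jump between regimes
        sigma = sigma_active if state == 1 else sigma_quiet
        # Euler-Maruyama step of the damped diffusion.
        a[t] = (a[t - 1] - damping * a[t - 1] * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
        states[t] = state
    return a, states

a, states = simulate_wind_bursts()
```

    Coupling such intermittent forcing to otherwise linear, stable ocean-atmosphere dynamics is what lets the framework produce events of varying intensity without additional nonlinearities.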

  5. A simple, analytic 3-dimensional downburst model based on boundary layer stagnation flow

    NASA Technical Reports Server (NTRS)

    Oseguera, Rosa M.; Bowles, Roland L.

    1988-01-01

    A simple downburst model is developed for use in batch and real-time piloted simulation studies of guidance strategies for terminal area transport aircraft operations in wind shear conditions. The model represents an axisymmetric stagnation point flow, based on velocity profiles from the Terminal Area Simulation System (TASS) model developed by Proctor, and satisfies the mass continuity equation in cylindrical coordinates. Altitude dependence, including boundary layer effects near the ground, closely matches real-world measurements, as do the increase, peak, and decay of outflow and downflow with increasing distance from the downburst center. Equations for horizontal and vertical winds were derived and found to be infinitely differentiable, with no singular points in the flow field. In addition, a simple relationship exists among the ratio of maximum horizontal to vertical velocities, the downdraft radius, the depth of outflow, and the altitude of maximum outflow. In use, a microburst can be modeled by specifying four characteristic parameters; velocity components in the x, y, and z directions and the corresponding nine partial derivatives are obtained easily from the velocity equations.
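    The continuity constraint at the heart of the model can be checked directly on the textbook axisymmetric stagnation flow u_r = a·r, w = −2a·z; the actual Oseguera-Bowles profiles add altitude shaping functions for the boundary layer that are not reproduced here:

```python
import numpy as np

def velocity(r, z, a=0.1):
    """Bare axisymmetric stagnation-point flow: radial outflow u_r = a*r,
    downflow w = -2*a*z.  Illustrates the mass-continuity constraint the
    downburst model is built to satisfy."""
    return a * r, -2.0 * a * z

def divergence(r, z, h=1e-5):
    """Cylindrical divergence (1/r) d(r u_r)/dr + dw/dz, via central
    finite differences."""
    ur_p, _ = velocity(r + h, z)
    ur_m, _ = velocity(r - h, z)
    d_rur = ((r + h) * ur_p - (r - h) * ur_m) / (2 * h)
    _, w_p = velocity(r, z + h)
    _, w_m = velocity(r, z - h)
    dw_dz = (w_p - w_m) / (2 * h)
    return d_rur / r + dw_dz

div = divergence(3.0, 1.5)    # vanishes: mass is conserved
```

    The same check applies after multiplying each component by altitude shaping functions, provided the shaping is chosen so the combined divergence still cancels.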

  6. Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2010-01-01

    A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…

  7. Alternative Tsunami Models

    ERIC Educational Resources Information Center

    Tan, A.; Lyatskaya, I.

    2009-01-01

    The interesting papers by Margaritondo (2005 "Eur. J. Phys." 26 401) and by Helene and Yamashita (2006 "Eur. J. Phys." 27 855) analysed the great Indian Ocean tsunami of 2004 using a simple one-dimensional canal wave model, which was appropriate for undergraduate students in physics and related fields of discipline. In this paper, two additional,…

  8. Comparative system identification of flower tracking performance in three hawkmoth species reveals adaptations for dim light vision.

    PubMed

    Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon

    2017-04-05

    Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also prompts consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. However, much less explored are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model alone; however, in several cases, they could be explained through the addition of a second model parameter, a simple scaling term, that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).

  9. Magnetic x-ray scattering studies of holmium using synchrotron radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibbs, D.; Moncton, D.E.; D'Amico, K.L.

    1985-07-08

    We present the results of magnetic x-ray scattering experiments on the rare-earth metal holmium using synchrotron radiation. Direct high-resolution measurements of the nominally incommensurate magnetic satellite reflections reveal new lock-in behavior which we explain within a simple spin-discommensuration model. As a result of magnetoelastic coupling, the spin-discommensuration array produces additional x-ray diffraction satellites. Their observation further substantiates the model and demonstrates additional advantages of synchrotron radiation for magnetic-structure studies.

  10. Simple rules for a "simple" nervous system? Molecular and biomathematical approaches to enteric nervous system formation and malformation.

    PubMed

    Newgreen, Donald F; Dufour, Sylvie; Howard, Marthe J; Landman, Kerry A

    2013-10-01

    We review morphogenesis of the enteric nervous system from migratory neural crest cells, and defects of this process such as Hirschsprung disease, centering on cell motility and assembly, and cell adhesion and extracellular matrix molecules, along with cell proliferation and growth factors. We then review continuum and agent-based (cellular automata) models with rules of cell movement and logistical proliferation. Both movement and proliferation at the individual cell level are modeled with stochastic components from which stereotyped outcomes emerge at the population level. These models reproduced the wave-like colonization of the intestine by enteric neural crest cells, and several new properties emerged, such as colonization by frontal expansion, which were later confirmed biologically. These models predict a surprising level of clonal heterogeneity both in terms of number and distribution of daughter cells. Biologically, migrating cells form stable chains made up of unstable cells, but this is not seen in the initial model. We outline additional rules for cell differentiation into neurons, axon extension, cell-axon and cell-cell adhesions, chemotaxis and repulsion which can reproduce chain migration. After the migration stage, the cells re-arrange as a network of ganglia. Changes in cell adhesion molecules parallel this, and we describe additional rules based on Steinberg's Differential Adhesion Hypothesis, reflecting changing levels of adhesion in neural crest cells and neurons. This was able to reproduce enteric ganglionation in a model. Mouse mutants with disturbances of enteric nervous system morphogenesis are discussed, and these suggest future refinement of the models. The modeling suggests a relatively simple set of cell behavioral rules could account for complex patterns of morphogenesis. The model has allowed the proposal that Hirschsprung disease is mostly an enteric neural crest cell proliferation defect, not a defect of cell migration. 
In addition, the model suggests explanations for zonal and skip segment variants of Hirschsprung disease, and also gives a novel stochastic explanation for the observed discordancy of Hirschsprung disease in identical twins. © 2013 Elsevier Inc. All rights reserved.

  11. A class of simple bouncing and late-time accelerating cosmologies in f(R) gravity

    NASA Astrophysics Data System (ADS)

    Kuiroukidis, A.

    We consider the field equations for a flat FRW cosmological model, given by Eq. (??), in an a priori generic f(R) gravity model and cast them into a completely normalized and dimensionless system of ODEs for the scale factor and the function f(R), with respect to the scalar curvature R. It is shown that under reasonable assumptions, namely a power-law functional form for the f(R) gravity model, one can produce simple analytical and numerical solutions describing bouncing cosmological models that are, in addition, late-time accelerating. The power-law form for the f(R) gravity model is typically considered in the literature as the most concrete, reasonable, practical and viable assumption [see S. D. Odintsov and V. K. Oikonomou, Phys. Rev. D 90 (2014) 124083, arXiv:1410.8183 [gr-qc]].

  12. Transcription, intercellular variability and correlated random walk.

    PubMed

    Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar

    2008-11-01

    We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
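    The on/off transcription model can be sketched as a random telegraph process driving linear mRNA dynamics; the rates below are arbitrary, and the scaled Beta(k_on/δ, k_off/δ) stationary law is the classical dichotomous-noise result the abstract refers to:

```python
import numpy as np

def simulate_mrna(T=200000, dt=0.01, k_on=1.0, k_off=2.0,
                  alpha=1.0, delta=1.0, rng=None):
    """mRNA driven by a random telegraph promoter:
        dm/dt = alpha * s(t) - delta * m,   s in {0, 1},
    where s switches on with rate k_on and off with rate k_off.  The
    stationary law of m / (alpha/delta) is Beta(k_on/delta, k_off/delta)."""
    rng = rng or np.random.default_rng(0)
    m, s = 0.0, 0
    samples = np.empty(T)
    for t in range(T):
        rate = k_on if s == 0 else k_off
        if rng.random() < rate * dt:
            s = 1 - s                          # promoter switches state
        m += (alpha * s - delta * m) * dt      # linear production/decay
        samples[t] = m
    return samples

samples = simulate_mrna()
# With k_on = 1, k_off = 2, the promoter is on 1/3 of the time, so the
# stationary mean of m is (alpha/delta) * 1/3.
```

    Adding a production rate that itself depends on m gives the positive feedback case; bistability of that ODE shows up as bimodality of the sampled distribution, as the abstract notes.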

  13. A Computational Study of How Orientation Bias in the Lateral Geniculate Nucleus Can Give Rise to Orientation Selectivity in Primary Visual Cortex

    PubMed Central

    Kuhlmann, Levin; Vidyasagar, Trichur R.

    2011-01-01

    Controversy remains about how orientation selectivity emerges in simple cells of the mammalian primary visual cortex. In this paper, we present a computational model of how the orientation-biased responses of cells in lateral geniculate nucleus (LGN) can contribute to the orientation selectivity in simple cells in cats. We propose that simple cells are excited by lateral geniculate fields with an orientation-bias and disynaptically inhibited by unoriented lateral geniculate fields (or biased fields pooled across orientations), both at approximately the same retinotopic co-ordinates. This interaction, combined with recurrent cortical excitation and inhibition, helps to create the sharp orientation tuning seen in simple cell responses. Along with describing orientation selectivity, the model also accounts for the spatial frequency and length–response functions in simple cells, in normal conditions as well as under the influence of the GABAA antagonist, bicuculline. In addition, the model captures the response properties of LGN and simple cells to simultaneous visual stimulation and electrical stimulation of the LGN. We show that the sharp selectivity for stimulus orientation seen in primary visual cortical cells can be achieved without the excitatory convergence of the LGN input cells with receptive fields along a line in visual space, which has been a core assumption in classical models of visual cortex. We have also simulated how the full range of orientations seen in the cortex can emerge from the activity among broadly tuned channels tuned to a limited number of optimum orientations, just as in the classical case of coding for color in trichromatic primates. PMID:22013414

  14. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  15. A simple electric circuit model for proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Lazarou, Stavros; Pyrgioti, Eleftheria; Alexandridis, Antonio T.

A simple and novel dynamic circuit model for a proton exchange membrane (PEM) fuel cell suitable for the analysis and design of power systems is presented. The model takes into account phenomena like activation polarization, ohmic polarization, and mass transport effects present in a PEM fuel cell. The proposed circuit model includes three resistors to adequately represent these phenomena; however, since the connection or disconnection of an additional load is of crucial importance for PEM dynamic performance, the proposed model uses two saturable inductors accompanied by an ideal transformer to simulate the double layer charging effect during load step changes. To evaluate the effectiveness of the proposed model, its dynamic performance under load step changes is simulated. Experimental results from a commercial PEM fuel cell module that uses hydrogen from a pressurized cylinder at the anode and atmospheric oxygen at the cathode clearly verify the simulation results.

  16. Can Neuroscience Help Us Do a Better Job of Teaching Music?

    ERIC Educational Resources Information Center

    Hodges, Donald A.

    2010-01-01

We are just at the beginning stages of applying neuroscientific findings to music teaching. A simple model of the learning cycle based on neuroscience is Sense → Integrate → Act (sometimes modified as Act → Sense → Integrate). Additional components can be added to the model, including such concepts…

  17. Chaos and simple determinism in reversed field pinch plasmas: Nonlinear analysis of numerical simulation and experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Christopher A.

In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincare sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.

  18. Acute Diarrheal Syndromic Surveillance

    PubMed Central

    Kam, H.J.; Choi, S.; Cho, J.P.; Min, Y.G.; Park, R.W.

    2010-01-01

Objective In an effort to identify and characterize the environmental factors that affect the number of patients with acute diarrheal (AD) syndrome, we developed and tested two regional surveillance models including holiday and weather information in addition to visitor records, at emergency medical facilities in the Seoul metropolitan area of Korea. Methods With 1,328,686 emergency department visitor records from the National Emergency Department Information System (NEDIS) and the holiday and weather information, two seasonal ARIMA models were constructed: (1) the simple model (with total patient number only) and (2) the environmental factor-added model. The stationary R-squared was utilized as an in-sample goodness-of-fit statistic for the constructed models, and the cumulative mean of the Mean Absolute Percentage Error (MAPE) was used to measure post-sample forecast accuracy over the next 1 month. Results The (1,0,1)(0,1,1)7 ARIMA model resulted in an adequate model fit for the daily number of AD patient visits over 12 months for both cases. Among various features, the total number of patient visits was selected as a commonly influential independent variable. Additionally, for the environmental factor-added model, holidays and daily precipitation were selected as features that affected model fitting with statistical significance. Stationary R-squared values ranged from 0.651 to 0.828 (simple) and from 0.805 to 0.844 (environmental factor-added) with p<0.05. In terms of prediction, the MAPE values ranged from 0.090 to 0.120 and from 0.089 to 0.114, respectively. Conclusion The environmental factor-added model yielded better MAPE values. Holiday and weather information appear to be crucial for the construction of an accurate syndromic surveillance model for AD, in addition to the visitor and assessment records. PMID:23616829
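Fitting the seasonal ARIMA itself is best left to a statistics package, but the post-sample accuracy measure used here is easy to sketch. A minimal MAPE computation, using made-up daily visit counts rather than NEDIS data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error over a forecast horizon."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)))

# Illustrative week of daily AD visit counts (invented, not NEDIS data).
actual = [120, 135, 128, 140, 150, 160, 155]
forecast = [118, 130, 131, 138, 148, 165, 150]
error = mape(actual, forecast)   # fraction, e.g. 0.02 means 2% average error
```

The cumulative mean of this quantity over successive forecast windows gives the accuracy figures reported above.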

  19. Basinwide response of the Atlantic Meridional Overturning Circulation to interannual wind forcing

    NASA Astrophysics Data System (ADS)

    Zhao, Jian

    2017-12-01

    An eddy-resolving Ocean general circulation model For the Earth Simulator (OFES) and a simple wind-driven two-layer model are used to investigate the role of momentum fluxes in driving the Atlantic Meridional Overturning Circulation (AMOC) variability throughout the Atlantic basin from 1950 to 2010. Diagnostic analysis using the OFES results suggests that interior baroclinic Rossby waves and coastal topographic waves play essential roles in modulating the AMOC interannual variability. The proposed mechanisms are verified in the context of a simple two-layer model with realistic topography and only forced by surface wind. The topographic waves communicate high-latitude anomalies into lower latitudes and account for about 50% of the AMOC interannual variability in the subtropics. In addition, the large scale Rossby waves excited by wind forcing together with topographic waves set up coherent AMOC interannual variability patterns across the tropics and subtropics. The comparisons between the simple model and OFES results suggest that a large fraction of the AMOC interannual variability in the Atlantic basin can be explained by wind-driven dynamics.

  20. Phenomenology of wall-bounded Newtonian turbulence.

    PubMed

    L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Zilitinkevich, Sergej S

    2006-01-01

    We construct a simple analytic model for wall-bounded turbulence, containing only four adjustable parameters. Two of these parameters are responsible for the viscous dissipation of the components of the Reynolds stress tensor. The other two parameters control the nonlinear relaxation of these objects. The model offers an analytic description of the profiles of the mean velocity and the correlation functions of velocity fluctuations in the entire boundary region, from the viscous sublayer, through the buffer layer, and further into the log-law turbulent region. In particular, the model predicts a very simple distribution of the turbulent kinetic energy in the log-law region between the velocity components: the streamwise component contains a half of the total energy whereas the wall-normal and cross-stream components contain a quarter each. In addition, the model predicts a very simple relation between the von Kármán slope k and the turbulent velocity in the log-law region v+ (in wall units): v+=6k. These predictions are in excellent agreement with direct numerical simulation data and with recent laboratory experiments.

  1. Mortality and economic instability: detailed analyses for Britain and comparative analyses for selected industrialized countries.

    PubMed

    Brenner, M H

    1983-01-01

    This paper discusses a first-stage analysis of the link of unemployment rates, as well as other economic, social and environmental health risk factors, to mortality rates in postwar Britain. The results presented represent part of an international study of the impact of economic change on mortality patterns in industrialized countries. The mortality patterns examined include total and infant mortality and (by cause) cardiovascular (total), cerebrovascular and heart disease, cirrhosis of the liver, and suicide, homicide and motor vehicle accidents. Among the most prominent factors that beneficially influence postwar mortality patterns in England/Wales and Scotland are economic growth and stability and health service availability. A principal detrimental factor to health is a high rate of unemployment. Additional factors that have an adverse influence on mortality rates are cigarette consumption and heavy alcohol use and unusually cold winter temperatures (especially in Scotland). The model of mortality that includes both economic changes and behavioral and environmental risk factors was successfully applied to infant mortality rates in the interwar period. In addition, the "simple" economic change model of mortality (using only economic indicators) was applied to other industrialized countries. In Canada, the United States, the United Kingdom, and Sweden, the simple version of the economic change model could be successfully applied only if the analysis was begun before World War II; for analysis beginning in the postwar era, the more sophisticated economic change model, including behavioral and environmental risk factors, was required. In France, West Germany, Italy, and Spain, by contrast, some success was achieved using the simple economic change model.

  2. Early Planetary Differentiation: Comparative Planetology

    NASA Technical Reports Server (NTRS)

    Jones, John H.

    2006-01-01

We currently have extensive data for four different terrestrial bodies of the inner solar system: Earth, the Moon, Mars, and the Eucrite Parent Body [EPB]. All formed early cores; but all(?) have mantles with elevated concentrations of highly siderophile elements, suggestive of the addition of a late "veneer". Two appear to have undergone extensive differentiation consistent with a global magma ocean. One appears to be inconsistent with a simple model of "low-pressure" chondritic differentiation. Thus, there seems to be no single, simple paradigm for understanding early differentiation.

  3. Brachypodium distachyon genetic resources

    USDA-ARS?s Scientific Manuscript database

    Brachypodium distachyon is a well-established model species for the grass family Poaceae. It possesses an array of features that make it suited for this purpose, including a small sequenced genome, simple transformation methods, and additional functional genomics tools. However, the most critical to...

  4. Free frequencies of the Helmholtz and Liouville equations for a simplified Jeffreys-type earth model (Autofrecuencias de las ecuaciones de Helmholtz y Liouville para un modelo de tierra tipo Jeffreys simplificado).

    NASA Astrophysics Data System (ADS)

    Sevilla, M. J.; González-Camacho, A.

    The authors obtain expressions for the free frequencies of polar motion for an ellipsoidal, rotating and perturbed earth model constituted by an elastic mantle with a homogeneous liquid core undergoing additional simple motion.

  5. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
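The two-step model choice described above can be sketched with ordinary least squares on synthetic (not USGS) calibration data, comparing residual error with and without the streamflow term:

```python
import numpy as np

# Synthetic calibration data: log-SSC roughly linear in log-turbidity,
# with a smaller streamflow contribution (coefficients are invented).
rng = np.random.default_rng(1)
log_turb = rng.uniform(1.0, 5.0, 50)
log_q = rng.uniform(2.0, 6.0, 50)
log_ssc = 0.9 * log_turb + 0.15 * log_q + 0.3 + rng.normal(0.0, 0.05, 50)

def ols(X, y):
    """Ordinary least squares; returns coefficients and residual sum of squares."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(resid @ resid)

ones = np.ones_like(log_turb)
coef_s, ss_simple = ols(np.column_stack([ones, log_turb]), log_ssc)
coef_m, ss_multi = ols(np.column_stack([ones, log_turb, log_q]), log_ssc)
# Prefer the multiple model only if the streamflow term is statistically
# significant and reduces model uncertainty; otherwise keep the simple model.
```

The nested multiple model always fits at least as well in-sample, which is why the guidelines require a significance test and an uncertainty comparison rather than a raw error comparison.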

  6. An integrated Gaussian process regression for prediction of remaining useful life of slow speed bearings based on acoustic emission

    NASA Astrophysics Data System (ADS)

    Aye, S. A.; Heyns, P. S.

    2017-02-01

This paper proposes an optimal Gaussian process regression (GPR) for the prediction of remaining useful life (RUL) of slow speed bearings based on a novel degradation assessment index obtained from acoustic emission signals. The optimal GPR is obtained from an integration or combination of existing simple mean and covariance functions in order to capture the observed trend of the bearing degradation as well as the irregularities in the data. The resulting integrated GPR model provides an excellent fit to the data and improves over the simple GPR models that are based on simple mean and covariance functions. In addition, it achieves a low percentage error in predicting the remaining useful life of slow speed bearings. These findings are robust under varying operating conditions such as loading and speed and can be applied to nonlinear and nonstationary machine response signals useful for effective preventive machine maintenance purposes.
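The idea of combining simple covariance functions can be sketched with a plain-NumPy GP posterior mean; the kernel mixture and parameter values below are illustrative assumptions, not the paper's tuned model:

```python
import numpy as np

def kernel(x1, x2, length=1.0, var_rbf=1.0, var_lin=0.1):
    """Combined covariance: smooth RBF term plus a linear (trend) term."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return var_rbf * np.exp(-0.5 * d2 / length**2) + var_lin * np.outer(x1, x2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """GP regression posterior mean with observation noise `noise`."""
    K = kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

# Degradation-index-like signal: slow trend plus an oscillatory component.
x = np.linspace(0.0, 5.0, 30)
y = 0.2 * x + np.sin(x)
fitted = gp_posterior_mean(x, y, x)
```

Summing kernels is the standard way to let one GP capture both a global trend and local irregularities, which is the combination strategy the abstract describes.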

  7. Experimental evaluation of expendable supersonic nozzle concepts

    NASA Technical Reports Server (NTRS)

    Baker, V.; Kwon, O.; Vittal, B.; Berrier, B.; Re, R.

    1990-01-01

    Exhaust nozzles for expendable supersonic turbojet engine missile propulsion systems are required to be simple, short and compact, in addition to having good broad-range thrust-minus-drag performance. A series of convergent-divergent nozzle scale model configurations were designed and wind tunnel tested for a wide range of free stream Mach numbers and nozzle pressure ratios. The models included fixed geometry and simple variable exit area concepts. The experimental and analytical results show that the fixed geometry configurations tested have inferior off-design thrust-minus-drag performance in the transonic Mach range. A simple variable exit area configuration called the Axi-Quad nozzle, combining features of both axisymmetric and two-dimensional convergent-divergent nozzles, performed well over a broad range of operating conditions. Analytical predictions of the flow pattern as well as overall performance of the nozzles, using a fully viscous, compressible CFD code, compared very well with the test data.

  8. Intrinsic dimensionality predicts the saliency of natural dynamic scenes.

    PubMed

    Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt

    2012-06-01

    Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.

  9. Testing the Two-Layer Model for Correcting Clear Sky Reflectance near Clouds

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Evans, Frank; Varnai, Tamas; Levy, Rob

    2015-01-01

A two-layer model (2LM) was developed in our earlier studies to estimate the clear sky reflectance enhancement due to cloud-molecular radiative interaction for MODIS observations at 0.47 micrometers. Recently, we extended the model to include cloud-surface and cloud-aerosol radiative interactions. We use LES/SHDOM-simulated 3D radiation fields as truth to test the 2LM reflectance enhancement at 0.47 micrometers. We find that the simple model captures the viewing angle dependence of the reflectance enhancement near cloud, suggesting the physics of this model is correct; the cloud-molecular interaction alone accounts for 70 percent of the enhancement; the cloud-surface interaction accounts for 16 percent of the enhancement; and the cloud-aerosol interaction accounts for an additional 13 percent of the enhancement. We conclude that the 2LM is simple to apply and unbiased.

  10. Simple dynamical models capturing the key features of the Central Pacific El Niño.

    PubMed

    Chen, Nan; Majda, Andrew J

    2016-10-18

    The Central Pacific El Niño (CP El Niño) has been frequently observed in recent decades. The phenomenon is characterized by an anomalous warm sea surface temperature (SST) confined to the central Pacific and has different teleconnections from the traditional El Niño. Here, simple models are developed and shown to capture the key mechanisms of the CP El Niño. The starting model involves coupled atmosphere-ocean processes that are deterministic, linear, and stable. Then, systematic strategies are developed for incorporating several major mechanisms of the CP El Niño into the coupled system. First, simple nonlinear zonal advection with no ad hoc parameterization of the background SST gradient is introduced that creates coupled nonlinear advective modes of the SST. Secondly, due to the recent multidecadal strengthening of the easterly trade wind, a stochastic parameterization of the wind bursts including a mean easterly trade wind anomaly is coupled to the simple atmosphere-ocean processes. Effective stochastic noise in the wind burst model facilitates the intermittent occurrence of the CP El Niño with realistic amplitude and duration. In addition to the anomalous warm SST in the central Pacific, other major features of the CP El Niño such as the rising branch of the anomalous Walker circulation being shifted to the central Pacific and the eastern Pacific cooling with a shallow thermocline are all captured by this simple coupled model. Importantly, the coupled model succeeds in simulating a series of CP El Niño that lasts for 5 y, which resembles the two CP El Niño episodes during 1990-1995 and 2002-2006.
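The stochastic wind-burst component can be caricatured (in an assumed Ornstein-Uhlenbeck form, not the authors' exact parameterization) as relaxation toward a mean easterly anomaly plus white-noise forcing:

```python
import numpy as np

def wind_bursts(n_steps, dt=0.1, damping=0.5, a_mean=-0.3, sigma=0.4, seed=0):
    """Discretized OU process: da = -damping * (a - a_mean) * dt + sigma * dW.
    All parameter values are illustrative, not fitted."""
    rng = np.random.default_rng(seed)
    a = np.empty(n_steps)
    a[0] = a_mean
    for t in range(1, n_steps):
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        a[t] = a[t - 1] - damping * (a[t - 1] - a_mean) * dt + noise
    return a

bursts = wind_bursts(50_000)
# The long-run mean sits at the easterly anomaly; intermittent excursions
# above zero play the role of westerly wind bursts.
```

Coupling such a noisy amplitude to the atmosphere-ocean modes is what lets the simple model produce intermittent CP El Niño events rather than regular oscillations.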

  11. Time delays, population, and economic development

    NASA Astrophysics Data System (ADS)

    Gori, Luca; Guerrini, Luca; Sodini, Mauro

    2018-05-01

    This research develops an augmented Solow model with population dynamics and time delays. The model produces either a single stationary state or multiple stationary states (able to characterise different development regimes). The existence of time delays may cause persistent fluctuations in both economic and demographic variables. In addition, the work identifies in a simple way the reasons why economics affects demographics and vice versa.

  12. Applying the take-grant protection model

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1990-01-01

The Take-Grant Protection Model has in the past been used to model multilevel security hierarchies and simple protection systems. The models are extended to include theft of rights and sharing of information, and additional security policies are examined. The analysis suggests that in some cases the basic rules of the Take-Grant Protection Model should be augmented to represent the policy properly; when appropriate, such modifications are made and their effects with respect to the policy and its Take-Grant representation are discussed.

  13. Ciona as a Simple Chordate Model for Heart Development and Regeneration

    PubMed Central

    Evans Anderson, Heather; Christiaen, Lionel

    2016-01-01

    Cardiac cell specification and the genetic determinants that govern this process are highly conserved among Chordates. Recent studies have established the importance of evolutionarily-conserved mechanisms in the study of congenital heart defects and disease, as well as cardiac regeneration. As a basal Chordate, the Ciona model system presents a simple scaffold that recapitulates the basic blueprint of cardiac development in Chordates. Here we will focus on the development and cellular structure of the heart of the ascidian Ciona as compared to other Chordates, principally vertebrates. Comparison of the Ciona model system to heart development in other Chordates presents great potential for dissecting the genetic mechanisms that underlie congenital heart defects and disease at the cellular level and might provide additional insight into potential pathways for therapeutic cardiac regeneration. PMID:27642586

  14. SUSTAIN: a network model of category learning.

    PubMed

    Love, Bradley C; Medin, Douglas L; Gureckis, Todd M

    2004-04-01

    SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes-attractors-rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning.
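The recruitment rule can be caricatured in a few lines (a deliberately simplified stand-in; the actual SUSTAIN model uses attention-weighted similarity and supervised weight learning):

```python
import numpy as np

def recruit_clusters(items, labels, threshold=0.5):
    """Recruit a new cluster whenever the nearest cluster is too far away
    or predicts the wrong label (a 'surprising event'); otherwise the
    nearest matching cluster moves toward the item."""
    clusters, cluster_labels = [], []
    for x, y in zip(items, labels):
        if clusters:
            dists = [float(np.linalg.norm(x - c)) for c in clusters]
            i = int(np.argmin(dists))
            if dists[i] < threshold and cluster_labels[i] == y:
                clusters[i] = (clusters[i] + x) / 2.0  # adapt existing cluster
                continue
        clusters.append(np.asarray(x, dtype=float))
        cluster_labels.append(y)
    return clusters, cluster_labels

items = [np.array(v, dtype=float) for v in ([0, 0], [0.1, 0], [1, 1], [0.9, 1])]
clusters, labs = recruit_clusters(items, ["bird", "bird", "mammal", "mammal"])
# Two clusters emerge, one per region of category substructure.
```

The key behavior illustrated is that structure starts simple and grows only when the current clusters fail to account for an observation.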

  15. Partial exposure of frog heart to high-potassium solution: an easily reproducible model mimicking ST segment changes.

    PubMed

    Kon, Nobuaki; Abe, Nozomu; Miyazaki, Masahiro; Mushiake, Hajime; Kazama, Itsuro

    2018-04-18

By simply inducing burn injuries on the bullfrog heart, we previously reported a simple model of abnormal ST segment changes observed in human ischemic heart disease. In the present study, instead of inducing burn injuries, we partially exposed the surface of the frog heart to high-potassium (K+) solution to create a concentration gradient of the extracellular K+ within the myocardium. Dual recordings of ECG and the cardiac action potential demonstrated significant elevation of the ST segment and the resting membrane potential, indicating its usefulness as a simple model of heart injury. Additionally, from our results, Na+/K+-ATPase activity was thought to be primarily responsible for generating the K+ concentration gradient and inducing the ST segment changes in ECG.

  16. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since each of the three model components allowed two possible modes of input, the validity of eight (2^3) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
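One such model combination (lognormal intake and concentration, presence entered as a probability) can be sketched as a Monte Carlo simulation; every parameter value below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Per food group: (mu, sigma) of log intake [g/day], probability the
# additive is present, (mu, sigma) of log concentration [mg/g].
groups = [((3.0, 0.5), 0.4, (-2.0, 0.3)),
          ((2.0, 0.7), 0.7, (-1.5, 0.4))]

intake = np.zeros(n)
for (mu_f, sd_f), p_present, (mu_c, sd_c) in groups:
    food = rng.lognormal(mu_f, sd_f, n)
    present = rng.random(n) < p_present      # Bernoulli presence per occasion
    conc = rng.lognormal(mu_c, sd_c, n)
    intake += food * present * conc

p975 = float(np.percentile(intake, 97.5))    # upper-tail exposure [mg/day]
# Validation in the paper's sense asks whether this distribution sits below
# the conservative MPL-based point estimate yet above 'true' intakes.
```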

  17. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  18. The power to detect linkage in complex disease by means of simple LOD-score analyses.

    PubMed Central

    Greenberg, D A; Abreu, P; Hodge, S E

    1998-01-01

    Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328

  20. Circuit models and three-dimensional electromagnetic simulations of a 1-MA linear transformer driver stage

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Miller, C. L.; Welch, D. R.; Clark, R. E.; Madrid, E. A.; Mostrom, C. B.; Stygar, W. A.; Lechien, K. R.; Mazarakis, M. A.; Langston, W. L.; Porter, J. L.; Woodworth, J. R.

    2010-09-01

    A 3D fully electromagnetic (EM) model of the principal pulsed-power components of a high-current linear transformer driver (LTD) has been developed. LTD systems are a relatively new modular and compact pulsed-power technology based on high-energy density capacitors and low-inductance switches located within a linear-induction cavity. We model 1-MA, 100-kV, 100-ns rise-time LTD cavities [A. A. Kim et al., Phys. Rev. ST Accel. Beams 12, 050402 (2009)] which can be used to drive z-pinch and material dynamics experiments. The model simulates the generation and propagation of electromagnetic power from individual capacitors and triggered gas switches to a radially symmetric output line. Multiple cavities, combined to provide voltage addition, drive a water-filled coaxial transmission line. A 3D fully EM model of a single 1-MA 100-kV LTD cavity driving a simple resistive load is presented and compared to electrical measurements. A new model of the current loss through the ferromagnetic cores is developed for use both in circuit representations of an LTD cavity and in the 3D EM simulations. Good agreement between the measured core current, a simple circuit model, and the 3D simulation model is obtained. A 3D EM model of an idealized ten-cavity LTD accelerator is also developed. The model results demonstrate efficient voltage addition when driving a matched impedance load, in good agreement with an idealized circuit model.
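
    A circuit representation of a single LTD capacitor/switch "brick" reduces, at its simplest, to an underdamped series-RLC discharge. The sketch below uses rough illustrative component values, not the measured parameters of the 1-MA cavity described above.

```python
import math

# Rough, assumed component values for one brick (capacitor pair + switch):
C = 80e-9     # F, series capacitor pair
L = 200e-9    # H, loop inductance of brick plus load
R = 0.05      # ohm, switch plus load resistance
V0 = 100e3    # V, charge voltage

alpha = R / (2.0 * L)                          # damping rate
omega = math.sqrt(1.0 / (L * C) - alpha**2)    # ringing frequency (underdamped)

def current(t):
    """Discharge current i(t) of the series RLC circuit."""
    return (V0 / (omega * L)) * math.exp(-alpha * t) * math.sin(omega * t)

t_peak = math.atan2(omega, alpha) / omega      # time of first current maximum
i_peak = current(t_peak)                       # tens of kA for one brick
```

    Tens of such bricks firing in parallel inside one cavity then sum to the MA-level output current.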

  1. Anthropogenic heat flux: advisable spatial resolutions when input data are scarce

    NASA Astrophysics Data System (ADS)

    Gabey, A. M.; Grimmond, C. S. B.; Capel-Timms, I.

    2018-02-01

    Anthropogenic heat flux (QF) may be significant in cities, especially under low solar irradiance and at night. It is of interest to many practitioners including meteorologists, city planners and climatologists. QF estimates at fine temporal and spatial resolution can be derived from models that use varying amounts of empirical data. This study compares simple and detailed models in a European megacity (London) at 500 m spatial resolution. The simple model (LQF) uses spatially resolved population data and national energy statistics. The detailed model (GQF) additionally uses local energy, road network and workday population data. The Fractions Skill Score (FSS) and bias are used to rate the skill with which the simple model reproduces the spatial patterns and magnitudes of QF, and its sub-components, from the detailed model. LQF skill was consistently good across 90% of the city, away from the centre and major roads. The remaining 10% contained elevated emissions and "hot spots" representing 30-40% of the total city-wide energy. This structure was lost because it requires workday population, spatially resolved building energy consumption and/or road network data. Daily total building and traffic energy consumption estimates from national data were within ± 40% of local values. Progressively coarser spatial resolutions to 5 km improved skill for total QF, but important features (hot spots, transport network) were lost at all resolutions when residential population controlled spatial variations. The results demonstrate that simple QF models should be applied with conservative spatial resolution in cities that, like London, exhibit time-varying energy use patterns.
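
    The Fractions Skill Score used above has a simple closed form. A minimal sketch (the fields here are arbitrary toy fractions, not London QF data):

```python
import numpy as np

def fractions_skill_score(model_frac, ref_frac):
    """Fractions Skill Score between two fields of neighbourhood fractions:
    FSS = 1 - MSE / MSE_ref, with MSE_ref = mean(f_m^2) + mean(f_r^2).
    FSS = 1 is perfect agreement; FSS = 0 is no skill."""
    mse = np.mean((model_frac - ref_frac) ** 2)
    mse_ref = np.mean(model_frac ** 2) + np.mean(ref_frac ** 2)
    return 1.0 - mse / mse_ref

# Identical fraction fields give perfect skill:
perfect = fractions_skill_score(np.full((4, 4), 0.5), np.full((4, 4), 0.5))
# Completely non-overlapping fields give zero skill:
none = fractions_skill_score(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```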

  2. V1 orientation plasticity is explained by broadly tuned feedforward inputs and intracortical sharpening.

    PubMed

    Teich, Andrew F; Qian, Ning

    2010-03-01

    Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that the plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.
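
    The key mechanism named above, broad feedforward tuning sharpened by a Mexican-hat recurrent interaction, can be sketched as a linear steady state r = f + W r. All widths and gains below are illustrative assumptions, not the fitted parameters of the MRM or MFM.

```python
import numpy as np

theta = np.linspace(-90.0, 90.0, 181)       # preferred orientations (deg)

def circ_gauss(d, sigma):
    """Gaussian of orientation difference, wrapped to the 180-deg cycle."""
    d = (np.asarray(d) + 90.0) % 180.0 - 90.0
    return np.exp(-d**2 / (2.0 * sigma**2))

f = circ_gauss(theta, 35.0)                 # broadly tuned feedforward input
diff = theta[:, None] - theta[None, :]
# Mexican-hat recurrent kernel: narrow excitation minus broad inhibition.
W = 0.04 * circ_gauss(diff, 10.0) - 0.025 * circ_gauss(diff, 30.0)
# Steady state of r = f + W r:
r = np.linalg.solve(np.eye(theta.size) - W, f)

def half_width(curve):
    """Half-width at half height, in degrees."""
    return np.sum(curve > curve.max() / 2.0) * (theta[1] - theta[0]) / 2.0

sharpened = half_width(r) < half_width(f)   # recurrent interaction narrows tuning
```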

  3. Tenax extraction as a simple approach to improve environmental risk assessments.

    PubMed

    Harwood, Amanda D; Nutile, Samuel A; Landrum, Peter F; Lydy, Michael J

    2015-07-01

    It is well documented that using exhaustive chemical extractions is not an effective means of assessing exposure of hydrophobic organic compounds in sediments and that bioavailability-based techniques are an improvement over traditional methods. One technique that has shown special promise as a method for assessing the bioavailability of hydrophobic organic compounds in sediment is the use of Tenax-extractable concentrations. A 6-h or 24-h single-point Tenax-extractable concentration correlates to both bioaccumulation and toxicity. This method has demonstrated effectiveness for several hydrophobic organic compounds in various organisms under both field and laboratory conditions. In addition, a Tenax bioaccumulation model was developed for multiple compounds relating 24-h Tenax-extractable concentrations to oligochaete tissue concentrations exposed in both the laboratory and field. This model has demonstrated predictive capacity for additional compounds and species. Use of Tenax-extractable concentrations to estimate exposure is rapid, simple, straightforward, and relatively inexpensive, as well as accurate. Therefore, this method would be an invaluable tool if implemented in risk assessments. © 2015 SETAC.
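
    The Tenax bioaccumulation model relates tissue concentration to the 24-h extractable concentration on log-log axes. A minimal sketch of that fit, using entirely synthetic numbers (the published model's parameters come from the literature, not from these made-up data):

```python
import math

# Synthetic paired observations for illustration only: 24-h Tenax-extractable
# sediment concentration (x) and oligochaete tissue concentration (y).
x = [0.5, 1.2, 3.0, 8.0, 20.0]
y = [1.1, 2.4, 5.5, 13.0, 30.0]

lx = [math.log10(v) for v in x]
ly = [math.log10(v) for v in y]
n = len(x)
mx, my = sum(lx) / n, sum(ly) / n

# Least-squares fit of log10(tissue) = log_a + b * log10(tenax):
b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
log_a = my - b * mx
```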

  4. Overview and extensions of a system for routing directed graphs on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1988-01-01

    Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication, and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, an extension to the above method is presented which enhances performance by adding broadcasting capabilities.

  5. Removal of phosphate from greenhouse wastewater using hydrated lime.

    PubMed

    Dunets, C Siobhan; Zheng, Youbin

    2014-01-01

    Phosphate (P) contamination in nutrient-laden wastewater is currently a major topic of discussion in the North American greenhouse industry. Precipitation of P as calcium phosphate minerals using hydrated lime could provide a simple, inexpensive method for retrieval. A combination of batch experiments and chemical equilibrium modelling was used to confirm the viability of this P removal method and determine lime addition rates and pH requirements for greenhouse wastewater of varying nutrient compositions. Lime: P ratio (molar ratio of CaMg(OH)₄: PO₄‒P) provided a consistent parameter for estimating lime addition requirements regardless of initial P concentration, with a ratio of 1.5 providing around 99% removal of dissolved P. Optimal P removal occurred when lime addition increased the pH from 8.6 to 9.0, suggesting that pH monitoring during the P removal process could provide a simple method for ensuring consistent adherence to P removal standards. A Visual MINTEQ model, validated using experimental data, provided a means of predicting lime addition and pH requirements as influenced by changes in other parameters of the lime-wastewater system (e.g. calcium concentration, temperature, and initial wastewater pH). Hydrated lime addition did not contribute to the removal of macronutrient elements such as nitrate and ammonium, but did decrease the concentration of some micronutrients. This study provides basic guidance for greenhouse operators to use hydrated lime for phosphate removal from greenhouse wastewater.
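
    The reported lime:P molar ratio of 1.5 translates directly into a dose estimate. A hypothetical worked example, assuming the hydrated lime is dosed as Ca(OH)2 (molar mass ~74.1 g/mol):

```python
MW_P = 30.97      # g/mol, phosphorus
MW_LIME = 74.09   # g/mol, assuming hydrated lime dosed as Ca(OH)2

def lime_dose_mg_per_L(p_mg_per_L, molar_ratio=1.5):
    """Hydrated-lime dose (mg/L) for a target lime:P molar ratio
    (a ratio of 1.5 gave ~99% dissolved-P removal in the study above)."""
    p_mmol_per_L = p_mg_per_L / MW_P
    return p_mmol_per_L * molar_ratio * MW_LIME

dose = lime_dose_mg_per_L(20.0)   # dose for 20 mg/L dissolved P, roughly 72 mg/L
```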

  6. Ab initio study of GaAs(100) surface stability over As2, H2 and N2 as a model for vapor-phase epitaxy of GaAs1-xNx

    NASA Astrophysics Data System (ADS)

    Valencia, Hubert; Kangawa, Yoshihiro; Kakimoto, Koichi

    2015-12-01

    GaAs(100) c(4×4) surfaces were examined by ab initio calculations, under As2, H2 and N2 gas mixed conditions as a model for GaAs1-xNx vapor-phase epitaxy (VPE) on GaAs(100). Using a simple model consisting of As2 and H2 molecule adsorption and As/N atom substitution, it is possible to examine the crystal growth behavior by considering the relative stability of the resulting surfaces against the chemical potentials of the As2, H2 and N2 gases. Such a simple model allows us to map the temperature and pressure stability domains of each surface, which can be linked directly to specific growth conditions. We found that, using this simple model, it is possible to explain the different N-incorporation regimes observed experimentally at different temperatures, and to predict the transition temperature between these regimes. Additionally, a rational explanation of the N-incorporation ratio for each of these regimes is provided. Our model should then lead to a better understanding and control of the experimental conditions needed to realize high-quality VPE of GaAs1-xNx.

  7. Photometric studies of Saturn's ring and eclipses of the Galilean satellites

    NASA Technical Reports Server (NTRS)

    Brunk, W. E.

    1972-01-01

    Reliable data defining the photometric function of the Saturn ring system at visual wavelengths are interpreted in terms of a simple scattering model. To facilitate the analysis, new photographic photometry of the ring has been carried out and homogeneous measurements of the mean surface brightness are presented. The ring model adopted is a plane parallel slab of isotropically scattering particles; the single scattering albedo and the perpendicular optical thickness are both arbitrary. Results indicate that primary scattering is inadequate to describe the photometric properties of the ring: multiple scattering predominates for all angles of tilt with respect to the Sun and Earth. In addition, the scattering phase function of the individual particles is significantly anisotropic: they scatter preferentially toward the Sun. Photoelectric photometry of Ganymede during its eclipse by Jupiter indicates that neither a simple reflecting-layer model nor a semi-infinite homogeneous scattering model provides an adequate physical description of the Jupiter atmosphere.

  8. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels

    PubMed Central

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  9. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    PubMed

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  10. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    PubMed

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal-variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal-variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
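
    A common alternative route to the ordinal-regression approach described above is to fit the unequal-variance model from the z-transformed ROC: the slope of the z-ROC estimates the noise-to-signal standard-deviation ratio. A minimal sketch with hypothetical rating proportions:

```python
from statistics import NormalDist, fmean

# Hypothetical cumulative rating proportions (4 criteria on a 5-point scale):
hits = [0.21, 0.41, 0.60, 0.79]   # cumulative P("signal" | signal)
fas  = [0.023, 0.10, 0.25, 0.50]  # cumulative P("signal" | noise)

z = NormalDist().inv_cdf
z_h = [z(p) for p in hits]
z_f = [z(p) for p in fas]

# Least-squares slope of the z-ROC estimates sigma_noise / sigma_signal;
# a slope below 1 is the signature of the unequal-variance model.
mf, mh = fmean(z_f), fmean(z_h)
slope = (sum((a - mf) * (b - mh) for a, b in zip(z_f, z_h))
         / sum((a - mf) ** 2 for a in z_f))
```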

  11. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as the field-dependent magnetization of ferromagnetic materials, but also as stress-strain curves of materials measured by tensile tests (including thermal effects), in liquid-solid phase transitions, in cell biology, and in economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus a simple macro code, can be used by students to understand how these systems work and how the parameters influence the reaction of the system to an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
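
    The essential ingredient of any such model is memory of the previous state. A minimal sketch (a single relay, or "hysteron", with assumed switching thresholds, independent of the spreadsheet implementation described above):

```python
def hysteron(fields, h_up=1.0, h_down=-1.0, state=-1):
    """Minimal relay model of hysteresis: the output flips to +1 when the
    field exceeds h_up, to -1 when it drops below h_down, and otherwise
    remembers its previous value."""
    out = []
    for h in fields:
        if h >= h_up:
            state = 1
        elif h <= h_down:
            state = -1
        out.append(state)
    return out

sweep = [-2, 0, 2, 0, -2]   # field swept up, then back down
loop = hysteron(sweep)      # the two h = 0 points give different outputs
```

    Because the two visits to h = 0 return different outputs, a plot of output versus field traces a rectangular loop; summing many hysterons with distributed thresholds yields smoother, more realistic loops.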

  12. Modeling of two-phase porous flow with damage

    NASA Astrophysics Data System (ADS)

    Cai, Z.; Bercovici, D.

    2009-12-01

    Two-phase dynamics in convective systems has been broadly studied in Earth science. We investigate the basic physics of compaction with damage theory and present preliminary results for both steady-state and time-dependent transport as melt migrates through a porous medium. In our simple 1-D model, damage plays an important role when we consider the ascent of a melt-rich mixture at constant velocity. Melt segregation becomes more difficult, so that in the steady-state compaction profile the porosity is larger than in simple compaction. Scaling analysis of the compaction equation is performed to predict the behavior of melt segregation with damage. The time-dependent behavior of the compacting system is investigated by looking at solitary wave solutions to the two-phase model. We assume that additional melt is injected into the fractured material as a single pulse of prescribed shape and velocity. The existence of damage allows the pulse to travel further than in simple compaction. Therefore more melt can be injected into the two-phase mixture, and future applications such as carbon dioxide injection are proposed.

  13. Forgetting in immediate serial recall: decay, temporal distinctiveness, or interference?

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-07-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate. Copyright (c) 2008 APA, all rights reserved.

  14. Operator Priming and Generalization of Practice in Adults' Simple Arithmetic

    ERIC Educational Resources Information Center

    Chen, Yalin; Campbell, Jamie I. D.

    2016-01-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication,…

  15. No Generalization of Practice for Nonzero Simple Addition

    ERIC Educational Resources Information Center

    Campbell, Jamie I. D.; Beech, Leah C.

    2014-01-01

    Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…

  16. Chaos in plasma simulation and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, C.; Newman, D.E.; Sprott, J.C.

    1993-09-01

    We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
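
    One of the tools named above, the correlation dimension, is estimated from the slope of log C(r) versus log r (Grassberger-Procaccia). A minimal sketch on a 1-D uniform test signal, for which the expected dimension is 1:

```python
import math, random

def correlation_sum(points, r):
    """Grassberger-Procaccia C(r): fraction of point pairs closer than r."""
    n, close = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(points[i] - points[j]) < r:   # 1-D signal for brevity
                close += 1
    return 2.0 * close / (n * (n - 1))

# The slope of log C(r) vs log r estimates the correlation dimension.
random.seed(0)
pts = [random.random() for _ in range(400)]      # uniform points on a line
r1, r2 = 0.01, 0.1
dim = ((math.log(correlation_sum(pts, r2)) - math.log(correlation_sum(pts, r1)))
       / (math.log(r2) - math.log(r1)))          # should be close to 1
```

    For experimental time series the points would be delay-embedded vectors rather than scalars, and the slope is examined over a scaling range of r.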

  17. Estimating Additive and Non-Additive Genetic Variances and Predicting Genetic Merits Using Genome-Wide Dense Single Nucleotide Polymorphism Markers

    PubMed Central

    Su, Guosheng; Christensen, Ole F.; Ostersen, Tage; Henryon, Mark; Lund, Mogens S.

    2012-01-01

    Non-additive genetic variation is usually ignored when genome-wide markers are used to study the genetic architecture and genomic prediction of complex traits in humans, wildlife, model organisms or farm animals. However, non-additive genetic effects may have an important contribution to total genetic variation of complex traits. This study presented a genomic BLUP model including additive and non-additive genetic effects, in which additive and non-additive genetic relationship matrices were constructed from information of genome-wide dense single nucleotide polymorphism (SNP) markers. In addition, this study for the first time proposed a method to construct a dominance relationship matrix using SNP markers and demonstrated it in detail. The proposed model was implemented to investigate the amounts of additive genetic, dominance and epistatic variations, and assessed the accuracy and unbiasedness of genomic predictions for daily gain in pigs. In the analysis of daily gain, four linear models were used: 1) a simple additive genetic model (MA), 2) a model including both additive and additive by additive epistatic genetic effects (MAE), 3) a model including both additive and dominance genetic effects (MAD), and 4) a full model including all three genetic components (MAED). Estimates of narrow-sense heritability were 0.397, 0.373, 0.379 and 0.357 for models MA, MAE, MAD and MAED, respectively. Estimated dominance variance and additive by additive epistatic variance accounted for 5.6% and 9.5% of the total phenotypic variance, respectively. Based on model MAED, the estimate of broad-sense heritability was 0.506. Reliabilities of genomic predicted breeding values for the animals without performance records were 28.5%, 28.8%, 29.2% and 29.5% for models MA, MAE, MAD and MAED, respectively. In addition, models including non-additive genetic effects improved unbiasedness of genomic predictions. PMID:23028912
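
    The additive (VanRaden-type) and dominance relationship matrices can be sketched from a 0/1/2 genotype matrix. This is an illustrative reconstruction of the general idea; the exact dominance coding varies between implementations and the genotypes here are random:

```python
import numpy as np

def additive_and_dominance_grm(M):
    """Additive (VanRaden) and dominance relationship matrices from a
    genotype matrix M (individuals x SNPs, coded 0/1/2). The dominance
    coding (heterozygote indicator centred by 2pq) follows the general
    idea of the study above; exact codings differ between implementations."""
    p = M.mean(axis=0) / 2.0                          # allele frequencies
    Z = M - 2.0 * p                                   # centred additive coding
    G = Z @ Z.T / np.sum(2.0 * p * (1.0 - p))
    H = (M == 1).astype(float) - 2.0 * p * (1.0 - p)  # centred dominance coding
    D = H @ H.T / np.sum((2.0 * p * (1.0 - p)) ** 2)
    return G, D

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(10, 200)).astype(float)  # 10 animals, 200 SNPs
G, D = additive_and_dominance_grm(M)
```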

  18. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  19. Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Bremner, Paul

    2014-01-01

    This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows the application of the power balance method and an extension of this method to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.

  20. Modeling the growth and branching of plants: A simple rod-based model

    NASA Astrophysics Data System (ADS)

    Faruk Senan, Nur Adila; O'Reilly, Oliver M.; Tresierras, Timothy N.

    A rod-based model for plant growth and branching is developed in this paper. Specifically, Euler's theory of the elastica is modified to accommodate growth and remodeling. In addition, branching is characterized using a configuration force and evolution equations are postulated for the flexural stiffness and intrinsic curvature. The theory is illustrated with examples of multiple static equilibria of a branched plant and the remodeling and tip growth of a plant stem under gravitational loading.

  1. A Bayesian Attractor Model for Perceptual Decision Making

    PubMed Central

    Bitzer, Sebastian; Bruineberg, Jelle; Kiebel, Stefan J.

    2015-01-01

    Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate. Although current consensus states that the brain accumulates evidence extracted from noisy sensory information, open questions remain about how this simple model relates to other perceptual phenomena such as flexibility in decisions, decision-dependent modulation of sensory gain, or confidence about a decision. We propose a novel approach of how perceptual decisions are made by combining two influential formalisms into a new model. Specifically, we embed an attractor model of decision making into a probabilistic framework that models decision making as Bayesian inference. We show that the new model can explain decision making behaviour by fitting it to experimental data. In addition, the new model combines for the first time three important features: First, the model can update decisions in response to switches in the underlying stimulus. Second, the probabilistic formulation accounts for top-down effects that may explain recent experimental findings of decision-related gain modulation of sensory neurons. Finally, the model computes an explicit measure of confidence which we relate to recent experimental evidence for confidence computations in perceptual decision tasks. PMID:26267143

  2. The solidification velocity of nickel and titanium alloys

    NASA Astrophysics Data System (ADS)

    Altgilbers, Alex Sho

    2002-09-01

    The solidification velocities of several Ni-Ti, Ni-Sn, Ni-Si, Ti-Al and Ti-Ni alloys were measured as a function of undercooling. From these results, a model for alloy solidification was developed that can be used to predict the solidification velocity as a function of undercooling more accurately. During this investigation a phenomenon was observed in the solidification velocity that is a direct result of the addition of the various alloying elements to nickel and titanium. The additions of the alloying elements resulted in an additional solidification velocity plateau at intermediate undercoolings. Past work has shown that a solidification velocity plateau at high undercoolings can be attributed to residual oxygen. It is shown that a logistic growth model is more accurate for predicting the solidification of alloys. Additionally, a numerical model is developed from a simple description of the effect of solute on the solidification velocity, which utilizes a Boltzmann logistic function to predict the plateaus that occur at intermediate undercoolings.
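
    The Boltzmann logistic function mentioned above describes a smooth step between two velocity plateaus. A minimal sketch; all parameter values are illustrative assumptions, not fitted data from the thesis:

```python
import math

def boltzmann_velocity(dT, v_low=0.5, v_high=30.0, dT0=150.0, width=20.0):
    """Boltzmann (logistic) form for solidification velocity (m/s) vs
    undercooling dT (K): a smooth step between a low- and a high-velocity
    plateau, centred at dT0 with transition width `width`."""
    return v_high + (v_low - v_high) / (1.0 + math.exp((dT - dT0) / width))

v_small = boltzmann_velocity(50.0)    # near the low-velocity plateau
v_large = boltzmann_velocity(250.0)   # near the high-velocity plateau
v_mid = boltzmann_velocity(150.0)     # exactly halfway at dT0
```

    Summing two such steps with different centres reproduces the double-plateau behavior reported for the alloyed melts.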

  3. Elucidating the effects of adsorbent flexibility on fluid adsorption using simple models and flat-histogram sampling methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Vincent K., E-mail: vincent.shen@nist.gov; Siderius, Daniel W.

    2014-06-28

    Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called “breathing” of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.

  4. Elucidating the effects of adsorbent flexibility on fluid adsorption using simple models and flat-histogram sampling methods

    NASA Astrophysics Data System (ADS)

    Shen, Vincent K.; Siderius, Daniel W.

    2014-06-01

    Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called "breathing" of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.

  5. Calculation of density of states for modeling photoemission using method of moments

    NASA Astrophysics Data System (ADS)

    Finkenstadt, Daniel; Lambrakos, Samuel G.; Jensen, Kevin L.; Shabaev, Andrew; Moody, Nathan A.

    2017-09-01

    Modeling photoemission using the Moments Approach (akin to Spicer's "Three Step Model") is often presumed to follow simple models for the prediction of two critical properties of photocathodes: the yield or "Quantum Efficiency" (QE), and the intrinsic spreading of the beam or "emittance" ɛn,rms. The simple models, however, tend to obscure the electronic structure of materials, the understanding of which is necessary for a proper prediction of a semiconductor's or metal's QE and ɛn,rms. This structure is characterized by localized resonance features as well as a universal trend at high energy. Presented in this study is a prototype analysis concerning the density of states (DOS) factor D(E) for copper in bulk, to replace the simple three-dimensional free-electron form D(E) = (m/π²ℏ³)√(2mE) currently used in the Moments Approach. This analysis demonstrates that excited-state spectra of atoms, molecules and solids based on density-functional theory can be adapted as useful information for practical applications, as well as providing theoretical interpretation of density-of-states structure, e.g., qualitatively good descriptions of optical transitions in matter, in addition to DFT's utility in providing the optical constants and material parameters also required in the Moments Approach.
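    For reference, the simple three-dimensional free-electron DOS that the study proposes to replace can be evaluated directly; the reduced-unit defaults (m = ℏ = 1) below are a convenience for the sketch, not parameters from the paper.

```python
import math

def dos_free_electron(E, m=1.0, hbar=1.0):
    """Free-electron density of states D(E) = (m / (pi^2 * hbar^3)) * sqrt(2*m*E).

    Uses reduced units (m = hbar = 1) by default; returns 0 for E <= 0,
    where no states exist in the free-electron picture.
    """
    if E <= 0.0:
        return 0.0
    return (m / (math.pi ** 2 * hbar ** 3)) * math.sqrt(2.0 * m * E)
```

    The √E scaling is the "universal trend at high energy" noted in the abstract; the localized resonance features of a real DOS such as copper's are exactly what this simple form misses.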

  6. Foreshock and aftershocks in simple earthquake models.

    PubMed

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.

  7. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  8. A New Publicly Available Chemical Query Language, CSRML, to support Chemotype Representations for Application to Data-Mining and Modeling

    EPA Science Inventory

    A new XML-based query language, CSRML, has been developed for representing chemical substructures, molecules, reaction rules, and reactions. CSRML queries are capable of integrating additional forms of information beyond the simple substructure (e.g., SMARTS) or reaction transfor...

  9. The functional architectures of addition and subtraction: Network discovery using fMRI and DCM.

    PubMed

    Yang, Yang; Zhong, Ning; Friston, Karl; Imamura, Kazuyuki; Lu, Shengfu; Li, Mi; Zhou, Haiyan; Wang, Haiyuan; Li, Kuncheng; Hu, Bin

    2017-06-01

    The neuronal mechanisms underlying arithmetic calculations are not well understood, but the differences between mental addition and subtraction could be particularly revealing. Using fMRI and dynamic causal modeling (DCM), this study aimed to identify the distinct neuronal architectures engaged by the cognitive processes of simple addition and subtraction. Our results revealed significantly greater activation during subtraction in regions along the dorsal pathway, including the left inferior frontal gyrus (IFG), the middle portion of dorsolateral prefrontal cortex (mDLPFC), and the supplementary motor area (SMA), compared with addition. Subsequent analysis of the underlying changes in connectivity with DCM revealed a common circuit processing basic (numeric) attributes and the retrieval of arithmetic facts. However, DCM showed that addition was more likely to engage (numeric) retrieval-based circuits in the left hemisphere, while subtraction tended to draw on (magnitude) processing in bilateral parietal cortex, especially the right intraparietal sulcus (IPS). Our findings endorse previous hypotheses about the differences in strategic implementation, dominant hemisphere, and the neuronal circuits underlying addition and subtraction. Moreover, for simple arithmetic, our connectivity results suggest that subtraction calls on more complex processing than addition: auxiliary phonological, visual, and motor processes, for representing numbers, were engaged by subtraction relative to addition. Hum Brain Mapp 38:3210-3225, 2017. © 2017 Wiley Periodicals, Inc.

  10. Rocket Engine Oscillation Diagnostics

    NASA Technical Reports Server (NTRS)

    Nesman, Tom; Turner, James E. (Technical Monitor)

    2002-01-01

    Rocket engine oscillating data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.

  11. Markov Decision Process Measurement Model.

    PubMed

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.

  12. An Investigation of the Static Force Balance of a Model Railgun

    DTIC Science & Technology

    2007-06-01

    In this simple circuit diagram, two 950 CCA batteries are passed through a variable resistor (R1) to limit the current applied to the model railgun (R2)... of a known value and placed a voltmeter across the resistor. For additional protection in these early trials we inserted an equivalent 1kA fuse... our variable resistor. Current then passed through the resistor into the model gun, through a voltmeter with a known resistance, into a kilo-amp

  13. A case-mix classification system for explaining healthcare costs using administrative data in Italy.

    PubMed

    Corti, Maria Chiara; Avossa, Francesco; Schievano, Elena; Gallina, Pietro; Ferroni, Eliana; Alba, Natalia; Dotto, Matilde; Basso, Cristina; Netti, Silvia Tiozzo; Fedeli, Ugo; Mantoan, Domenico

    2018-03-04

    The Italian National Health Service (NHS) provides universal coverage to all citizens, granting primary and hospital care with a copayment system for outpatient and drug services. Financing of Local Health Trusts (LHTs) is based on a capitation system adjusted only for age, gender and area of residence. We applied a risk-adjustment system (the Johns Hopkins Adjusted Clinical Groups System, ACG® System) in order to explain health care costs using routinely collected administrative data in the Veneto Region (North-eastern Italy). All residents in the Veneto Region were included in the study. The ACG system was applied to classify the regional population based on the following information sources for the year 2015: hospital discharges, emergency room visits, the chronic disease registry for copayment exemptions, ambulatory visits, medications, the home care database, and drug prescriptions. Simple linear regressions were used to contrast an age-gender model with models incorporating more comprehensive risk measures aimed at predicting health care costs. A simple age-gender model explained only 8% of the variance of 2015 total costs. Adding diagnosis-related variables provided a 23% increase in explained variance, while pharmacy-based variables provided an additional 17% increase. The adjusted R-squared of the comprehensive model was 6 times that of the simple age-gender model. The ACG System provides a substantial improvement in predicting health care costs when compared to simple age-gender adjustments. Aging itself is not the main determinant of the increase in health care costs, which is better explained by the accumulation of chronic conditions and the resulting multimorbidity. Copyright © 2018. Published by Elsevier B.V.
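    The incremental gains in explained variance from richer risk adjusters can be illustrated with ordinary least squares on synthetic data; the cost-generating coefficients below are invented for this sketch and have no connection to the Veneto data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 90, n)
sex = rng.integers(0, 2, n)
# morbidity burden rises with age in this synthetic population
chronic = rng.poisson(np.clip((age - 40.0) / 15.0, 0.0, None))
cost = 200 + 5 * age + 50 * sex + 900 * chronic + rng.normal(0, 800, n)

def r_squared(X, y):
    """In-sample R^2 of an ordinary-least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_demo = r_squared(np.column_stack([age, sex]), cost)           # age-gender only
r2_full = r_squared(np.column_stack([age, sex, chronic]), cost)  # + morbidity
```

    As in the study, the demographic model leaves most of the cost variance unexplained, and adding a morbidity measure recovers a large share of it.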

  14. Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity

    NASA Astrophysics Data System (ADS)

    Thual, S.; Majda, A.; Chen, N.

    2017-12-01

    The El Niño-Southern Oscillation (ENSO) has a significant impact on global climate and seasonal prediction. Recently, a simple ENSO model was developed that automatically captures the ENSO diversity and intermittency in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean-atmosphere processes that are otherwise deterministic, linear and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase-locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model's atmosphere dynamics with no ad hoc assumptions and accounts in simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events, while the increased convective activity and cloud cover during the following spring contribute to the shutdown of those events by blocking incoming shortwave solar radiation. In addition to simulating the ENSO diversity with realistic non-Gaussian statistics in different Niño regions, the eastern Pacific moderate and super El Niño, the central Pacific El Niño and La Niña all show a realistic chronology with a tendency to peak in boreal winter, as well as decreased predictability in spring consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.

  15. Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping

    NASA Astrophysics Data System (ADS)

    Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady

    2017-04-01

    When a wave propagates through real materials, energy dissipation occurs. In homogeneous materials this loss of energy can be accounted for by simple viscous models; however, a reliable model representing the effect in fragmented geomaterials has not been established yet. The main obstacle is the mechanism by which vibrations are transmitted between the elements (fragments) of these materials. It is hypothesised that the fragments strike against each other in the process of oscillation and that the impacts lead to the energy loss, which we assume is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system passes through the neutral point, where the displacement is zero, the velocity is multiplied by the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping whose damping coefficient is expressible through the restitution coefficient. On this basis, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
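    The impact-damping rule described above, a linear oscillator whose velocity is scaled by the restitution coefficient each time it passes through the neutral point, can be sketched numerically; the frequency and step size below are arbitrary choices for the illustration.

```python
import math

def impact_oscillator_peaks(e, omega=2.0 * math.pi, x0=1.0, dt=1e-5, n_crossings=3):
    """Peak displacements of x'' = -omega^2 x when the velocity is multiplied
    by the restitution coefficient e at every zero crossing (fragment impact)."""
    x, v = x0, 0.0
    peaks, peak = [], abs(x0)
    while len(peaks) < n_crossings:
        v -= omega ** 2 * x * dt   # semi-implicit Euler step
        x_new = x + v * dt
        if x * x_new < 0.0:        # neutral point reached: impact loss
            peaks.append(peak)
            peak = 0.0
            v *= e
        x = x_new
        peak = max(peak, abs(x))
    return peaks

# successive peaks shrink roughly by the factor e, i.e. a geometric envelope
peaks = impact_oscillator_peaks(0.8)
```

    The geometric decay of the envelope is what allows the averaged behaviour to be matched to a conventional linearly damped oscillator, and hence to the Kelvin-Voigt model.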

  16. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants

    PubMed Central

    Yu, Zheng-Yong; Liu, Qiang; Liu, Yunhan

    2017-01-01

    Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi–Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings. PMID:28792487

  17. Multiaxial Fatigue Damage Parameter and Life Prediction without Any Additional Material Constants.

    PubMed

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-08-09

    Based on the critical plane approach, a simple and efficient multiaxial fatigue damage parameter with no additional material constants is proposed for life prediction under uniaxial/multiaxial proportional and/or non-proportional loadings for titanium alloy TC4 and nickel-based superalloy GH4169. Moreover, two modified Ince-Glinka fatigue damage parameters are put forward and evaluated under different load paths. Results show that the generalized strain amplitude model provides less accurate life predictions in the high cycle life regime and is better for life prediction in the low cycle life regime; however, the generalized strain energy model is relatively better for high cycle life prediction and is conservative for low cycle life prediction under multiaxial loadings. In addition, the Fatemi-Socie model is introduced for model comparison and its additional material parameter k is found to not be a constant and its usage is discussed. Finally, model comparison and prediction error analysis are used to illustrate the superiority of the proposed damage parameter in multiaxial fatigue life prediction of the two aviation alloys under various loadings.

  18. A Web Application For Visualizing Empirical Models of the Space-Atmosphere Interface Region: AtModWeb

    NASA Astrophysics Data System (ADS)

    Knipp, D.; Kilcommons, L. M.; Damas, M. C.

    2015-12-01

    We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and the individual-species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and to choose between historical (i.e., the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle?; 2) How do ionospheric layers change with the solar cycle?; 3) How does the composition of the SAIR vary between day and night at a fixed altitude?

  19. Nonlinear Dynamic Inversion Baseline Control Law: Flight-Test Results for the Full-scale Advanced Systems Testbed F/A-18 Airplane

    NASA Technical Reports Server (NTRS)

    Miller, Christopher J.

    2011-01-01

    A model reference nonlinear dynamic inversion control law has been developed to provide a baseline controller for research into simple adaptive elements for advanced flight control laws. This controller has been implemented and tested in a hardware-in-the-loop simulation and in flight. The flight results agree well with the simulation predictions and show good handling qualities throughout the tested flight envelope with some noteworthy deficiencies highlighted both by handling qualities metrics and pilot comments. Many design choices and implementation details reflect the requirements placed on the system by the nonlinear flight environment and the desire to keep the system as simple as possible to easily allow the addition of the adaptive elements. The flight-test results and how they compare to the simulation predictions are discussed, along with a discussion about how each element affected pilot opinions. Additionally, aspects of the design that performed better than expected are presented, as well as some simple improvements that will be suggested for follow-on work.

  20. A neural computational model for animal's time-to-collision estimation.

    PubMed

    Wang, Ling; Yao, Dezhong

    2013-04-17

    The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretical formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its variation, respectively, and the widely used approximation is the computational model θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new, simpler computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can be implemented exactly through a widely accepted biological neuronal model. WSVAM has additional merits, including a natural minimum consumption and simplicity. Thus, it yields a precise and neuronally implemented estimation of TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
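    The two standard TTC expressions quoted in the abstract are easy to compare numerically (the WSVAM constants M, P, Q and N are fitted per predefined visual angle and are not given in the abstract, so the proposed model itself is not reproduced here).

```python
import math

def ttc_from_exact(theta, theta_dot):
    """TTC from the theoretical relation 1/tau ≈ theta'/sin(theta)."""
    return math.sin(theta) / theta_dot

def ttc_from_ratio(theta, theta_dot):
    """TTC from the widely used approximation 1/tau ≈ theta'/theta."""
    return theta / theta_dot

# for small visual angles (in radians) the two agree closely
t_exact = ttc_from_exact(0.05, 0.01)
t_ratio = ttc_from_ratio(0.05, 0.01)
```

    The approximation diverges from the theoretical form only as the visual angle grows large, i.e. in the final moments before impact.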

  1. Modulation of additive and interactive effects in lexical decision by trial history.

    PubMed

    Masson, Michael E J; Kliegl, Reinhold

    2013-05-01

    Additive and interactive effects of word frequency, stimulus quality, and semantic priming have been used to test theoretical claims about the cognitive architecture of word-reading processes. Additive effects among these factors have been taken as evidence for discrete-stage models of word reading. We present evidence from linear mixed-model analyses applied to 2 lexical decision experiments indicating that apparent additive effects can be the product of aggregating over- and underadditive interaction effects that are modulated by recent trial history, particularly the lexical status and stimulus quality of the previous trial's target. Even a simple practice effect expressed as improved response speed across trials was powerfully modulated by the nature of the previous target item. These results suggest that additivity and interaction between factors may reflect trial-to-trial variation in stimulus representations and decision processes rather than fundamental differences in processing architecture.

  2. Evaluating the cost effectiveness of environmental projects: Case studies in aerospace and defense

    NASA Technical Reports Server (NTRS)

    Shunk, James F.

    1995-01-01

    Using the replacement technology of high pressure waterjet decoating systems as an example, a simple methodology is presented for developing a cost effectiveness model. The model uses a four-step process to formulate an economic justification designed for presentation to decision makers as an assessment of the value of the replacement technology over conventional methods. Three case studies from major U.S. and international airlines are used to illustrate the methodology and resulting model. Tax and depreciation impacts are also presented as potential additions to the model.

  3. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the KII = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and the differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  4. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.

  5. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE PAGES

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    2017-08-04

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the KII = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and the differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  6. A simple quantum mechanical treatment of scattering in nanoscale transistors

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Paulsson, M.; Goasguen, S.; Datta, S.; Lundstrom, M. S.

    2003-05-01

    We present a computationally efficient, two-dimensional quantum mechanical simulation scheme for modeling dissipative electron transport in thin-body, fully depleted, n-channel, silicon-on-insulator transistors. The simulation scheme, which solves the nonequilibrium Green's function equations self-consistently with Poisson's equation, treats the effect of scattering using a simple approximation inspired by the "Büttiker probes" often used in mesoscopic physics. It is based on an expansion of the active device Hamiltonian in decoupled mode space. Simulation results are used to highlight quantum effects, discuss the physics of scattering, and relate the quantum mechanical quantities used in our model to experimentally measured low-field mobilities. Additionally, quantum boundary conditions are rigorously derived and the effects of strong off-equilibrium transport are examined. This paper shows that our approximate treatment of scattering is an efficient and useful simulation method for modeling electron transport in nanoscale silicon-on-insulator transistors.

  7. Spatiotemporal modelling of viral infection dynamics

    NASA Astrophysics Data System (ADS)

    Beauchemin, Catherine

    Viral kinetics have been studied extensively in the past through the use of ordinary differential equations describing the time evolution of the diseased state in a spatially well-mixed medium. However, emerging spatial structures such as localized populations of dead cells might affect the spread of infection, similar to the manner in which a counter-fire can stop a forest fire from spreading. In the first phase of the project, a simple two-dimensional cellular automaton model of viral infections was developed. It was validated against clinical immunological data for uncomplicated influenza A infections and shown to be accurate enough to adequately model them. In the second phase of the project, the simple two-dimensional cellular automaton model was used to investigate the effects of relaxing the well-mixed assumption on viral infection dynamics. It was shown that grouping the initially infected cells into patches rather than distributing them uniformly on the grid reduced the infection rate, as only cells on the perimeter of a patch have healthy neighbours to infect. Use of a local epithelial cell regeneration rule, where dead cells are replaced by healthy cells when an immediate neighbour divides, was found to result in more extensive damage of the epithelium and yielded a better fit to experimental influenza A infection data than a global regeneration rule based on the division rate of healthy cells. Finally, the addition of immune cells at the site of infection was found to be the better strategy at low infection levels, while addition at random locations on the grid was the better strategy at high infection levels. In the last project, the movement of T cells within lymph nodes in the absence of antigen was investigated. Based on individual T cell track data captured by two-photon microscopy experiments in vivo, a simple model was proposed for the motion of T cells. 
This is the first step towards the implementation of a more realistic spatiotemporal model of HIV than those proposed thus far.
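    The patch-versus-scattered seeding effect described above can be illustrated with a toy grid model. The sketch below is not the validated model from the abstract; states, grid size, infection probability, and infected-cell lifespan are all arbitrary illustrative choices.

```python
import random

# Toy 2D cellular automaton of infection spread (illustrative only).
# Cell states: 0 = healthy, 1 = infected, 2 = dead.

def step(grid, age, p_infect=0.25, infected_lifespan=4):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 0:
                # a healthy cell can be infected by any of its 4 neighbours
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = (i + di) % n, (j + dj) % n
                    if grid[ni][nj] == 1 and random.random() < p_infect:
                        new[i][j] = 1
                        break
            elif grid[i][j] == 1:
                age[(i, j)] = age.get((i, j), 0) + 1
                if age[(i, j)] >= infected_lifespan:
                    new[i][j] = 2  # infected cell dies after a fixed lifespan
    return new

def run(seeds, n=30, steps=20):
    grid = [[0] * n for _ in range(n)]
    for i, j in seeds:
        grid[i][j] = 1
    age = {}
    for _ in range(steps):
        grid = step(grid, age)
    # count all cells ever touched by infection (infected or dead)
    return sum(cell != 0 for row in grid for cell in row)

random.seed(1)
patch = [(i, j) for i in range(3) for j in range(3)]          # one 3x3 patch
scattered = [(3 * k % 30, 7 * k % 30) for k in range(9)]      # 9 isolated seeds
affected_patch = run(patch)
affected_scattered = run(scattered)
```

    Because only perimeter cells of a patch have healthy neighbours, compact seeding tends to slow the spread relative to scattered seeding, mirroring the abstract's observation.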

  8. Simple versus complex models of trait evolution and stasis as a response to environmental change

    NASA Astrophysics Data System (ADS)

    Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott

    2015-04-01

    Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.
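    A minimal sketch of the likelihood-based model comparison the abstract describes (not the authors' code): fit stasis, an unbiased random walk, and a directional walk to a trait series by maximum likelihood and compare them with AIC. The simulated series and parameter counts are illustrative assumptions.

```python
import math
import random

def normal_loglik(xs, mu, var):
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
               for x in xs)

def fit_models(series):
    diffs = [b - a for a, b in zip(series, series[1:])]
    # stasis: x_t ~ N(theta, omega), 2 free parameters
    theta = sum(series) / len(series)
    omega = sum((x - theta) ** 2 for x in series) / len(series) or 1e-12
    ll_stasis = normal_loglik(series, theta, omega)
    # unbiased random walk: steps ~ N(0, s2), 1 free parameter
    s2_rw = sum(d * d for d in diffs) / len(diffs) or 1e-12
    ll_rw = normal_loglik(diffs, 0.0, s2_rw)
    # directional walk: steps ~ N(mu, s2), 2 free parameters
    mu = sum(diffs) / len(diffs)
    s2_dir = sum((d - mu) ** 2 for d in diffs) / len(diffs) or 1e-12
    ll_dir = normal_loglik(diffs, mu, s2_dir)
    # AIC = 2k - 2 ln L; lower is better
    return {"stasis": 2 * 2 - 2 * ll_stasis,
            "random walk": 2 * 1 - 2 * ll_rw,
            "directional": 2 * 2 - 2 * ll_dir}

random.seed(0)
trend = [1.0 * t + random.gauss(0, 1) for t in range(60)]  # strongly directional series
aic = fit_models(trend)
best = min(aic, key=aic.get)
```

    On a strongly trending series the directional model wins despite its extra parameter; on pure noise the one-parameter random walk typically would, which is the trade-off the likelihood approach formalizes.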

  9. Equivalent circuit models for interpreting impedance perturbation spectroscopy data

    NASA Astrophysics Data System (ADS)

    Smith, R. Lowell

    2004-07-01

    As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
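    A common equivalent circuit for a piezoelectric actuator of the kind discussed is the Butterworth-Van Dyke model: a motional R1-L1-C1 branch in parallel with a static capacitance C0. The sketch below is a generic illustration, not the paper's model; the component values are arbitrary.

```python
import math

def bvd_impedance(f, R1, L1, C1, C0):
    """Impedance of the Butterworth-Van Dyke equivalent circuit at frequency f (Hz)."""
    w = 2 * math.pi * f
    z_motional = R1 + 1j * w * L1 + 1 / (1j * w * C1)  # series RLC branch
    z_static = 1 / (1j * w * C0)                        # shunt (static) capacitance
    return z_motional * z_static / (z_motional + z_static)

# illustrative component values for a high-Q resonator
R1, L1, C1, C0 = 10.0, 5e-3, 2e-12, 5e-12
fs = 1 / (2 * math.pi * math.sqrt(L1 * C1))  # series resonance of the motional branch

# scan |Z| around fs; the magnitude dips near series resonance
freqs = [fs * (0.99 + 0.0002 * k) for k in range(101)]
mags = [abs(bvd_impedance(f, R1, L1, C1, C0)) for f in freqs]
f_min = freqs[mags.index(min(mags))]
```

    Fitting measured impedance spectra to a parametric model like this (e.g., by nonlinear regression on R1, L1, C1, C0) is what reduces raw spectra to a short list of attributes whose evolution can be tracked.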

  10. Giftedness and Genetics: The Emergenic-Epigenetic Model and Its Implications

    ERIC Educational Resources Information Center

    Simonton, Dean Keith

    2005-01-01

    The genetic endowment underlying giftedness may operate in a far more complex manner than often expressed in most theoretical accounts of the phenomenon. First, an endowment may be emergenic. That is, a gift may consist of multiple traits (multidimensional) that are inherited in a multiplicative (configurational), rather than an additive (simple)…

  11. The Development from Effortful to Automatic Processing in Mathematical Cognition.

    ERIC Educational Resources Information Center

    Kaye, Daniel B.; And Others

    This investigation capitalizes upon the information processing models that depend upon measurement of latency of response to a mathematical problem and the decomposition of reaction time (RT). Simple two term addition problems were presented with possible solutions for true-false verification, and accuracy and RT to response were recorded. Total…

  12. Self-Selection, Optimal Income Taxation, and Redistribution

    ERIC Educational Resources Information Center

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  13. Influence of the Mesh Geometry Evolution on Gearbox Dynamics during Its Maintenance

    NASA Astrophysics Data System (ADS)

    Dąbrowski, Z.; Dziurdź, J.; Klekot, G.

    2017-12-01

    Toothed gears constitute necessary elements of power transmission systems. They are applied in stationary devices as well as in drive systems of road vehicles, ships and craft, airplanes and helicopters. One of the problems related to toothed gear usage is the determination of their technical state and its evolution. Assuming that the gear slippage velocity is responsible for the vibration and noise generated by cooperating toothed wheels, the application of a simple cooperation model of rolled wheels with skew teeth is proposed for the analysis of the influence of mesh evolution on gear dynamics. In addition, an example of utilising an ordinary coherence function for investigating evolutionary mesh changes related to effects that cannot be described by means of the simple kinematic model is presented.

  14. Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos

    NASA Astrophysics Data System (ADS)

    Meng, Lili; Li, Xinfu; Zhang, Guang

    2017-12-01

    In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the existence and uniqueness of positive steady-state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated by using the bifurcation method, center manifold theory, bifurcation diagrams, and the largest Lyapunov exponent. The results show that pitchfork bifurcations, flip bifurcations, and chaos occur. Clearly, these phenomena are caused by the simple diffusion. A theoretical analysis of the chaos is very important; unfortunately, no such results are yet available, and some open problems are therefore given.
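    The largest Lyapunov exponent mentioned above is the standard numerical diagnostic separating periodic behaviour (negative exponent) from chaos (positive exponent). As a self-contained illustration, not the paper's two-patch model, the sketch below estimates it for the logistic map x → r x (1 − x).

```python
import math

def lyapunov(r, x0=0.4, burn=500, n=5000):
    """Estimate the largest Lyapunov exponent of the logistic map at parameter r."""
    x = x0
    for _ in range(burn):           # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)) + 1e-300)  # log |f'(x)|
    return total / n

lam_periodic = lyapunov(3.2)   # stable period-2 regime: exponent < 0
lam_chaotic = lyapunov(3.9)    # chaotic regime: exponent > 0
```

    The same averaging of log-derivatives along an orbit (with the map's Jacobian in place of f') extends to coupled two-patch maps.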

  15. Generation of multicellular tumor spheroids by the hanging-drop method.

    PubMed

    Timmins, Nicholas E; Nielsen, Lars K

    2007-01-01

    Owing to their in vivo-like characteristics, three-dimensional (3D) multicellular tumor spheroid (MCTS) cultures are gaining increasing popularity as an in vitro model of tumors. A straightforward and simple approach to the cultivation of these MCTS is the hanging-drop method. Cells are suspended in droplets of medium, where they develop into coherent 3D aggregates and are readily accessed for analysis. In addition to being simple, the method eliminates surface interactions with an underlying substratum (e.g., polystyrene plastic or agarose), requires only a low number of starting cells, and is highly reproducible. This method has also been applied to the co-cultivation of mixed cell populations, including the co-cultivation of endothelial cells and tumor cells as a model of early tumor angiogenesis.

  16. Simple models for the simulation of submarine melt for a Greenland glacial system model

    NASA Astrophysics Data System (ADS)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

    Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and position of the grounding line of these outlet glaciers. As the ocean warms, it is expected that submarine melt will increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity to use models with extremely high resolution of the order of a few hundred meters. This resolution is required not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundreds of meters), which questions the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on the use of a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates behavior qualitatively similar to that of 3-D general circulation models. To match results of the 3-D models in a quantitative manner, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data.
Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.

  17. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    DOE PAGES

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.; ...

    2017-08-23

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  18. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    NASA Astrophysics Data System (ADS)

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.; White, Martin; Williams, Anthony G.

    2017-08-01

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. We discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  19. Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniwal, Ankit; Lewicki, Marek; Wells, James D.

    We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.

  20. On one-parametric formula relating the frequencies of twin-peak quasi-periodic oscillations

    NASA Astrophysics Data System (ADS)

    Török, Gabriel; Goluchová, Kateřina; Šrámková, Eva; Horák, Jiří; Bakala, Pavel; Urbanec, Martin

    2018-01-01

    Twin-peak quasi-periodic oscillations (QPOs) are observed in several low-mass X-ray binary systems containing neutron stars. Timing analysis of the X-ray fluxes of more than a dozen such systems reveals remarkable correlations between the frequencies of two characteristic peaks present in the power density spectra. The individual correlations clearly differ, but they roughly follow a common pattern. High values of the measured QPO frequencies and strong modulation of the X-ray flux both suggest that the observed correlations are connected to orbital motion in the innermost part of an accretion disc. Several attempts to model these correlations with simple geodesic orbital models or phenomenological relations have failed in the past. We find and explore a surprisingly simple analytic relation that reproduces the individual correlations for a group of several sources through a single parameter. When an additional free parameter is considered within our relation, it reproduces well the data of a large group of 14 sources. The very existence and form of this simple relation support the hypothesis of the orbital origin of QPOs and provide a key for further development of QPO models. We discuss a possible physical interpretation of our relation's parameters and their links to concrete QPO models.

  1. A Simple Geometrical Model for Calculation of the Effective Emissivity in Blackbody Cylindrical Cavities

    NASA Astrophysics Data System (ADS)

    De Lucas, Javier

    2015-03-01

    A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented by using simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
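    The multiple-reflection mechanism behind effective emissivity can be shown with a strongly simplified, zero-geometry Monte Carlo sketch (a stand-in for the full back ray tracing above): each wall interaction absorbs the photon with probability eps (the wall emissivity), and a reflected photon escapes through the aperture with an assumed probability p_escape. Both parameters are illustrative.

```python
import random

def effective_emissivity_mc(eps, p_escape, n_photons=200_000, seed=42):
    """Fraction of photons absorbed before escaping; equals the effective
    absorptance and, by Kirchhoff's law, the effective emissivity."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_photons):
        while True:
            if rng.random() < eps:        # absorbed at the cavity wall
                absorbed += 1
                break
            if rng.random() < p_escape:   # reflected out through the aperture
                break
    return absorbed / n_photons

eps, p_escape = 0.7, 0.05
mc = effective_emissivity_mc(eps, p_escape)
# geometric-series result for this toy model:
analytic = eps / (eps + p_escape - eps * p_escape)
```

    Even with a modest wall emissivity, a small escape probability drives the effective emissivity close to 1, which is why deep cavities make good blackbodies; the real model replaces p_escape with actual cavity geometry.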

  2. A comparison of the genetic basis of wing size divergence in three parallel body size clines of Drosophila melanogaster.

    PubMed Central

    Gilchrist, A S; Partridge, L

    1999-01-01

    Body size clines in Drosophila melanogaster have been documented in both Australia and South America, and may exist in Southern Africa. We crossed flies from the northern and southern ends of each of these clines to produce F(1), F(2), and first backcross generations. Our analysis of generation means for wing area and wing length produced estimates of the additive, dominance, epistatic, and maternal effects underlying divergence within each cline. For both females and males of all three clines, the generation means were adequately described by these parameters, indicating that linkage and higher order interactions did not contribute significantly to wing size divergence. Marked differences were apparent between the clines in the occurrence and magnitude of the significant genetic parameters. No cline was adequately described by a simple additive-dominance model, and significant epistatic and maternal effects occurred in most, but not all, of the clines. Generation variances were also analyzed. Only one cline was described sufficiently by a simple additive variance model, indicating significant epistatic, maternal, or linkage effects in the remaining two clines. The diversity in genetic architecture of the clines suggests that natural selection has produced similar phenotypic divergence by different combinations of gene action and interaction. PMID:10581284
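    The additive-dominance test of generation means can be sketched with the classical joint-scaling approach (Mather & Jinks); this is a generic illustration, not the authors' analysis, and the observed means below are hypothetical numbers constructed to fit exactly.

```python
import numpy as np

# Expected generation means under an additive-dominance model with
# midparent m, additive effect a, dominance effect d:
#   P1 = m + a, P2 = m - a, F1 = m + d, F2 = m + d/2,
#   B1 = m + a/2 + d/2, B2 = m - a/2 + d/2.
X = np.array([
    [1,  1.0, 0.0],   # P1
    [1, -1.0, 0.0],   # P2
    [1,  0.0, 1.0],   # F1
    [1,  0.0, 0.5],   # F2
    [1,  0.5, 0.5],   # B1
    [1, -0.5, 0.5],   # B2
])

# hypothetical observed means consistent with m=100, a=8, d=3
y = np.array([108.0, 92.0, 103.0, 101.5, 105.5, 97.5])

params, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
m_hat, a_hat, d_hat = params
```

    With real data the residual sum of squares feeds a goodness-of-fit test; a significant lack of fit, as found for the clines above, signals epistatic or maternal effects beyond the simple additive-dominance model.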

  3. A new simple /spl infin/OH neuron model as a biologically plausible principal component analyzer.

    PubMed

    Jankovic, M V

    2003-01-01

    A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. Usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
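    For readers unfamiliar with Hebbian principal-component extraction, the sketch below shows a single-neuron extractor using the classic Oja rule. Note this is a stand-in, not the abstract's rule: Oja's rule stabilizes Hebbian growth with an implicit decay term, whereas the paper's rule instead uses an averaged postsynaptic activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-D inputs whose dominant variance lies along (1, 1)/sqrt(2)
C = np.array([[3.0, 2.5],
              [2.5, 3.0]])
data = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for x in data:
    y = w @ x                      # postsynaptic activity
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term minus implicit decay

principal = np.array([1.0, 1.0]) / np.sqrt(2)
alignment = abs(w @ principal)     # |cos| of the angle to the first eigenvector
```

    After training, the weight vector is approximately unit length and aligned with the leading eigenvector of the input covariance, i.e., the neuron outputs the first principal component.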

  4. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
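    A schematic sketch of a one-parameter allele-sharing likelihood in the spirit of Kong & Cox (1997): each family contributes a normalized sharing score z_i, the likelihood ratio is the product of (1 + delta * z_i), and the LOD score is the base-10 log of that ratio maximized over delta >= 0. The scores below are made-up illustrations, and the delta grid is bounded so every factor stays positive; a real implementation derives the scores and bounds from the pedigree data.

```python
import math

def lod_score(z):
    """Maximize sum(log(1 + delta*z_i)) over a grid of delta >= 0; return LOD."""
    zmin = min(z)
    # keep every factor 1 + delta*z_i strictly positive
    dmax = (-1.0 / zmin) * 0.999 if zmin < 0 else 2.0
    best = 0.0                     # delta = 0 gives log LR = 0
    for k in range(1001):
        d = dmax * k / 1000
        loglr = sum(math.log(1 + d * zi) for zi in z)
        best = max(best, loglr)
    return best / math.log(10)     # convert natural log to base-10 LOD

scores = [0.8, 1.2, -0.3, 0.9, 1.5, 0.2, -0.5, 1.1]  # hypothetical z_i
lod = lod_score(scores)
```

    Because the likelihood is exact for any missing-data pattern, the maximized ratio yields an accurate test rather than the conservative approximations the abstract criticizes.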

  5. Allele-sharing models: LOD scores and accurate linkage tests.

    PubMed Central

    Kong, A; Cox, N J

    1997-01-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087

  6. Random graph models of social networks.

    PubMed

    Newman, M E J; Watts, D J; Strogatz, S H

    2002-02-19

    We describe some new exactly solvable models of the structure of social networks, based on random graphs with arbitrary degree distributions. We give models both for simple unipartite networks, such as acquaintance networks, and bipartite networks, such as affiliation networks. We compare the predictions of our models to data for a number of real-world social networks and find that in some cases, the models are in remarkable agreement with the data, whereas in others the agreement is poorer, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.
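    Random graphs with an arbitrary degree distribution, as used in such models, are commonly generated with the configuration model: give each node a number of edge "stubs" equal to its degree and pair stubs at random. The sketch below is a minimal illustration (self-loops and multi-edges are simply discarded, which slightly perturbs the realized degrees); the degree sequence is an arbitrary example.

```python
import random

def configuration_model(degrees, seed=3):
    """Random simple graph approximating a prescribed degree sequence."""
    rng = random.Random(seed)
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    if len(stubs) % 2:
        raise ValueError("degree sequence must have even sum")
    rng.shuffle(stubs)
    edges = set()
    for u, v in zip(stubs[::2], stubs[1::2]):
        if u != v:                        # drop self-loops
            edges.add((min(u, v), max(u, v)))  # set() drops multi-edges
    return edges

degrees = [1, 2, 2, 3, 1, 4, 2, 1, 3, 1]  # example sequence for 10 nodes (even sum)
edges = configuration_model(degrees)
```

    Drawing the degree sequence from a measured distribution (e.g., of an acquaintance network) and comparing graph statistics against the real network is the basic workflow the abstract describes.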

  7. Applied and engineering versions of the theory of elastoplastic processes of active complex loading part 2: Identification and verification

    NASA Astrophysics Data System (ADS)

    Peleshko, V. A.

    2016-06-01

    The deviator constitutive relation of the proposed theory of plasticity has a three-term form (the stress, stress rate, and strain rate vectors formed from the deviators are collinear) and, in the specialized (applied) version, in addition to the simple loading function, contains four dimensionless constants of the material determined from experiments along a two-link strain trajectory with an orthogonal break. The proposed simple mechanism is used to calculate the constants of the model for four metallic materials that differ significantly in composition and in mechanical properties; the obtained constants do not deviate much from their average values (over the four materials). The latter are taken as universal constants in the engineering version of the model, which thus requires only one basic experiment, i.e., a simple loading test. If the material exhibits the strengthening property in cyclic circular deformation, then the model contains an additional constant determined from the experiment along a strain trajectory of this type. (In the engineering version of the model, the cyclic strengthening effect is not taken into account, which imposes a certain upper bound on the difference between the length of the strain trajectory arc and the modulus of the strain vector.) We present the results of model verification using the experimental data available in the literature about combined loading along two- and multi-link strain trajectories with various lengths of links and angles of breaks, with plane curvilinear segments of various constant and variable curvature, and with three-dimensional helical segments of various curvature and twist. (All in all, we use more than 80 strain programs; the materials are low- and medium-carbon steels, brass, and stainless steel.)
    These results prove that the model can be used to describe the process of arbitrary active (in the sense of nonnegative capacity of the shear) combined loading and final unloading of originally quasi-isotropic elastoplastic materials. In practical calculations, in the absence of experimental data about the properties of a material under combined loading, the use of the engineering version of the model is quite acceptable. The simple identification, wide verifiability, and the availability of a software implementation of the method for solving initial-boundary value problems permit treating the proposed theory as an applied theory.

  8. Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments

    NASA Astrophysics Data System (ADS)

    Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; de Zeeuw, Chris I.

    2016-11-01

    Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity.

  9. Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments

    PubMed Central

    Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; De Zeeuw, Chris I.

    2016-01-01

    Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity. PMID:27805050

  10. A Simple Model to Quantify Radiolytic Production following Electron Emission from Heavy-Atom Nanoparticles Irradiated in Liquid Suspensions.

    PubMed

    Wardlow, Nathan; Polin, Chris; Villagomez-Bernabe, Balder; Currell, Fred

    2015-11-01

    We present a simple model for a component of the radiolytic production of any chemical species due to electron emission from irradiated nanoparticles (NPs) in a liquid environment, provided the expression for the G value for product formation is known and is reasonably well characterized by a linear dependence on beam energy. This model takes nanoparticle size, composition, density and a number of other readily available parameters (such as X-ray and electron attenuation data) as inputs and therefore allows for the ready determination of this contribution. Several approximations are used, thus this model provides an upper limit to the yield of chemical species due to electron emission, rather than a distinct value, and this upper limit is compared with experimental results. After the general model is developed we provide details of its application to the generation of HO• through irradiation of gold nanoparticles (AuNPs), a potentially important process in nanoparticle-based enhancement of radiotherapy. This model has been constructed with the intention of making it accessible to other researchers who wish to estimate chemical yields through this process, and is shown to be applicable to NPs of single elements and mixtures. The model can be applied without the need to develop additional skills (such as using a Monte Carlo toolkit), providing a fast and straightforward method of estimating chemical yields. A simple framework for determining the HO• yield for different NP sizes at constant NP concentration and initial photon energy is also presented.

  11. Updated determination of stress parameters for nine well-recorded earthquakes in eastern North America

    USGS Publications Warehouse

    Boore, David M.

    2012-01-01

    Stress parameters (Δσ) are determined for nine relatively well-recorded earthquakes in eastern North America for ten attenuation models. This is an update of a previous study by Boore et al. (2010). New to this paper are observations from the 2010 Val des Bois earthquake, additional observations for the 1988 Saguenay and 2005 Riviere du Loup earthquakes, and consideration of six attenuation models in addition to the four used in the previous study. As in that study, it is clear that Δσ depends strongly on the rate of geometrical spreading (as well as other model parameters). The observations necessary to determine conclusively which attenuation model best fits the data are still lacking. At this time, a simple 1/R model seems to give as good an overall fit to the data as more complex models.
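    The geometrical-spreading comparison above rests on a simple point-source attenuation form, which can be sketched as geometric spreading 1/R combined with anelastic attenuation exp(-pi f R / (Q beta)). This is a generic textbook form, not the paper's fitted model, and the parameter values below are illustrative rather than fitted to eastern North America data.

```python
import math

def amplitude(R_km, f_hz, Q=1000.0, beta_km_s=3.7, A0=1.0):
    """Spectral amplitude at distance R: 1/R spreading times anelastic decay."""
    geometric = 1.0 / R_km                                      # 1/R spreading
    anelastic = math.exp(-math.pi * f_hz * R_km / (Q * beta_km_s))
    return A0 * geometric * anelastic

a10 = amplitude(10.0, 5.0)    # close-in amplitude
a100 = amplitude(100.0, 5.0)  # amplitude a factor of ~10+ smaller at 100 km
```

    Because the stress parameter scales the source spectrum while 1/R (or a steeper rate) controls its decay with distance, the inferred stress parameter trades off strongly against the assumed spreading rate, which is the dependence the abstract emphasizes.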

  12. Simple neck pain questions used in surveys, evaluated in relation to health outcomes: a cohort study

    PubMed Central

    2012-01-01

    Background The high prevalence of pain reported in many epidemiological studies, and the degree to which this prevalence reflects severe pain, are under discussion in the literature. The aim of the present study was to evaluate use of the simple neck pain questions commonly included in large epidemiological survey studies with respect to aspects of health. We investigated if and how an increase in the number of days with pain is associated with reduction in health outcomes. Methods A cohort of university students (baseline age 19–25 years) was recruited in 2002 and followed annually for 4 years. The baseline response rate was 69%, which resulted in 1200 respondents (627 women, 573 men). Participants were asked about present and past pain and perceptions of their general health, sleep disturbance, stress and energy levels, and general performance. The data were analyzed using a mixed model for repeated measurements and a random intercept logistic model. Results When reporting present pain, participants also reported lower prevalence of very good health, higher stress and sleep disturbance scores and lower energy score. Among those with current neck pain, additional questions characterizing the pain such as duration (categorized), additional pain sites and decreased general performance were associated with lower probability of very good health and higher amounts of sleep disturbance. Knowing about the presence or not of pain explains more of the variation in health between individuals than within individuals. Conclusion This study of young university students has demonstrated that simple neck pain survey questions capture features of pain that affect aspects of health such as perceived general health, sleep disturbance, and mood in terms of stress and energy. Simple pain questions are more useful for group descriptions than for describing or following pain in an individual. PMID:23102060

  13. Simple neck pain questions used in surveys, evaluated in relation to health outcomes: a cohort study.

    PubMed

    Grimby-Ekman, Anna; Hagberg, Mats

    2012-10-26

    The high prevalence of pain reported in many epidemiological studies, and the degree to which this prevalence reflects severe pain, are under discussion in the literature. The aim of the present study was to evaluate use of the simple neck pain questions commonly included in large epidemiological survey studies with respect to aspects of health. We investigated if and how an increase in the number of days with pain is associated with reduction in health outcomes. A cohort of university students (baseline age 19-25 years) was recruited in 2002 and followed annually for 4 years. The baseline response rate was 69%, which resulted in 1200 respondents (627 women, 573 men). Participants were asked about present and past pain and perceptions of their general health, sleep disturbance, stress and energy levels, and general performance. The data were analyzed using a mixed model for repeated measurements and a random intercept logistic model. When reporting present pain, participants also reported lower prevalence of very good health, higher stress and sleep disturbance scores and lower energy score. Among those with current neck pain, additional questions characterizing the pain such as duration (categorized), additional pain sites and decreased general performance were associated with lower probability of very good health and higher amounts of sleep disturbance. Knowing about the presence or not of pain explains more of the variation in health between individuals than within individuals. This study of young university students has demonstrated that simple neck pain survey questions capture features of pain that affect aspects of health such as perceived general health, sleep disturbance, and mood in terms of stress and energy. Simple pain questions are more useful for group descriptions than for describing or following pain in an individual.

  14. A new computational growth model for sea urchin skeletons.

    PubMed

    Zachos, Louis G

    2009-08-07

    A new computational model has been developed to simulate growth of regular sea urchin skeletons. The model incorporates the processes of plate addition and individual plate growth into a composite model of whole-body (somatic) growth. A simple developmental model based on hypothetical morphogens underlies the assumptions used to define the simulated growth processes. The data model is based on a Delaunay triangulation of plate growth center points, using the dual Voronoi polygons to define plate topologies. A spherical frame of reference is used for growth calculations, with affine deformation of the sphere (based on a Young-Laplace membrane model) to result in an urchin-like three-dimensional form. The model verifies that the patterns of coronal plates in general meet the criteria of Voronoi polygonalization, that a morphogen/threshold inhibition model for plate addition results in the alternating plate addition pattern characteristic of sea urchins, and that application of the Bertalanffy growth model to individual plates results in simulated somatic growth that approximates that seen in living urchins. The model suggests avenues of research that could explain some of the distinctions between modern sea urchins and the much more disparate groups of forms that characterized the Paleozoic Era.
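The plate-growth component above applies the Bertalanffy growth model to individual plates. A minimal sketch of the von Bertalanffy equation is given below; the parameter values are hypothetical illustrations, not values from the paper:

```python
import math

def bertalanffy(t, l_inf, k, t0=0.0):
    """von Bertalanffy growth: size approaches the asymptote l_inf
    at rate constant k, starting from zero size at t = t0."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Hypothetical plate with asymptotic diameter 5.0 mm and k = 0.8 per year
sizes = [round(bertalanffy(t, 5.0, 0.8), 3) for t in range(5)]
```

Summing such per-plate trajectories over a growing plate population is what yields the simulated somatic growth described in the abstract.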

  15. A simple analytical infiltration model for short-duration rainfall

    NASA Astrophysics Data System (ADS)

    Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming

    2017-12-01

Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, i.e., the Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (i.e., the SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange models) was compared based on numerical experiments and soil column experiments. In numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in soil column experiments, the infiltration rate fluctuated in a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
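The SHIP model's functional form is not given in the abstract, but the Philip model used as a baseline is standard; a minimal sketch of its two-term form follows, with hypothetical sorptivity and steady-rate parameters:

```python
import math

def philip_rate(t, s, a):
    """Philip two-term infiltration rate i(t) = S/(2*sqrt(t)) + A, t > 0,
    where S is sorptivity and A approximates the steady infiltration rate."""
    return s / (2.0 * math.sqrt(t)) + a

def philip_cumulative(t, s, a):
    """Cumulative infiltration I(t) = S*sqrt(t) + A*t."""
    return s * math.sqrt(t) + a * t

# Hypothetical parameters: S = 0.8 cm/h^0.5, A = 0.4 cm/h
rate_1h = philip_rate(1.0, 0.8, 0.4)  # infiltration rate after 1 h, cm/h
```

The early-time t^(-1/2) term is exactly where non-uniform initial water content matters most, which is the regime the SHIP variants are designed to bracket.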

  16. Conservative Exposure Predictions for Rapid Risk Assessment of Phase-Separated Additives in Medical Device Polymers.

    PubMed

    Chandrasekar, Vaishnavi; Janes, Dustin W; Saylor, David M; Hood, Alan; Bajaj, Akhil; Duncan, Timothy V; Zheng, Jiwen; Isayeva, Irada S; Forrey, Christopher; Casey, Brendan J

    2018-01-01

A novel approach for rapid risk assessment of targeted leachables in medical device polymers is proposed and validated. Risk evaluation involves understanding the potential of these additives to migrate out of the polymer and comparing their exposure to a toxicological threshold value. In this study, we propose that a simple diffusive transport model can be used to provide conservative exposure estimates for phase-separated color additives in device polymers. This model has been illustrated using a representative phthalocyanine color additive (manganese phthalocyanine, MnPC) and polymer (PEBAX 2533) system. Sorption experiments of MnPC into PEBAX were conducted in order to experimentally determine the diffusion coefficient, D = (1.6 ± 0.5) × 10⁻¹¹ cm²/s, and the matrix solubility limit, Cs = 0.089 wt.%, and model-predicted exposure values were validated by extraction experiments. Exposure values for the color additive were compared to a toxicological threshold for a sample risk assessment. Results from this study indicate that a diffusion model-based approach to predict exposure has considerable potential for use as a rapid, screening-level tool to assess the risk of color additives and other small-molecule additives in medical device polymers.
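As an illustration of the kind of diffusive transport estimate described above, the standard early-time Crank approximation for release from a plane polymer sheet can be sketched. The film thickness below is a hypothetical assumption; D is the value reported in the abstract:

```python
import math

def fractional_release(t, d, thickness):
    """Early-time Crank approximation for release from a plane sheet with
    both faces exposed: M_t / M_inf = (4 / L) * sqrt(D * t / pi).
    Valid roughly while the returned fraction stays below ~0.6."""
    return (4.0 / thickness) * math.sqrt(d * t / math.pi)

# D from the abstract (1.6e-11 cm^2/s); 0.05 cm film thickness is hypothetical
frac_day = fractional_release(86400.0, 1.6e-11, 0.05)  # fraction after 1 day
```

Because the square-root-in-time form overestimates late-time release, an estimate of this kind is conservative in the sense the abstract intends.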

  17. Evaporation estimation of rift valley lakes: comparison of models.

    PubMed

    Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe

    2009-01-01

Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. A remote sensing approach can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming and difficult. For areas like the Rift Valley regions of Ethiopia, the applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied. The Simple Method and remote sensing-based lake evaporation estimates were compared to the Penman, Energy Balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate a good correspondence of the models' outputs to those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large-scale applications to understand the spatial variation of the latent heat flux.

  18. Characterizing Aeroelastic Systems Using Eigenanalysis, Explicitly Retaining The Aerodynamic Degrees of Freedom

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Dowell, Earl H.

    2001-01-01

Discrete time aeroelastic models with explicitly retained aerodynamic modes have been generated employing a time-marching vortex lattice aerodynamic model. This paper presents analytical results from eigenanalysis of these models. The potential of these models to calculate the behavior of modes that represent damped system motion (noncritical modes), in addition to the simple harmonic modes, is explored. A typical section with only structural freedom in pitch is examined. The eigenvalues are examined and compared to experimental data. Issues regarding the convergence of the solution with respect to refining the aerodynamic discretization are investigated. Eigenvector behavior is examined; the eigenvector associated with a particular eigenvalue can be viewed as the set of modal participation factors for that mode. For the present formulation of the equations of motion, the vorticity of each aerodynamic element appears explicitly as an element of each eigenvector, in addition to the structural dynamic generalized coordinates. Thus, modal participation of the aerodynamic degrees of freedom can be assessed in addition to participation of the structural degrees of freedom.

  19. Simple additive effects are rare: A quantitative review of plant biomass and soil process responses to combined manipulations of CO2 and temperature

    USDA-ARS?s Scientific Manuscript database

    In recent years, increased awareness of the potential interactions between rising atmospheric CO2 concentrations ([CO2]) and temperature has illustrated the importance of multi-factorial ecosystem manipulation experiments for validating Earth System models. To address the urgent need for increased u...

  20. Keeping Things Simple: Why the Human Development Index Should Not Diverge from Its Equal Weights Assumption

    ERIC Educational Resources Information Center

    Stapleton, Lee M.; Garrod, Guy D.

    2007-01-01

    Using a range of statistical criteria rooted in Information Theory we show that there is little justification for relaxing the equal weights assumption underlying the United Nation's Human Development Index (HDI) even if the true HDI diverges significantly from this assumption. Put differently, the additional model complexity that unequal weights…

  1. Pi in the Sky: Hands-on Mathematical Activities for Teaching Astronomy.

    ERIC Educational Resources Information Center

    Pethoud, Robert

    This book of activities was designed to provide students with the opportunity to create mental models of concepts in astronomy while using simple, homemade tools. In addition, these sequential, hands-on activities are to help students see how scientific knowledge is obtained. The introduction describes the rationale for the book and describes the…

  2. Determination of dynamic variations in the optical properties of graphene oxide in response to gas exposure based on thin-film interference.

    PubMed

    Tabassum, Shawana; Dong, Liang; Kumar, Ratnesh

    2018-03-05

We present an effective yet simple approach to study the dynamic variations in the optical properties (such as the refractive index (RI)) of graphene oxide (GO) when exposed to gases in the visible spectral region, using the thin-film interference method. The dynamic variation in the complex refractive index of GO in response to exposure to a gas is an important factor affecting the performance of GO-based gas sensors. In contrast to conventional ellipsometry, this method eliminates the need to select a dispersion model from among a list of model choices, which is limiting if an applicable model is not known a priori. In addition, the method used is computationally simpler and does not need to employ any functional approximations. A further advantage over ellipsometry is that no bulky optics are required; as a result, the method can be easily integrated into the sensing system, thereby allowing reliable, simple, and dynamic evaluation of the optical performance of any GO-based gas sensor. In addition, the dynamically changing RI values of the GO layer derived with our method are corroborated by comparison with the values obtained from ellipsometry.

  3. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximate but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that greatly simplifies the building of reduced modular models according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for the simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  4. Experimental Investigation of the Flow on a Simple Frigate Shape (SFS)

    PubMed Central

    Mora, Rafael Bardera

    2014-01-01

Helicopter operations on board ships require special procedures that introduce additional limitations, known as ship helicopter operational limitations (SHOLs), which are a priority for all navies. This paper presents the main results obtained from the experimental investigation of a simple frigate shape (SFS), a typical case of study in experimental and computational aerodynamics. The results obtained in this investigation are used to assess the flow predicted for the SFS geometry against experimental data obtained by testing a ship model (reduced scale) in the wind tunnel and against on-board (full-scale) measurements performed on a real frigate-type ship geometry. PMID:24523646

  5. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.

  6. Some anticipated contributions to core fluid dynamics from the GRM

    NASA Technical Reports Server (NTRS)

    Vanvorhies, C.

    1985-01-01

    It is broadly maintained that the secular variation (SV) of the large scale geomagnetic field contains information on the fluid dynamics of Earth's electrically conducting outer core. The electromagnetic theory appropriate to a simple Earth model has recently been combined with reduced geomagnetic data in order to extract some of this information and ascertain its significance. The simple Earth model consists of a rigid, electrically insulating mantle surrounding a spherical, inviscid, and perfectly conducting liquid outer core. This model was tested against seismology by using truncated spherical harmonic models of the observed geomagnetic field to locate Earth's core-mantle boundary, CMB. Further electromagnetic theory has been developed and applied to the problem of estimating the horizontal fluid motion just beneath CMB. Of particular geophysical interest are the hypotheses that these motions: (1) include appreciable surface divergence indicative of vertical motion at depth, and (2) are steady for time intervals of a decade or more. In addition to the extended testing of the basic Earth model, the proposed GRM provides a unique opportunity to test these dynamical hypotheses.

  7. Phase-plane analysis of the totally asymmetric simple exclusion process with binding kinetics and switching between antiparallel lanes

    PubMed Central

    Kuan, Hui-Shun; Betterton, Meredith D.

    2016-01-01

    Motor protein motion on biopolymers can be described by models related to the totally asymmetric simple exclusion process (TASEP). Inspired by experiments on the motion of kinesin-4 motors on antiparallel microtubule overlaps, we analyze a model incorporating the TASEP on two antiparallel lanes with binding kinetics and lane switching. We determine the steady-state motor density profiles using phase-plane analysis of the steady-state mean field equations and kinetic Monte Carlo simulations. We focus on the density-density phase plane, where we find an analytic solution to the mean field model. By studying the phase-space flows, we determine the model’s fixed points and their changes with parameters. Phases previously identified for the single-lane model occur for low switching rate between lanes. We predict a multiple coexistence phase due to additional fixed points that appear as the switching rate increases: switching moves motors from the higher-density to the lower-density lane, causing local jamming and creating multiple domain walls. We determine the phase diagram of the model for both symmetric and general boundary conditions. PMID:27627345

  8. Electrical conductivity of metal powders under pressure

    NASA Astrophysics Data System (ADS)

    Montes, J. M.; Cuevas, F. G.; Cintas, J.; Urban, P.

    2011-12-01

A model for calculating the electrical conductivity of a compressed powder mass consisting of oxide-coated metal particles has been derived. A theoretical tool previously developed by the authors, the so-called 'equivalent simple cubic system', was used in deriving the model. This tool is based on relating the actual powder system to an equivalent one consisting of deforming spheres packed in a simple cubic lattice, which is much easier to examine. The proposed model relates the effective electrical conductivity of the powder mass under compression to its level of porosity. Other physically measurable parameters in the model are the conductivities of the metal and oxide constituting the powder particles, their radii, the mean thickness of the oxide layer and the tap porosity of the powder. Two additional parameters controlling the effect of the descaling of the particle oxide layer were introduced empirically. The proposed model was verified experimentally by measurements of the electrical conductivity of aluminium, bronze, iron, nickel and titanium powders under pressure. The consistency between theoretical predictions and experimental results was reasonably good in all cases.

  9. Shell model for drag reduction with polymer additives in homogeneous turbulence.

    PubMed

    Benzi, Roberto; De Angelis, Elisabetta; Govindarajan, Rama; Procaccia, Itamar

    2003-07-01

    Recent direct numerical simulations of the finite-extensibility nonlinear elastic dumbbell model with the Peterlin approximation of non-Newtonian hydrodynamics revealed that the phenomenon of drag reduction by polymer additives exists (albeit in reduced form) also in homogeneous turbulence. We use here a simple shell model for homogeneous viscoelastic flows, which recaptures the essential observations of the full simulations. The simplicity of the shell model allows us to offer a transparent explanation of the main observations. It is shown that the mechanism for drag reduction operates mainly on large scales. Understanding the mechanism allows us to predict how the amount of drag reduction depends on the various parameters in the model. The main conclusion is that drag reduction is not a universal phenomenon; it peaks in a window of parameters such as the Reynolds number and the relaxation rate of the polymer.

  10. Simultaneous pre-concentration and separation on simple paper-based analytical device for protein analysis.

    PubMed

    Niu, Ji-Cheng; Zhou, Ting; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Yang, Fu-Quan; Wu, Zhi-Yong

    2018-02-01

In this work, fast isoelectric focusing (IEF) was successfully implemented in an open paper fluidic channel for the simultaneous concentration and separation of proteins from a complex matrix. With this simple device, IEF can be finished in 10 min with a resolution of 0.03 pH units and a concentration factor of 10, as estimated with colored model proteins using smartphone-based colorimetric detection. Fast detection of albumin from human serum and of glycated hemoglobin (HbA1c) from blood cells was demonstrated. In addition, off-line identification of the model proteins from the IEF fractions with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was also shown. This PAD IEF is potentially useful either for point-of-care testing (POCT) or for biomarker analysis, as a cost-effective sample pretreatment method.

  11. Generalized concentration addition: a method for examining mixtures containing partial agonists.

    PubMed

    Howard, Gregory J; Webster, Thomas F

    2009-08-07

    Environmentally relevant toxic exposures often consist of simultaneous exposure to multiple agents. Methods to predict the expected outcome of such combinations are critical both to risk assessment and to an accurate judgment of whether combinations are synergistic or antagonistic. Concentration addition (CA) has commonly been used to assess the presence of synergy or antagonism in combinations of similarly acting chemicals, and to predict effects of combinations of such agents. CA has the advantage of clear graphical interpretation: Curves of constant joint effect (isoboles) must be negatively sloped straight lines if the mixture is concentration additive. However, CA cannot be directly used to assess combinations that include partial agonists, although such agents are of considerable interest. Here, we propose a natural extension of CA to a functional form that may be applied to mixtures including full agonists and partial agonists. This extended definition, for which we suggest the term "generalized concentration addition," encompasses linear isoboles with slopes of any sign. We apply this approach to the simple example of agents with dose-response relationships described by Hill functions with slope parameter n=1. The resulting isoboles are in all cases linear, with negative, zero and positive slopes. Using simple mechanistic models of ligand-receptor systems, we show that the same isobole pattern and joint effects are generated by modeled combinations of full and partial agonists. Special cases include combinations of two full agonists and a full agonist plus a competitive antagonist.
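For the Hill case with slope parameter n=1 described above, the generalized-concentration-addition prediction can be sketched numerically. The dose and parameter values below are hypothetical illustrations, not values from the paper:

```python
def hill(c, alpha, k):
    """Hill dose-response with slope n = 1: effect = alpha*c/(k + c)."""
    return alpha * c / (k + c)

def hill_inverse(e, alpha, k):
    """Inverse Hill function f^-1(E) = E*k/(alpha - E); for a partial
    agonist it is negative when E exceeds that agent's maximum alpha."""
    return e * k / (alpha - e)

def gca_effect(doses, alphas, ks, iters=200):
    """Generalized concentration addition for Hill curves with n = 1:
    solve sum_i c_i / f_i^{-1}(E) = 1 for the mixture effect E by
    bisection. 1/f_i^{-1}(E) = (alpha_i - E)/(E*k_i), so the sum is
    strictly decreasing in E and the root is unique."""
    def g(e):
        return sum(c * (a - e) / (e * k) for c, a, k in zip(doses, alphas, ks))
    lo, hi = 1e-12, max(alphas) * (1.0 - 1e-9)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Full agonist (alpha=1, K=1) plus partial agonist (alpha=0.4, K=1), unit doses
e_mix = gca_effect([1.0, 1.0], [1.0, 0.4], [1.0, 1.0])  # 7/15, ~0.467
```

Note that the predicted mixture effect (0.467) exceeds the partial agonist's maximum (0.4): above that level the partial agonist contributes a negative term, behaving as a competitive antagonist, exactly the regime plain CA cannot represent.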

  12. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics; and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.

  13. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    PubMed

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
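MAP-DP itself is too involved for a short sketch, but the K-means baseline whose fixed-K assumption it removes is easy to state. A minimal 1-D Lloyd's-algorithm sketch (toy data, not from the paper) is:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm for 1-D data; returns sorted centroids.
    Note that K must be chosen in advance, unlike in MAP-DP."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ends up empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated 1-D clusters; K = 2 is supplied by hand
data = [0.0, 0.1, 0.2, 9.8, 9.9, 10.0]
centers = kmeans(data, 2)
```

The hand-supplied `k` in this sketch is precisely the quantity MAP-DP estimates from the data via its Dirichlet-process prior.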

  14. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm

    PubMed Central

    Baig, Fahd; Little, Max A.

    2016-01-01

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525

  15. Untangling Slab Dynamics Using 3-D Numerical and Analytical Models

    NASA Astrophysics Data System (ADS)

    Holt, A. F.; Royden, L.; Becker, T. W.

    2016-12-01

    Increasingly sophisticated numerical models have enabled us to make significant strides in identifying the key controls on how subducting slabs deform. For example, 3-D models have demonstrated that subducting plate width, and the related strength of toroidal flow around the plate edge, exerts a strong control on both the curvature and the rate of migration of the trench. However, the results of numerical subduction models can be difficult to interpret, and many first order dynamics issues remain at least partially unresolved. Such issues include the dominant controls on trench migration, the interdependence of asthenospheric pressure and slab dynamics, and how nearby slabs influence each other's dynamics. We augment 3-D, dynamically evolving finite element models with simple, analytical force-balance models to distill the physics associated with subduction into more manageable parts. We demonstrate that for single, isolated subducting slabs much of the complexity of our fully numerical models can be encapsulated by simple analytical expressions. Rates of subduction and slab dip correlate strongly with the asthenospheric pressure difference across the subducting slab. For double subduction, an additional slab gives rise to more complex mantle pressure and flow fields, and significantly extends the range of plate kinematics (e.g., convergence rate, trench migration rate) beyond those present in single slab models. Despite these additional complexities, we show that much of the dynamics of such multi-slab systems can be understood using the physics illuminated by our single slab study, and that a force-balance method can be used to relate intra-plate stress to viscous pressure in the asthenosphere and coupling forces at plate boundaries. This method has promise for rapid modeling of large systems of subduction zones on a global scale.

  16. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
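The claim that the common metrics follow from the parameters of a linear error model y = a + b*x + eps can be sketched as follows; the parameter values (a = 0.5, b = 1.2, sigma = 0.3) are hypothetical:

```python
import math, random

def predicted_metrics(a, b, sigma_eps, mu_x, var_x):
    """Bias, MSE, and correlation implied by the linear error model
    y = a + b*x + eps with eps ~ N(0, sigma_eps^2) and x having
    mean mu_x and variance var_x."""
    bias = a + (b - 1.0) * mu_x
    mse = bias ** 2 + (b - 1.0) ** 2 * var_x + sigma_eps ** 2
    corr = b * math.sqrt(var_x) / math.sqrt(b ** 2 * var_x + sigma_eps ** 2)
    return bias, mse, corr

# Synthetic check: simulate the error model and compare empirical metrics
random.seed(1)
xs = [random.gauss(2.0, 1.0) for _ in range(20000)]
ys = [0.5 + 1.2 * x + random.gauss(0.0, 0.3) for x in xs]
emp_bias = sum(y - x for x, y in zip(xs, ys)) / len(xs)
emp_mse = sum((y - x) ** 2 for x, y in zip(xs, ys)) / len(xs)
bias, mse, corr = predicted_metrics(0.5, 1.2, 0.3, 2.0, 1.0)
```

Since (a, b, sigma_eps) determine all three metrics but not vice versa, reporting the error-model parameters carries strictly more information than reporting the metrics.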

  17. Implementation of the Realized Genomic Relationship Matrix to Open-Pollinated White Spruce Family Testing for Disentangling Additive from Nonadditive Genetic Effects

    PubMed Central

    Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.

    2016-01-01

    Open-pollinated (OP) family testing combines the simplest known progeny evaluation with quantitative genetics analyses, as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned, as the assumption of “half-sibling” relationships in OP families may often be violated. We compared pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling relationships, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under a simple and shallow pedigree structure. PMID:26801647

  18. Convective Detrainment and Control of the Tropical Water Vapor Distribution

    NASA Astrophysics Data System (ADS)

    Kursinski, E. R.; Rind, D.

    2006-12-01

    Sherwood et al. (2006) developed a simple power-law model describing the relative humidity distribution in the tropical free troposphere, where the power-law exponent is the ratio of a drying time scale (tied to subsidence rates) to a moistening time scale, the average time between convective moistening events, whose temporal distribution is described as a Poisson distribution. Sherwood et al. showed that the relative humidity distribution observed by GPS occultations and MLS is indeed close to a power law, approximately consistent with the simple model's prediction. Here we modify this simple model to be in terms of vertical length scales rather than time scales, in a manner that we think more correctly matches the model predictions to the observations. The subsidence is now in terms of the vertical distance the air mass has descended since it last detrained from a convective plume. The moisture source term becomes a profile of convective detrainment flux versus altitude. The vertical profile of the convective detrainment flux is deduced from the observed distribution of the specific humidity at each altitude combined with sinking rates estimated from radiative cooling. The resulting free-tropospheric detrainment profile increases with altitude above 3 km somewhat like an exponential profile, which explains the approximate power-law behavior observed by Sherwood et al. The observations also reveal a seasonal variation in the detrainment profile, reflecting changes in convective behavior expected by some based on observed seasonal changes in the vertical structure of convective regions. The simple model results will be compared with the moisture control mechanisms in a GCM with many additional mechanisms, the GISS climate model, as described in Rind (2006). References: Rind, D., 2006: Water-vapor feedback. In Frontiers of Climate Modeling, J. T. Kiehl and V. Ramanathan (eds), Cambridge University Press [ISBN-13 978-0-521-79132-8], 251-284. Sherwood, S., E. R. Kursinski and W. Read, A distribution law for free-tropospheric relative humidity, J. Clim., in press, 2006.
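    The timescale-ratio argument can be sketched in a few lines of Monte Carlo (illustrative parameter values of our own choosing; this is the generic Sherwood-type time-scale mechanism, not the modified length-scale version proposed here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch: air dries exponentially on a time scale tau_dry after
# its last moistening event, and moistening events are Poisson, so
# waiting times t since the last event are exponential with mean tau_moist.
tau_dry, tau_moist = 5.0, 10.0           # alpha = tau_dry / tau_moist = 0.5
t = rng.exponential(tau_moist, 1_000_000)
rh = np.exp(-t / tau_dry)                # relative humidity after drying

# Change of variables gives p(RH) = alpha * RH**(alpha - 1): a power law
# whose exponent is set by the ratio of the two time scales.
alpha = tau_dry / tau_moist
analytic_mean = alpha / (alpha + 1.0)    # mean of that power-law density
```

    The Monte Carlo sample reproduces the analytic power-law moments, which is the sense in which an exponential drying law plus Poisson moistening yields the observed near-power-law humidity distribution.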

  19. Providing Additional Support for MNA by Including Quantitative Lines of Evidence for Abiotic Degradation and Co-metabolic Oxidation of Chlorinated Ethylenes

    DTIC Science & Technology

    2017-02-08

    cost benefit of the technology. 7.1 COST MODEL A simple cost model for the technology is presented so that a remediation professional can understand...reporting costs. The benefit of the qPCR analyses is that they allow the user to determine if aerobic cometabolism is possible. Because the PHE and...of Chlorinated Ethylenes February 2017

  20. DNA nanosensor surface grafting and salt dependence

    NASA Astrophysics Data System (ADS)

    Carvalho, B. G.; Fagundes, J.; Martin, A. A.; Raniero, L.; Favero, P. P.

    2013-02-01

    In this paper we investigated a Paracoccidioides brasiliensis fungus nanosensor by simulating single-stranded DNA grafting on a gold nanoparticle. To improve our knowledge of the nanoparticle environment, the addition of salt solution was studied in the models we propose. The nanoparticle and DNA are represented by economical models validated in this paper. In addition, the effects of DNA grafting and salt are evaluated through adsorption and bond energy calculations. This theoretical evaluation supports experimental techniques for disease diagnostics.

  1. Usage Automata

    NASA Astrophysics Data System (ADS)

    Bartoletti, Massimo

    Usage automata are an extension of finite state automata with some additional features (e.g. parameters and guards) that improve their expressivity. Usage automata are expressive enough to model security requirements of real-world applications; at the same time, they are simple enough to be amenable to static analysis, e.g. they can be model-checked against abstractions of program usages. We study here some foundational aspects of usage automata. In particular, we discuss their expressive power and their effective use in run-time mechanisms for enforcing usage policies.

  2. An interactive modelling tool for understanding hydrological processes in lowland catchments

    NASA Astrophysics Data System (ADS)

    Brauer, Claudia; Torfs, Paul; Uijlenhoet, Remko

    2016-04-01

    Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS), a rainfall-runoff model for catchments with shallow groundwater (Brauer et al., 2014a,b). WALRUS explicitly simulates processes which are important in lowland catchments, such as feedbacks between the saturated and unsaturated zones and between groundwater and surface water. WALRUS has a simple model structure and few parameters with physical connotations. Some default functions (which can be changed easily for research purposes) are implemented to facilitate application by practitioners and students. The effect of water management on hydrological variables can be simulated explicitly. The model description and applications are published in open access journals (Brauer et al., 2014a,b). The open source code (provided as an R package) and manual can be downloaded freely (www.github.com/ClaudiaBrauer/WALRUS). We organised a short course for Dutch water managers and consultants to become acquainted with WALRUS. We are now adapting this course into a stand-alone tutorial suitable for a varied, international audience. In addition, simple models can help teachers explain hydrological principles effectively. We used WALRUS to generate examples for simple interactive tools, which we will present at the EGU General Assembly. C.C. Brauer, A.J. Teuling, P.J.J.F. Torfs, R. Uijlenhoet (2014a): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geosci. Model Dev., 7, 2313-2332. C.C. Brauer, P.J.J.F. Torfs, A.J. Teuling, R. Uijlenhoet (2014b): The Wageningen Lowland Runoff Simulator (WALRUS): application to the Hupsel Brook catchment and Cabauw polder, Hydrol. Earth Syst. Sci., 18, 4007-4028.

  3. Artificial neural networks using complex numbers and phase encoded weights.

    PubMed

    Michel, Howard E; Awwal, Abdul Ahad S

    2010-04-01

    The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
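    The flavor of phase encoding with complex weights can be sketched as follows (a hand-constructed toy, with our own choice of weights and activation rule rather than the paper's derived aggregation function and learning rule):

```python
import numpy as np

# Hedged sketch of a complex-valued neuron (CVN): Boolean inputs are
# phase-encoded on the unit circle (0 -> phase 0, 1 -> phase pi),
# the weights are complex, aggregation is a complex sum, and the
# activation here reads out the phase quadrant of the aggregate.
# Weights are fixed by hand for illustration, not learned.
def phase_encode(bits):
    return np.exp(1j * np.pi * np.asarray(bits, dtype=float))

def cvn_xor(bits):
    w = np.array([1.0, 1.0j])            # complex weights
    z = np.dot(w, phase_encode(bits))    # complex aggregation
    return int(z.real * z.imag < 0)      # phase-quadrant activation

outputs = [cvn_xor(b) for b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

    The single neuron above computes XOR on its two inputs, a problem no single real-valued perceptron can solve, which illustrates the sense in which a CVN exceeds the conventional linearly separable limit without additional neuron stages or higher-order terms.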

  4. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.

  5. A Neurocomputational Model of the Effect of Cognitive Load on Freezing of Gait in Parkinson's Disease.

    PubMed

    Muralidharan, Vignesh; Balasubramani, Pragathi P; Chakravarthy, V Srinivasa; Gilat, Moran; Lewis, Simon J G; Moustafa, Ahmed A

    2016-01-01

    Experimental data show that perceptual cues can either exacerbate or ameliorate freezing of gait (FOG) in Parkinson's Disease (PD). For example, simple visual stimuli like stripes on the floor can alleviate freezing, whereas complex stimuli like narrow doorways can trigger it. We present a computational model of the cognitive and motor cortico-basal ganglia loops that explains the effects of sensory and cognitive processes on FOG. The model simulates strong causative factors of FOG including decision conflict (a disagreement of various sensory stimuli in their association with a response) and cognitive load (complexity of coupling a stimulus with downstream mechanisms that control gait execution). Specifically, the model simulates gait of PD patients (freezers and non-freezers) as they navigate a series of doorways while simultaneously responding to several Stroop word cues in a virtual reality setup. The model is based on an actor-critic architecture of Reinforcement Learning involving Utility-based decision making, where Utility is a weighted sum of Value and Risk functions. The model accounts for the following experimental data: (a) the increased foot-step latency seen in relation to high-conflict cues, (b) the high number of motor arrests seen in PD freezers when faced with a complex cue compared to the simple cue, and (c) the effect of dopamine medication on these motor arrests. The freezing behavior arises as a result of the addition of task parameters (doorways and cues) and not due to inherent differences in the subject group. The model predicts a differential role of risk sensitivity in PD freezers and non-freezers in the cognitive and motor loops. Additionally, this first-of-its-kind model provides a plausible framework for understanding the influence of cognition on automatic motor actions in controls and Parkinson's Disease.

  6. A Neurocomputational Model of the Effect of Cognitive Load on Freezing of Gait in Parkinson's Disease

    PubMed Central

    Muralidharan, Vignesh; Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa; Gilat, Moran; Lewis, Simon J. G.; Moustafa, Ahmed A.

    2017-01-01

    Experimental data show that perceptual cues can either exacerbate or ameliorate freezing of gait (FOG) in Parkinson's Disease (PD). For example, simple visual stimuli like stripes on the floor can alleviate freezing, whereas complex stimuli like narrow doorways can trigger it. We present a computational model of the cognitive and motor cortico-basal ganglia loops that explains the effects of sensory and cognitive processes on FOG. The model simulates strong causative factors of FOG including decision conflict (a disagreement of various sensory stimuli in their association with a response) and cognitive load (complexity of coupling a stimulus with downstream mechanisms that control gait execution). Specifically, the model simulates gait of PD patients (freezers and non-freezers) as they navigate a series of doorways while simultaneously responding to several Stroop word cues in a virtual reality setup. The model is based on an actor-critic architecture of Reinforcement Learning involving Utility-based decision making, where Utility is a weighted sum of Value and Risk functions. The model accounts for the following experimental data: (a) the increased foot-step latency seen in relation to high-conflict cues, (b) the high number of motor arrests seen in PD freezers when faced with a complex cue compared to the simple cue, and (c) the effect of dopamine medication on these motor arrests. The freezing behavior arises as a result of the addition of task parameters (doorways and cues) and not due to inherent differences in the subject group. The model predicts a differential role of risk sensitivity in PD freezers and non-freezers in the cognitive and motor loops. Additionally, this first-of-its-kind model provides a plausible framework for understanding the influence of cognition on automatic motor actions in controls and Parkinson's Disease. PMID:28119584

  7. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. A change in water content leads to a change in the indicator's fluorescence color under ultra-violet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L*a*b*, u′v′, HSV, and YCbCr have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in the linear domain achieves a minimum mean square error of 1.06% for a 3-fold cross-validation method. Additionally, the resultant water content estimation model is implemented and evaluated on an off-the-shelf Android-based smartphone.
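    The calibration step can be sketched as follows (hypothetical hue-water pairs standing in for real camera measurements; the paper's actual models use richer color-space features and cross-validation):

```python
import numpy as np

# Hedged sketch: fit a quadratic polynomial mapping a single color
# feature (e.g. hue extracted from the captured image) to water
# content. The data below are invented stand-ins for camera readings.
water = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # % content
hue   = np.array([0.62, 0.58, 0.53, 0.47, 0.40, 0.32, 0.23]) # feature

coeffs = np.polyfit(hue, water, 2)   # quadratic regression model
predict = np.poly1d(coeffs)

# Estimate the water content of a new sample from its measured hue
estimate = predict(0.50)
```

    In practice one would fit such a model per color feature, score each by cross-validated mean square error, and keep the best-performing feature set, which is the selection the paper performs across color spaces.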

  8. Psychophysiological interaction between superior temporal gyrus (STG) and cerebellum: An fMRI study

    NASA Astrophysics Data System (ADS)

    Yusoff, A. N.; Teng, X. L.; Ng, S. B.; Hamid, A. I. A.; Mukari, S. Z. M.

    2016-03-01

    This study aimed to model the psychophysiological interaction (PPI) between the bilateral STG and cerebellum (lobule VI and lobule VII) during an arithmetic addition task. Eighteen young adults participated in this study. They were instructed to solve single-digit addition tasks in quiet and noisy backgrounds during an fMRI scan. Results showed that, in both hemispheres, the response in the cerebellum was linearly influenced by the activity in the STG (and vice versa) for both in-quiet and in-noise conditions. However, the influence of the cerebellum on the STG seemed to be modulated by noise. A two-way PPI model between the STG and cerebellum is suggested. The connectivity between the two regions during a simple addition task in a noisy condition is modulated by the participants’ heightened attention to perception.

  9. The Behavioral Economics of Choice and Interval Timing

    PubMed Central

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985
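    A minimal sketch of the BEM mechanism follows (invented bin edges and payoff values, not the fitted model): time is represented logarithmically, each represented time bin stores the payoff obtained for each response, and at any real time the response with the highest payoff in the current bin is emitted.

```python
import numpy as np

# Weber-law-compliant representation: log-spaced time bins (seconds)
edges = np.geomspace(0.1, 100.0, 11)

# payoff[bin, response]: illustrative values in which response 0 has
# paid off early in the interval and response 1 late
payoff = np.zeros((10, 2))
payoff[:5, 0] = 1.0                    # "short" response reinforced early
payoff[5:, 1] = 1.0                    # "long" response reinforced late

def respond(t):
    b = np.searchsorted(edges, t) - 1  # locate real time t in log time
    b = np.clip(b, 0, payoff.shape[0] - 1)
    return int(np.argmax(payoff[b]))   # emit the highest-payoff response
```

    With these toy payoffs the switch from the "short" to the "long" response falls at the geometric mean of the interval endpoints, the bisection-point signature usually taken as evidence of logarithmic timing.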

  10. Universal resilience patterns in cascading load model: More capacity is not always better

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Wang, Xue; Cai, Lin; Ni, Chengzhang; Xie, Wei; Xu, Bo

    We study the problem of universal resilience patterns in complex networks against cascading failures. We revise the classical betweenness method and overcome its limitation in quantifying the load in a cascading model. Considering that the load generated by all nodes should be equal to that transported by all edges in the whole network, we propose a new method to quantify the load on an edge and construct a simple cascading model. By attacking the edge with the highest load, we show that, if the flow between two nodes is transported along the shortest paths between them, then the resilience of some networks against cascading failures inversely decreases with the enhancement of the capacity of every edge, i.e. more capacity is not always better. We also observe an abnormal fluctuation of the additional load that exceeds the capacity of each edge. Using a simple graph, we analyze the propagation of cascading failures step by step, and give a reasonable explanation of the abnormal fluctuation of cascading dynamics.
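    The load-quantification idea can be sketched on a toy graph (our simplified reading, with an invented five-node network: one unit of flow is routed along a shortest path between every node pair, so the load generated by all nodes equals the load carried by all edges):

```python
import collections
from itertools import combinations

# Hypothetical path graph with a shortcut: adjacency lists
graph = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}

def shortest_path(g, s, t):
    """One shortest path from s to t by breadth-first search."""
    prev, seen, q = {s: None}, {s}, collections.deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in g[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    path, u = [], t
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

# Route one unit of flow per node pair; accumulate load on each edge
edge_load = collections.Counter()
for s, t in combinations(graph, 2):
    p = shortest_path(graph, s, t)
    for u, v in zip(p, p[1:]):
        edge_load[frozenset((u, v))] += 1.0

# Capacity proportional to initial load, as in standard cascading models
tolerance = 0.2
capacity = {e: (1 + tolerance) * load for e, load in edge_load.items()}
```

    Removing the highest-load edge then reroutes flow, and any edge whose new load exceeds its capacity fails in turn, which is the cascade whose step-by-step propagation the abstract analyzes.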

  11. Pulsed Rabi oscillations in quantum two-level systems: beyond the area theorem

    NASA Astrophysics Data System (ADS)

    Fischer, Kevin A.; Hanschke, Lukas; Kremser, Malte; Finley, Jonathan J.; Müller, Kai; Vučković, Jelena

    2018-01-01

    The area theorem states that when a short optical pulse drives a quantum two-level system, it undergoes Rabi oscillations in the probability of scattering a single photon. In this work, we investigate the breakdown of the area theorem as the pulse length becomes non-negligible and for certain pulse areas. Using simple quantum trajectories, we provide an analytic approximation to the photon emission dynamics of a two-level system. Our model provides an intuitive way to understand re-excitation, which elucidates the mechanism behind the two-photon emission events that can spoil single-photon emission. We experimentally measure the emission statistics from a semiconductor quantum dot, acting as a two-level system, and show good agreement with our simple model for short pulses. Additionally, the model clearly explains our recent results (Fischer, Hanschke et al 2017 Nat. Phys.) showing dominant two-photon emission from a two-level system for pulses with interaction areas equal to an even multiple of π.

  12. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    PubMed

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectations based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
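    The best-supported model is simple enough to state in code (synthetic data with invented coefficients, not the study's field measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical stimuli with known luminance and saturation contrasts
lum_contrast = rng.uniform(0, 1, n)
sat_contrast = rng.uniform(0, 1, n)

# Generating model for the simulation: detection probability as an
# additive linear function of the two contrasts (coefficients invented)
p = np.clip(0.1 + 0.4 * lum_contrast + 0.3 * sat_contrast, 0, 1)
detected = rng.random(n) < p

# Recover the additive model: detected ~ b0 + bL*lum + bS*sat
X = np.column_stack([np.ones(n), lum_contrast, sat_contrast])
beta, *_ = np.linalg.lstsq(X, detected.astype(float), rcond=None)
```

    The fitted slopes recover the generating coefficients, mirroring the paper's finding that each predictor contributes a modest but consistent additive share to detectability.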

  13. Research on Process Models of Basic Arithmetic Skills, Technical Report No. 303. Psychology and Education Series - Final Report.

    ERIC Educational Resources Information Center

    Suppes, Patrick; And Others

    This report presents a theory of eye movement that accounts for main features of the stochastic behavior of eye-fixation durations and direction of movement of saccades in the process of solving arithmetic exercises of addition and subtraction. The best-fitting distribution of fixation durations with a relatively simple theoretical justification…

  14. Collaboration using roles [in computer network security]

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1990-01-01

    Segregation of roles into alternative accounts is a model which provides not only the ability to collaborate but also enables accurate accounting of resources consumed by collaborative projects, protects the resources and objects of such a project, and does not introduce new security vulnerabilities. The implementation presented here does not require users to remember additional passwords and provides a very simple consistent interface.

  15. Additive schemes for certain operator-differential equations

    NASA Astrophysics Data System (ADS)

    Vabishchevich, P. N.

    2010-12-01

    Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
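    The structure of such additive schemes can be sketched generically; the following two-component weighted splitting is illustrative only, not the paper's specific construction. For du/dt + (A_1 + A_2)u = f with self-adjoint nonnegative operators A_i, each time step of size τ solves one simple subproblem per component:

```latex
% Generic two-component additive (splitting) scheme, weight \sigma:
\frac{u^{n+1/2} - u^{n}}{\tau}
  + A_1\left(\sigma u^{n+1/2} + (1-\sigma)u^{n}\right) = f_1^{n},
\qquad
\frac{u^{n+1} - u^{n+1/2}}{\tau}
  + A_2\left(\sigma u^{n+1} + (1-\sigma)u^{n+1/2}\right) = f_2^{n},
\qquad f_1^{n} + f_2^{n} = f^{n}.
```

    For weights σ ≥ 1/2 such weighted two-level schemes are unconditionally stable in the appropriate Hilbert-space norms, and each substep only requires inverting the simple operator I + στA_i rather than the full coupled operator.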

  16. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
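    A simplified sketch of the additive scoring idea follows (toy features and compounds; the published model's enrichment statistics are more careful than the plain log-odds with pseudocounts used here):

```python
from math import log

# Hedged sketch: weight each structural feature by its log-odds
# enrichment among toxic vs. nontoxic training compounds, then score
# a new compound additively over the features it contains.
def feature_weights(toxic_sets, nontoxic_sets, all_feats):
    n_tox, n_non = len(toxic_sets), len(nontoxic_sets)
    weights = {}
    for f in all_feats:
        t = sum(f in c for c in toxic_sets) + 1      # pseudocount
        n = sum(f in c for c in nontoxic_sets) + 1
        weights[f] = log((t / (n_tox + 2)) / (n / (n_non + 2)))
    return weights

def wfs_score(compound_feats, weights):
    # Simple additive model: sum the weights of the present features
    return sum(weights.get(f, 0.0) for f in compound_feats)

# Toy training data: the "nitro" feature is enriched in toxic compounds
toxic = [{"nitro", "ring"}, {"nitro"}, {"nitro", "halogen"}]
nontox = [{"ring"}, {"halogen"}, {"ring", "halogen"}]
w = feature_weights(toxic, nontox, {"nitro", "ring", "halogen"})
```

    New compounds are then ranked by `wfs_score`, so a compound carrying the enriched feature scores higher than one carrying only features common to nontoxic compounds, which is the interpretability the abstract emphasizes: each feature's contribution to the score is explicit.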

  17. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    PubMed

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously-ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real world data from starlings.
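    The contrast between the two families of measures is easy to state in code (an invented allele-frequency vector for illustration):

```python
import numpy as np

# Hypothetical allele frequencies at one locus, summing to 1
p = np.array([0.5, 0.3, 0.1, 0.1])

H = -np.sum(p * np.log(p))                  # Shannon entropy
heterozygosity = 1.0 - np.sum(p**2)         # expected heterozygosity

# Effective allele numbers (Hill numbers of order 1 and 2)
effective_alleles_H = np.exp(H)
effective_alleles_het = 1.0 / np.sum(p**2)
```

    The entropy-based measure weighs every allele by its population fraction, whereas heterozygosity depends only on the sum of squared frequencies, so rare alleles shift exp(H) more than 1/Σp², which is the extra information the abstract argues entropy captures.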

  18. Assessing the Performance of Computationally Simple and Complex Representations of Aerosol Processes using a Testbed Methodology

    NASA Astrophysics Data System (ADS)

    Fast, J. D.; Ma, P.; Easter, R. C.; Liu, X.; Zaveri, R. A.; Rasch, P.

    2012-12-01

    Predictions of aerosol radiative forcing in climate models still contain large uncertainties, resulting from a poor understanding of certain aerosol processes, the level of complexity of aerosol processes represented in models, and the ability of models to account for sub-grid scale variability of aerosols and processes affecting them. In addition, comparing the performance and computational efficiency of new aerosol process modules used in various studies is problematic because different studies often employ different grid configurations, meteorology, trace gas chemistry, and emissions that affect the temporal and spatial evolution of aerosols. To address this issue, we have developed an Aerosol Modeling Testbed (AMT) to systematically and objectively evaluate aerosol process modules. The AMT consists of the modular Weather Research and Forecasting (WRF) model, a series of testbed cases for which extensive in situ and remote sensing measurements of meteorological, trace gas, and aerosol properties are available, and a suite of tools to evaluate the performance of meteorological, chemical, aerosol process modules. WRF contains various parameterizations of meteorological, chemical, and aerosol processes and includes interactive aerosol-cloud-radiation treatments similar to those employed by climate models. In addition, the physics suite from a global climate model, Community Atmosphere Model version 5 (CAM5), has also been ported to WRF so that these parameterizations can be tested at various spatial scales and compared directly with field campaign data and other parameterizations commonly used by the mesoscale modeling community. 
In this study, we evaluate simple and complex treatments of the aerosol size distribution and secondary organic aerosols using the AMT and measurements collected during three field campaigns: the Megacities Initiative Local and Global Observations (MILAGRO) campaign conducted in the vicinity of Mexico City during March 2006, the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento, California during June 2010, and the California Nexus (CalNex) campaign conducted in southern California during May and June of 2010. For the aerosol size distribution, we compare the predictions from the GOCART bulk aerosol model, the MADE/SORGAM modal aerosol model, the Modal Aerosol Model (MAM) employed by CAM5, and the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC), which uses a sectional representation. For secondary organic aerosols, we compare simple fixed mass yield approaches with the numerically complex volatility basis set approach. All simulations employ the same emissions, meteorology, trace gas chemistry (except for that involving condensable organic species), and initial and boundary conditions. Performance metrics from the AMT are used to assess performance in terms of simulated mass, composition, size distribution (except for GOCART), and aerosol optical properties in relation to computational expense. In addition to statistical measures, qualitative differences among the different aerosol models over the computational domain are presented to examine variations in how aerosols age among the aerosol models.

  19. Material Model Evaluation of a Composite Honeycomb Energy Absorber

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.

    2012-01-01

    A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA were: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage-error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.

  20. A-Priori Tuning of Modified Magnussen Combustion Model

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    2016-01-01

    In the application of CFD to turbulent reacting flows, one of the main limitations on predictive accuracy is the chemistry model. Using a full or skeletal kinetics model may provide good predictive ability, albeit at considerable computational cost. Adding the ability to account for the interaction between turbulence and chemistry improves the overall fidelity of a simulation but adds to this cost. An alternative is the use of simple models, such as the Magnussen model, which has negligible computational overhead but lacks general predictive ability except for cases that can be tuned to the flow being solved. In this paper, a technique is described that allows the tuning of the Magnussen model for an arbitrary fuel and flow geometry without the need for experimental data for that particular case. The tuning is based on comparing the results of the Magnussen model and full finite-rate chemistry when applied to perfectly and partially stirred reactor simulations. In addition, a modification to the Magnussen model is proposed that allows the upper kinetic limit for the reaction rate to be set, giving better physical agreement with full kinetic mechanisms. This procedure allows a simple reacting model to be used in a predictive manner, and affords significant savings in computational cost.
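    The classic Magnussen (eddy-dissipation) rate, and the kind of kinetic cap the abstract proposes, can be sketched in a few lines. Parameter names and values below are ours, not the paper's:

```python
def magnussen_rate(Y_fuel, Y_ox, s, A, eps, k):
    """Eddy-dissipation (Magnussen) mean reaction rate: mixing-limited,
    proportional to the turbulent frequency eps/k and to the deficient
    reactant (fuel, or oxidizer scaled by the stoichiometric ratio s)."""
    return A * (eps / k) * min(Y_fuel, Y_ox / s)

def capped_rate(Y_fuel, Y_ox, s, A, eps, k, r_kin_max):
    """Modification in the spirit of the abstract: impose an upper kinetic
    limit r_kin_max on the mixing-limited rate (illustrative form only)."""
    return min(magnussen_rate(Y_fuel, Y_ox, s, A, eps, k), r_kin_max)
```

    The cap prevents the mixing-limited expression from exceeding what finite-rate kinetics would allow in regions of intense turbulence.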

  1. Vector-based model of elastic bonds for simulation of granular solids.

    PubMed

    Kuzkin, Vitaly A; Asonov, Igor E

    2012-11-01

    A model (further referred to as the V model) for the simulation of granular solids, such as rocks, ceramics, concrete, nanocomposites, and agglomerates, composed of bonded particles (rigid bodies), is proposed. It is assumed that the bonds, usually representing some additional gluelike material connecting the particles, give rise to both forces and torques acting on the particles. Vectors rigidly connected with the particles are used to describe the deformation of a single bond. The expression for the potential energy of the bond and the corresponding expressions for forces and torques are derived. Formulas connecting the parameters of the model with the longitudinal, shear, bending, and torsional stiffnesses of the bond are obtained. It is shown that the model makes it possible to describe any values of the bond stiffnesses exactly; that is, the model is applicable to bonds with arbitrary length/thickness ratio. Two different calibration procedures, depending on the bond length/thickness ratio, are proposed. It is shown that the parameters of the model can be chosen so that under small deformations the bond is equivalent to a Bernoulli-Euler beam, a Timoshenko beam, or a short cylinder connecting the particles. Simple analytical expressions relating the parameters of the V model to the geometrical and mechanical characteristics of the bond are derived. Two simple examples of computer simulation of thin granular structures using the V model are given.

  2. A simple model for constant storage modulus of poly (lactic acid)/poly (ethylene oxide)/carbon nanotubes nanocomposites at low frequencies assuming the properties of interphase regions and networks.

    PubMed

    Zare, Yasser; Rhim, Sungsoo; Garmabi, Hamid; Rhee, Kyong Yop

    2018-04-01

    The networks of nanoparticles in nanocomposites cause solid-like behavior, demonstrated by a constant storage modulus at low frequencies. This study examines the storage modulus of poly (lactic acid)/poly (ethylene oxide)/carbon nanotube (CNT) nanocomposites. The experimental data for the storage modulus in the plateau region are obtained by a frequency sweep test. In addition, a simple model is developed to predict the constant storage modulus assuming the properties of the interphase regions and the CNT networks. The model calculations are compared with the experimental results, and parametric analyses are applied to validate the predictability of the developed model. The calculations agree well with the experimental data at all polymer and CNT concentrations. Moreover, all parameters modulate the constant storage modulus in a physically reasonable way. The percentage of networked CNT, the modulus of the networks, and the thickness and modulus of the interphase regions directly govern the storage modulus of the nanocomposites. The outputs reveal the important role of the interphase properties in the storage modulus.

  3. Prediction and measurements of vibrations from a railway track lying on a peaty ground

    NASA Astrophysics Data System (ADS)

    Picoux, B.; Rotinat, R.; Regoin, J. P.; Le Houédec, D.

    2003-10-01

    This paper introduces a two-dimensional model for the response of the ground surface to vibrations generated by railway traffic. A semi-analytical wave propagation model is introduced, subjected to a set of harmonic moving loads and based on a calculation method for the dynamic stiffness matrix of the ground. In order to model a complete railway system, the effect of a simple track model is taken into account, including rails, sleepers, and ballast, specially designed for the study of low vibration frequencies. Priority has been given to a simple formulation based on the principle of spatial Fourier transforms, compatible with good numerical efficiency while providing quick solutions. In addition, in situ measurements for a soft soil near a railway track were carried out and will be used to validate the numerical implementation. The numerical and experimental results constitute a significant body of useful data to, on the one hand, characterize the response of the track environment and, on the other hand, assess the influence of train speed and weight on the behaviour of the structure.

  4. Analytical solution for shear bands in cold-rolled 1018 steel

    NASA Astrophysics Data System (ADS)

    Voyiadjis, George Z.; Almasri, Amin H.; Faghihi, Danial; Palazotto, Anthony N.

    2012-06-01

    Cold-rolled 1018 (CR-1018) carbon steel is well known for its susceptibility to adiabatic shear banding under dynamic loading. Analysis of these localizations depends strongly on the selection of the constitutive model. To address this issue, a constitutive model that takes temperature and strain-rate effects into account is proposed. The model is motivated by two physically based models: the Zerilli-Armstrong and the Voyiadjis-Abed models. This material model, however, incorporates a simple softening term that is capable of simulating the softening behavior of CR-1018 steel. Instability, localization, and evolution of adiabatic shear bands are discussed and presented graphically. In addition, the effect of hydrostatic pressure is illustrated.

  5. Phobic, panic, and major depressive disorders and the five-factor model of personality.

    PubMed

    Bienvenu, O J; Nestadt, G; Samuels, J F; Costa, P T; Howard, W T; Eaton, W W

    2001-03-01

    This study investigated five-factor model personality traits in anxiety (simple phobia, social phobia, agoraphobia, and panic disorder) and major depressive disorders in a population-based sample. In the Baltimore Epidemiologic Catchment Area Follow-up Study, psychiatrists administered the Schedules for Clinical Assessment in Neuropsychiatry to 333 adult subjects who also completed the Revised NEO Personality Inventory. All of the disorders except simple phobia were associated with high neuroticism. Social phobia and agoraphobia were associated with low extraversion. In addition, lower-order facets of extraversion, agreeableness, and conscientiousness were associated with certain disorders (i.e., low positive emotions in panic disorder; low trust and compliance in certain phobias; and low competence, achievement striving, and self-discipline in several disorders). This study emphasizes the utility of lower-order personality assessments and underscores the need for further research on personality/psychopathology etiologic relationships.

  6. Reflection of a polarized light cone

    NASA Astrophysics Data System (ADS)

    Brody, Jed; Weiss, Daniel; Berland, Keith

    2013-01-01

    We introduce a visually appealing experimental demonstration of Fresnel reflection. In this simple optical experiment, a polarized light beam travels through a high numerical-aperture microscope objective, reflects off a glass slide, and travels back through the same objective lens. The return beam is sampled with a polarizing beam splitter and produces a surprising geometric pattern on an observation screen. Understanding the origin of this pattern requires careful attention to geometry and an understanding of the Fresnel coefficients for S and P polarized light. We demonstrate that in addition to a relatively simple experimental implementation, the shape of the observed pattern can be computed both analytically and by using optical modeling software. The experience of working through complex mathematical computations and demonstrating their agreement with a surprising experimental observation makes this a highly educational experiment for undergraduate optics or advanced-lab courses. It also provides a straightforward yet non-trivial system for teaching students how to use optical modeling software.
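    The four-lobed pattern can be reproduced with a short numerical model. A paraxial sketch, using the sign convention in which rs + rp vanishes at normal incidence; the exact prefactors depend on the optical layout, so treat this as illustrative rather than the paper's computation:

```python
import numpy as np

def fresnel_rs_rp(theta_i, n1=1.0, n2=1.5):
    """Fresnel amplitude reflection coefficients for s and p polarization
    (convention chosen so that rs + rp = 0 at normal incidence)."""
    theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
         (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n2 * np.cos(theta_i) - n1 * np.cos(theta_t)) / \
         (n2 * np.cos(theta_i) + n1 * np.cos(theta_t))
    return rs, rp

def crossed_intensity(theta_i, phi, n2=1.5):
    """Leakage through a crossed analyzer for a cone ray at incidence theta_i
    and azimuth phi: proportional to (rs + rp)^2 sin^2(phi) cos^2(phi),
    i.e. four lobes at 45 degrees to the input polarization."""
    rs, rp = fresnel_rs_rp(theta_i, n2=n2)
    return ((rs + rp) * np.sin(phi) * np.cos(phi)) ** 2
```

    Because rs and rp differ away from normal incidence, the outer rays of the high-NA cone leak through the analyzer while the center stays dark, producing the lobed pattern on the screen.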

  7. Detonation Product EOS Studies: Using ISLS to Refine Cheetah

    NASA Astrophysics Data System (ADS)

    Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.

    2002-07-01

    Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.

  8. Simple spatial scaling rules behind complex cities.

    PubMed

    Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2017-11-28

    Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model based mainly on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit within a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.

  9. Mechano-genetic DNA hydrogels as a simple, reconstituted model to probe the effect of active fluctuations on gene transcription

    NASA Astrophysics Data System (ADS)

    Nguyen, Dan; Saleh, Omar

    Active fluctuations - non-directed fluctuations attributable, not to thermal energy, but to non-equilibrium processes - are thought to influence biology by increasing the diffusive motion of biomolecules. Dense DNA regions within cells (i.e. chromatin) are expected to exhibit such phenomena, as they are cross-linked networks that continually experience propagating forces arising from dynamic cellular activity. Additional agitation within these gene-encoding DNA networks could have potential genetic consequences. By changing the local mobility of transcriptional machinery and regulatory proteins towards/from their binding sites, and thereby influencing transcription rates, active fluctuations could prove to be a physical means of modulating gene expression. To begin probing this effect, we construct genetic DNA hydrogels, as a simple, reconstituted model of chromatin, and quantify transcriptional output from these hydrogels in the presence/absence of active fluctuations.

  10. Connections between survey calibration estimators and semiparametric models for incomplete data

    PubMed Central

    Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.

    2012-01-01

    Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
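    The calibration step itself is compact. A minimal sketch with a single auxiliary variable whose population total is known (GREG-type linear calibration; the data are hypothetical):

```python
import numpy as np

def ht_total(d, y):
    """Horvitz-Thompson estimate of a population total from design weights d."""
    return float(np.dot(d, y))

def calibrate_weights(d, x, X_total):
    """Linear (GREG) calibration: adjust design weights d so the weighted
    total of the auxiliary variable x reproduces its known population
    total X_total, moving each weight in proportion to its x value."""
    lam = (X_total - np.dot(d, x)) / np.dot(d, x * x)
    return d * (1.0 + lam * x)

d = np.array([10.0, 10.0, 20.0, 20.0])   # hypothetical design weights
x = np.array([1.0, 2.0, 1.0, 3.0])       # auxiliary variable
w = calibrate_weights(d, x, X_total=120.0)
```

    By construction the calibrated weights satisfy the benchmark constraint exactly, while remaining close to the design weights; outcome totals computed with w gain precision to the extent that y is correlated with x.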

  11. Model of ballistic targets' dynamics used for trajectory tracking algorithms

    NASA Astrophysics Data System (ADS)

    Okoń-Fąfara, Marta; Kawalec, Adam; Witczak, Andrzej

    2017-04-01

    Only a few ballistic object tracking algorithms are known. To develop and further test such algorithms, it is necessary to implement a dynamics model of the objects that is as simple and reliable as possible. The article presents the dynamics model of a tactical ballistic missile (TBM) covering the three stages of flight: the boost stage and two passive stages, the ascending one and the descending one. Additionally, the procedure of transformation from the local coordinate system to the global and the polar, radar-oriented systems is presented. The prepared theoretical data may be used to determine the tracking algorithm parameters and for its further verification.
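    The three stages can be illustrated with a point-mass sketch: constant thrust along a fixed launch direction during boost, then gravity-only ascending and descending passive flight. Drag, Earth curvature, and staging details are neglected, and all parameter values are hypothetical:

```python
from math import cos, sin, radians

def simulate_tbm(boost_accel, boost_time, launch_angle_deg, dt=0.05, t_max=2000.0):
    """Point-mass TBM trajectory sketch: thrust acceleration along a fixed
    launch direction during boost, then ballistic flight under gravity
    (ascending and descending passive stages). Returns (x, y) samples."""
    g = 9.81
    ang = radians(launch_angle_deg)
    x = y = vx = vy = 0.0
    t = 0.0
    traj = [(x, y)]
    while t < t_max:
        ax = boost_accel * cos(ang) if t < boost_time else 0.0
        ay = (boost_accel * sin(ang) if t < boost_time else 0.0) - g
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
        traj.append((x, y))
        if y < 0.0 and t > boost_time:  # impact after boost
            break
    return traj
```

    Such synthetic trajectories, transformed into radar-oriented coordinates, are the kind of reference data a tracking filter can be tuned against.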

  12. Fatigue and damage tolerance scatter models

    NASA Astrophysics Data System (ADS)

    Raikher, Veniamin L.

    1994-09-01

    Effective Total Fatigue Life and Crack Growth Scatter Models are proposed. The first is based on the power form of the Wohler curve, the dependence of fatigue scatter on the mean life value, the influence of the cycle stress ratio on fatigue scatter, and a validated description of the influence of mean stress on mean fatigue life. The second additionally uses a fracture mechanics approach, the assumption of pre-existing initial damage, and the Paris equation. Simple formulas are derived for the configurations of both models. A preliminary identification of the model parameters is performed on the basis of experimental data. Some new and important results for fatigue and crack growth scatter characteristics are obtained.
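    The crack-growth side of such a model rests on integrating the Paris equation. A numerical sketch with hypothetical material constants, which can be checked against the closed-form integral for m ≠ 2:

```python
from math import pi, sqrt

def paris_life(a0, af, C, m, dsigma, Y=1.0, steps=100_000):
    """Cycles to grow a crack from length a0 to af under the Paris equation
    da/dN = C * (dK)^m, with stress-intensity range
    dK = Y * dsigma * sqrt(pi * a) (midpoint-rule integration of
    dN = da / (C * dK^m))."""
    da = (af - a0) / steps
    a = a0 + 0.5 * da
    N = 0.0
    for _ in range(steps):
        dK = Y * dsigma * sqrt(pi * a)
        N += da / (C * dK ** m)
        a += da
    return N

# hypothetical values: a in m, dsigma in MPa, C in (m/cycle)/(MPa*sqrt(m))^m
N = paris_life(a0=1e-3, af=1e-2, C=1e-11, m=3.0, dsigma=100.0)
```

    Scatter in the initial damage size a0 then maps directly into scatter in the predicted life N, which is the mechanism the second model exploits.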

  13. A method for three-dimensional modeling of wind-shear environments for flight simulator applications

    NASA Technical Reports Server (NTRS)

    Bray, R. S.

    1984-01-01

    A computational method for modeling severe wind shears of the type that have been documented during severe convective atmospheric conditions is offered for use in research and training flight simulation. The procedure was developed with the objectives of operational flexibility and minimum computer load. From one to five simple downburst wind models can be configured and located to produce the wind field desired for specific simulated flight scenarios. A definition of related turbulence parameters is offered as an additional product of the computations. The use of the method to model several documented examples of severe wind shear is demonstrated.

  14. A simple homogeneous model for regular and irregular metallic wire media samples

    NASA Astrophysics Data System (ADS)

    Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.

    2018-02-01

    To simplify the solution of electromagnetic problems with wire media samples, it is reasonable to treat them as samples of a homogeneous material without spatial dispersion. Accounting for spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult, especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simple heuristic model of wire media samples shaped as bricks. Our model covers wire media of both regularly and irregularly stretched wires.

  15. A Nakanishi-based model illustrating the covariant extension of the pion GPD overlap representation and its ambiguities

    NASA Astrophysics Data System (ADS)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2018-05-01

    A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.

  16. Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications.

    PubMed

    Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres

    2016-01-01

    We present the results of a gamified mobile device arithmetic application which allowed us to collect a vast amount of data on simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (i.e., 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted for by the fact that it is easier to perform fewer additions of the largest number (i.e., 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (e.g., "seis por cuatro es veinticuatro", "six times four is twenty-four") are performed faster. Above and beyond these results, our study bears an important practical conclusion, as proof of concept, that participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format.

  17. Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications

    PubMed Central

    Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A.; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres

    2016-01-01

    We present the results of a gamified mobile device arithmetic application which allowed us to collect a vast amount of data on simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (i.e., 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted for by the fact that it is easier to perform fewer additions of the largest number (i.e., 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (e.g., "seis por cuatro es veinticuatro", "six times four is twenty-four") are performed faster. Above and beyond these results, our study bears an important practical conclusion, as proof of concept, that participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format. PMID:28033357

  18. A simple derivation for amplitude and time period of charged particles in an electrostatic bathtub potential

    NASA Astrophysics Data System (ADS)

    Prathap Reddy, K.

    2016-11-01

    An ‘electrostatic bathtub potential’ is defined and analytical expressions for the time period and amplitude of charged particles in this potential are obtained and compared with simulations. These kinds of potentials are encountered in linear electrostatic ion traps, where the potential along the axis appears like a bathtub. Ion traps are used in basic physics research and mass spectrometry to store ions; these stored ions make oscillatory motion within the confined volume of the trap. Usually these traps are designed and studied using ion optical software, but in this work the bathtub potential is reproduced by making two simple modifications to the harmonic oscillator potential. The addition of a linear ‘k₁|x|’ potential makes the simple harmonic potential curve steeper with a sharper turn at the origin, while the introduction of a finite-length zero potential region at the centre reproduces the flat region of the bathtub curve. This whole exercise of modelling a practical experimental situation in terms of a well-known simple physics problem may generate interest among readers.
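    The period in such a piecewise potential follows from the energy integral T = 4 ∫₀^{x_t} dx / v(x), with x_t the turning point. A numerical sketch (our parameter choices, not the paper's):

```python
from math import sqrt, pi

def bathtub_V(x, k1=1.0, k2=1.0, L=1.0):
    """Flat central region of width L, with linear (k1) plus harmonic (k2) walls."""
    u = abs(x) - 0.5 * L
    return 0.0 if u <= 0.0 else k1 * u + 0.5 * k2 * u * u

def turning_point(E, V, x_hi=1e3):
    """Rightmost x with V(x) = E, found by bisection (V is monotone for x > 0)."""
    lo, hi = 0.0, x_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if V(mid) < E:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def period(E, V, m=1.0, steps=200_000):
    """Oscillation period from T = 4 * integral_0^{x_t} dx / sqrt(2(E - V)/m),
    evaluated with the midpoint rule."""
    xt = turning_point(E, V)
    h = xt / steps
    T = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        T += h / sqrt(2.0 * (E - V(x)) / m)
    return 4.0 * T
```

    Setting k1 = 0 and L = 0 recovers the harmonic oscillator, whose period 2π√(m/k2) is energy-independent; the flat region adds the free-flight transit time 2L/√(2E/m) per period, which is the signature of the bathtub shape.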

  19. When Practice Doesn't Lead to Retrieval: An Analysis of Children's Errors with Simple Addition

    ERIC Educational Resources Information Center

    de Villiers, Celéste; Hopkins, Sarah

    2013-01-01

    Counting strategies initially used by young children to perform simple addition are often replaced by more efficient counting strategies, decomposition strategies and rule-based strategies until most answers are encoded in memory and can be directly retrieved. Practice is thought to be the key to developing fluent retrieval of addition facts. This…

  20. Lateral interactions and non-equilibrium in surface kinetics

    NASA Astrophysics Data System (ADS)

    Menzel, Dietrich

    2016-08-01

    Work modelling reactions between surface species frequently uses Langmuir kinetics, assuming that the layer is in internal equilibrium and that the chemical potential of adsorbates corresponds to that of an ideal gas. Coverage dependences of reacting species and of site blocking are usually treated with simple power laws (linear in the simplest case), neglecting the fact that lateral interactions are strong in adsorbate and co-adsorbate layers and may influence kinetics considerably. My research group has in the past investigated many co-adsorbate systems and simple reactions in them. We have collected a number of examples where strong deviations from simple coverage dependences exist, in blocking, promoting, and selecting reactions. Interactions can range from those between next neighbors to larger distances, and can be quite complex. In addition, internal equilibrium in the layer, as well as equilibrium distributions over product degrees of freedom, can be violated. The latter effect leads to non-equipartition of energy over molecular degrees of freedom (for products) or non-equal response to those of reactants. While such behavior can usually be described by dynamic or kinetic models, the deeper reasons require detailed theoretical analysis. Here, a selection of such cases is reviewed to exemplify these points.

  1. A scenario and forecast model for Gulf of Mexico hypoxic area and volume

    USGS Publications Warehouse

    Scavia, Donald; Evans, Mary Anne; Obenour, Daniel R.

    2013-01-01

    For almost three decades, the relative size of the hypoxic region on the Louisiana-Texas continental shelf has drawn scientific and policy attention. During that time, both simple and complex models have been used to explore hypoxia dynamics and to provide management guidance relating the size of the hypoxic zone to key drivers. Throughout much of that development, analyses had to accommodate an apparent change in hypoxic sensitivity to loads and often cull observations due to anomalous meteorological conditions. Here, we describe an adaptation of our earlier, simple biophysical model, calibrated to revised hypoxic area estimates and new hypoxic volume estimates through Bayesian estimation. This application eliminates the need to cull observations and provides revised hypoxic extent estimates with uncertainties, corresponding to different nutrient loading reduction scenarios. We compare guidance from this model application, suggesting an approximately 62% nutrient loading reduction is required to reduce Gulf hypoxia to the Action Plan goal of 5,000 km², to that of previous applications. In addition, we describe, for the first time, the corresponding response of hypoxic volume. We also analyze model results to test for increasing system sensitivity to hypoxia formation, but find no strong evidence of such change.

  2. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095

  3. Effect of misspecification of gene frequency on the two-point LOD score.

    PubMed

    Pal, D K; Durner, M; Greenberg, D A

    2001-11-01

    In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed model for gene frequency and inheritance is misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models were analysed assuming a single locus with both the correct and incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.
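    For intuition, the two-point LOD statistic itself reduces to a one-line likelihood ratio in the simplest phase-known, fully informative case (the study's simulations, of course, evaluate full pedigree likelihoods under the assumed trait model):

```python
from math import log10

def two_point_lod(theta, n_recomb, n_nonrecomb):
    """Two-point LOD score for phase-known, fully informative meioses:
    LOD(theta) = log10[ theta^R * (1 - theta)^NR / 0.5^(R + NR) ]
    for R recombinant and NR non-recombinant meioses at recombination
    fraction theta."""
    n = n_recomb + n_nonrecomb
    return (n_recomb * log10(theta) + n_nonrecomb * log10(1.0 - theta)
            - n * log10(0.5))
```

    Misspecified trait-model parameters (such as gene frequency) enter only through the inferred recombinant/non-recombinant status in real pedigrees, which is why their effect on the maximum LOD can be modest.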

  4. Numerical model for the thermal behavior of thermocline storage tanks

    NASA Astrophysics Data System (ADS)

    Ehtiwesh, Ismael A. S.; Sousa, Antonio C. M.

    2018-03-01

    Energy storage is a critical factor in the advancement of solar thermal power systems for the sustained delivery of electricity. In addition, the incorporation of thermal energy storage into the operation of concentrated solar power systems (CSPs) offers the potential of delivering electricity without fossil-fuel backup even during peak demand, independent of weather conditions and daylight. Despite this potential, some areas of the design and performance of thermocline systems still require further attention for future incorporation in commercial CSPs, particularly their operation and control. Therefore, the present study aims to develop a simple but efficient numerical model to allow the comprehensive analysis of thermocline storage systems, with the aim of better understanding their dynamic temperature response. The validation results, despite the simplifying assumptions of the numerical model, agree well with the experiments for the time evolution of the thermocline region. Three different cases are considered to test the versatility of the numerical model; for the particular case of a storage tank with a top round impingement inlet, a simple analytical model was developed to take into consideration the increased turbulence level in the mixing region. The numerical predictions for the three cases are in generally good agreement with the experimental results.

  5. Dynamical heterogeneities and mechanical non-linearities: Modeling the onset of plasticity in polymer in the glass transition.

    PubMed

    Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H

    2017-12-27

    In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics by assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the onset of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated during and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, to a very good approximation, under the simple assumption that the strain rate is constant.
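    An Eyring-type stress dependence of a local relaxation time can be sketched as follows; the activation volume and temperature below are illustrative values chosen for the example, not the paper's parameters:

```python
# Eyring-type stress acceleration of a relaxation time:
# tau(sigma) = tau0 * x / sinh(x), with x = sigma * v / (2 k T),
# which reduces to tau0 at zero stress and decreases as stress grows.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def eyring_tau(tau0, sigma, v=5e-27, T=350.0):
    """Stress-accelerated relaxation time (v: activation volume, m^3)."""
    x = sigma * v / (2 * K_B * T)
    if x == 0:
        return tau0
    return tau0 * x / math.sinh(x)

print(eyring_tau(1.0, 0.0))        # unstressed: tau0 unchanged -> 1.0
print(eyring_tau(1.0, 1e7) < 1.0)  # stress shortens the relaxation time
```

    Because the acceleration is non-linear in the local stress, slow regions are sped up more strongly than fast ones, which is one intuition for the narrowing of the relaxation-time distribution reported above.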

  6. Modelling stream aquifer seepage in an alluvial aquifer: an improved loosing-stream package for MODFLOW

    NASA Astrophysics Data System (ADS)

    Osman, Yassin Z.; Bruen, Michael P.

    2002-07-01

    Seepage from a stream, which partially penetrates an unconfined alluvial aquifer, is studied for the case when the water table falls below the streambed level. Inadequacies are identified in current modelling approaches to this situation. A simple and improved method of incorporating such seepage into groundwater models is presented. This considers the effect on seepage flow of suction in the unsaturated part of the aquifer below a disconnected stream and allows for the variation of seepage with water table fluctuations. The suggested technique is incorporated into the saturated code MODFLOW and is tested by comparing its predictions with those of SWMS_2D, a widely used variably saturated model that simulates water flow and solute transport in two-dimensional variably saturated media. Comparisons are made of both seepage flows and local mounding of the water table. The suggested technique compares very well with the results of the variably saturated model simulations. Most currently used approaches are shown to underestimate the seepage and the associated local water table mounding, sometimes substantially. The proposed method is simple, easy to implement and requires only a small amount of additional data about the aquifer hydraulic properties.

  7. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  8. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
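    The fixation rule can be illustrated with a toy one-dimensional version — a simplified greedy stand-in for the paper's sequential model, in which the Gaussian resolution fall-off, the grid, and the variance bookkeeping are all assumptions of this sketch:

```python
# Greedy "fixate where uncertainty is reduced most" rule on a 1-D grid.
import numpy as np

def resolution(dist, scale=2.0):
    # Fall-off in resolution from the fovea to the periphery.
    return np.exp(-(dist / scale) ** 2)

def next_fixation(variance):
    """Pick the location whose fixation removes the most total variance."""
    idx = np.arange(len(variance))
    # gains[f] = total variance removed if we fixate location f
    gains = [np.sum(variance * resolution(np.abs(idx - f))) for f in idx]
    return int(np.argmax(gains))

def fixate(variance, f):
    """Update the uncertainty map after fixating location f."""
    idx = np.arange(len(variance))
    return variance * (1.0 - resolution(np.abs(idx - f)))

v = np.ones(20)
v[15] = 10.0           # one highly uncertain region
f1 = next_fixation(v)  # first fixation targets the uncertain region
print(f1)              # -> 15
v = fixate(v, f1)
print(next_fixation(v) != f1)  # second fixation moves elsewhere -> True
```

    The greedy step is the core of the rule; the full model additionally reconstructs the stimulus from all fixations made so far.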

  9. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be designed and learned extremely easily and efficiently. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
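    A single PCA stage of such a filter bank can be sketched in NumPy as follows; the patch size, filter count, and random test images are arbitrary illustrative choices, not the paper's settings:

```python
# One PCA stage of a PCANet-style filter bank: collect mean-removed
# patches, then take the leading patch eigenvectors as filters.
import numpy as np

def pca_filters(images, k=7, n_filters=4):
    """Learn n_filters k-by-k filters as leading patch eigenvectors."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())  # remove the patch mean
    X = np.array(patches)                     # (n_patches, k*k)
    # Leading right singular vectors = leading covariance eigenvectors.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, k, k)

rng = np.random.default_rng(0)
imgs = rng.standard_normal((5, 16, 16))
filters = pca_filters(imgs, k=7, n_filters=4)
print(filters.shape)  # -> (4, 7, 7)
```

    In the full architecture the learned filters are convolved with the input, a second PCA stage repeats the process on the filter responses, and binary hashing plus blockwise histograms produce the final feature vector.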

  10. PyFolding: Open-Source Graphing, Simulation, and Analysis of the Biophysical Properties of Proteins.

    PubMed

    Lowe, Alan R; Perez-Riba, Albert; Itzhaki, Laura S; Main, Ewan R G

    2018-02-06

    For many years, curve-fitting software has been heavily utilized to fit simple models to various types of biophysical data. Although such software packages are easy to use for simple functions, they are often expensive and present substantial impediments to applying more complex models or for the analysis of large data sets. One field that is reliant on such data analysis is the thermodynamics and kinetics of protein folding. Over the past decade, increasingly sophisticated analytical models have been generated, but without simple tools to enable routine analysis. Consequently, users have needed to generate their own tools or otherwise find willing collaborators. Here we present PyFolding, a free, open-source, and extensible Python framework for graphing, analysis, and simulation of the biophysical properties of proteins. To demonstrate the utility of PyFolding, we have used it to analyze and model experimental protein folding and thermodynamic data. Examples include: 1) multiphase kinetic folding fitted to linked equations, 2) global fitting of multiple data sets, and 3) analysis of repeat protein thermodynamics with Ising model variants. Moreover, we demonstrate how PyFolding is easily extensible to novel functionality beyond applications in protein folding via the addition of new models. Example scripts to perform these and other operations are supplied with the software, and we encourage users to contribute notebooks and models to create a community resource. Finally, we show that PyFolding can be used in conjunction with Jupyter notebooks as an easy way to share methods and analysis for publication and among research teams. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  11. Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.

    PubMed

    Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J

    2015-02-01

    The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) an LR model that assumed linearity and additivity (simple LR model), (2) an LR model incorporating restricted cubic splines and interactions (flexible LR model), (3) a support vector machine, (4) a random forest, and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model in terms of the aforementioned measures or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Simple and efficient self-healing strategy for damaged complex networks

    NASA Astrophysics Data System (ADS)

    Gallos, Lazaros K.; Fefferman, Nina H.

    2015-11-01

    The process of destroying a complex network through node removal has been the subject of extensive interest and research. Node loss typically leaves the network disintegrated into many small and isolated clusters. Here we show that these clusters typically remain close to each other and we suggest a simple algorithm that is able to reverse the inflicted damage by restoring the network's functionality. After damage, each node decides independently whether to create a new link depending on the fraction of neighbors it has lost. In addition to relying only on local information, where nodes do not need knowledge of the global network status, we impose the additional constraint that new links should be as short as possible (i.e., that the new edge completes a shortest possible new cycle). We demonstrate that this self-healing method operates very efficiently, both in model and real networks. For example, after removing the most connected airports in the USA, the self-healing algorithm rejoined almost 90% of the surviving airports.
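    A minimal sketch of the local healing rule described above, assuming a hypothetical threshold on the fraction of lost neighbors and using BFS hop distance so that each new edge closes the shortest possible new cycle:

```python
# Local self-healing: a node that lost a large enough fraction of its
# neighbors links to its nearest current non-neighbor, so the new edge
# completes the shortest possible cycle. Threshold and tie-breaking
# are illustrative choices, not the authors' exact rule.
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src over the adjacency dict adj."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def self_heal(adj, lost, threshold=0.5):
    """adj: surviving adjacency (dict of sets); lost[n]: neighbors n lost."""
    for n in sorted(adj):
        n_lost = len(lost.get(n, ()))
        if n_lost and n_lost / (n_lost + len(adj[n])) >= threshold:
            d = bfs_dist(adj, n)
            # nearest reachable node that is not already a neighbor
            cands = [v for v in d if v != n and v not in adj[n]]
            if cands:
                v = min(cands, key=lambda v: (d[v], v))
                adj[n].add(v)
                adj[v].add(n)
    return adj

# A path 1-2-3-4-5 whose end nodes each lost a removed hub node 0.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
lost = {1: {0}, 5: {0}}
adj = self_heal(adj, lost)
print(sorted(adj[1]), sorted(adj[5]))  # -> [2, 3] [3, 4]
```

    Both damaged end nodes link to node 3, the nearest non-neighbor, turning the fragile path into a better-connected structure using only local information.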

  13. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. 
The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.

  14. Identification of cracks in thick beams with a cracked beam element model

    NASA Astrophysics Data System (ADS)

    Hou, Chuanchuan; Lu, Yong

    2016-12-01

    The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models formulated in terms of the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked-beam flexibility formulated by means of fracture mechanics; it takes into consideration shear deformation and the coupling between translational and longitudinal vibrations, and is thus particularly suitable for thick beams. The identification procedure employs a global search technique using Genetic Algorithms, and there is no restriction on the location, severity or number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.

  15. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports model weighting as a useful option for accounting for the wide range of model quality. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
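    The performance-weighted averaging at the core of REA can be illustrated with a deliberately reduced sketch: weights here are simply inverse absolute hindcast bias, whereas the actual method combines multiple variables, statistics, and reliability criteria:

```python
# Reduced REA-style weighting: models with smaller present-day bias
# against observations get larger weight in the projected change.
import numpy as np

def rea_weighted_change(changes, hindcasts, obs, eps=1e-6):
    """changes: per-model projected change; hindcasts: per-model
    present-day values compared against observations obs."""
    bias = np.abs(np.asarray(hindcasts, float) - obs)
    w = 1.0 / (bias + eps)   # better hindcast -> larger weight
    w /= w.sum()
    return float(np.sum(w * np.asarray(changes, float))), w

changes = [1.0, 2.0, 4.0]        # e.g. projected warming per model (degC)
hindcasts = [14.9, 15.0, 16.5]   # simulated present-day temperature
change, w = rea_weighted_change(changes, hindcasts, obs=15.0)
print(int(w.argmax()))  # -> 1: the least-biased model dominates
```

    With these illustrative numbers the weighted change lands essentially on the projection of the unbiased model, which is the intended behavior of performance-based weighting.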

  16. Coupling of rainfall-induced landslide triggering model with predictions of debris flow runout distances

    NASA Astrophysics Data System (ADS)

    Lehmann, Peter; von Ruette, Jonas; Fan, Linfeng; Or, Dani

    2014-05-01

    Rapid debris flows initiated by rainfall-induced shallow landslides present a highly destructive natural hazard in steep terrain. The impact and runout paths of debris flows depend on the volume, composition and initiation zone of the released material, and this information is required for accurate debris-flow predictions and hazard maps. For that purpose we couple the mechanistic 'Catchment-scale Hydro-mechanical Landslide Triggering' (CHLT) model, which computes the timing, location, and volume of landslides, with simple approaches to estimate debris-flow runout distances. The runout models were tested using two landslide inventories obtained in the Swiss Alps following prolonged rainfall events. The predicted runout distances were in good agreement with observations, confirming the utility of such simple models for landscape-scale estimates. In a next step, debris-flow paths were computed for landslides predicted with the CHLT model over a range of soil properties to explore their effect on runout distances. This combined approach offers a more complete spatial picture of shallow landslide and subsequent debris-flow hazards. The additional information provided by the CHLT model concerning the location, shape, soil type and water content of the released mass may also be incorporated into more advanced runout models to improve the prediction of the runout and impact of such abruptly released masses.

  17. An eco-hydrologic model of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.

    2012-03-01

    Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission and their consideration alongside climatic datasets. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear eco-hydrologic model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.

  18. An ecohydrological model of malaria outbreaks

    NASA Astrophysics Data System (ADS)

    Montosi, E.; Manzoni, S.; Porporato, A.; Montanari, A.

    2012-08-01

    Malaria is a geographically widespread infectious disease that is well known to be affected by climate variability at both seasonal and interannual timescales. In an effort to identify climatic factors that impact malaria dynamics, there has been considerable research focused on the development of appropriate disease models for malaria transmission driven by climatic time series. These analyses have focused largely on variation in temperature and rainfall as direct climatic drivers of malaria dynamics. Here, we further these efforts by considering additionally the role that soil water content may play in driving malaria incidence. Specifically, we hypothesize that hydro-climatic variability should be an important factor in controlling the availability of mosquito habitats, thereby governing mosquito growth rates. To test this hypothesis, we reduce a nonlinear ecohydrological model to a simple linear model through a series of consecutive assumptions and apply this model to malaria incidence data from three South African provinces. Despite the assumptions made in the reduction of the model, we show that soil water content can account for a significant portion of malaria's case variability beyond its seasonal patterns, whereas neither temperature nor rainfall alone can do so. Future work should therefore consider soil water content as a simple and computable variable for incorporation into climate-driven disease models of malaria and other vector-borne infectious diseases.

  19. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants

    PubMed Central

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-01-01

    Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier–Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. PMID:24664988

  20. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
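    The simple Monte Carlo step for establishing the interval probability levels might look like the following sketch, where the "model" is a stand-in scalar function rather than a ground-water flow code, and the parameter ranges and error level are invented for illustration:

```python
# Monte Carlo quantiles for confidence/prediction intervals: sample
# parameters over their extreme ranges, push them through the model,
# and take output percentiles. Prediction intervals also sample the
# random error in the dependent variable, which widens the interval.
import numpy as np

rng = np.random.default_rng(42)

def head(k, recharge):
    # Hypothetical scalar model output, e.g. head at an observation well.
    return recharge / k

n = 10_000
k = rng.uniform(0.5, 2.0, n)         # extreme range for conductivity
recharge = rng.uniform(0.8, 1.2, n)  # extreme range for recharge
outputs = head(k, recharge)

# Confidence interval from parameter uncertainty alone:
lo, hi = np.percentile(outputs, [2.5, 97.5])
print(lo < np.median(outputs) < hi)  # -> True

# Prediction interval: add sampled random error in the observations.
err = rng.normal(0.0, 0.3, n)
plo, phi = np.percentile(outputs + err, [2.5, 97.5])
print(phi - plo > hi - lo)  # random error widens the interval -> True
```

    This mirrors the qualitative finding of the record above: including random errors in the dependent variable on top of parameter uncertainty can considerably widen the prediction intervals.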

  1. Competition of simple and complex adoption on interdependent networks

    NASA Astrophysics Data System (ADS)

    Czaplicka, Agnieszka; Toral, Raul; San Miguel, Maxi

    2016-12-01

    We consider the competition of two mechanisms for adoption processes: a so-called complex threshold dynamics and a simple susceptible-infected-susceptible (SIS) model. Separately, these mechanisms lead, respectively, to first-order and continuous transitions between nonadoption and adoption phases. We consider two interconnected layers. While all nodes on the first layer follow the complex adoption process, all nodes on the second layer follow the simple adoption process. Coupling between the two adoption processes occurs as a result of the inclusion of some additional interconnections between layers. We find that the transition points and also the nature of the transitions are modified in the coupled dynamics. In the complex adoption layer, the critical threshold required for the spread of adoption increases with interlayer connectivity, whereas in an isolated single network it would decrease with average connectivity. In addition, the transition can become continuous depending on the detailed interlayer and intralayer connectivities. In the SIS layer, any interlayer connectivity leads to the extension of the adopter phase. Moreover, a new transition appears as a sudden drop in the fraction of adopters in the SIS layer. The main numerical findings are described by a mean-field-type analytical approach appropriately developed for the threshold-SIS coupled system.
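    The complex (threshold) adoption mechanism can be sketched as a synchronous update on a toy graph; the threshold value and graph below are illustrative, and the simple mechanism on the other layer would be a standard SIS infection/recovery update instead:

```python
# One synchronous step of threshold ("complex") adoption: a node
# adopts iff the fraction of its adopting neighbors reaches phi.
def threshold_step(adopted, adj, phi=0.5):
    new = adopted.copy()
    for n, neigh in adj.items():
        if neigh and sum(adopted[v] for v in neigh) / len(neigh) >= phi:
            new[n] = True
    return new

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
adopted = {0: True, 1: True, 2: False, 3: False}
adopted = threshold_step(adopted, adj)
print(adopted[2], adopted[3])  # -> True False: 2 adopts (2/3 >= 0.5),
                               # 3 must wait until 2 has adopted
```

    Iterating the step propagates adoption outward in waves, which is the cascade behavior behind the first-order transition of the threshold layer.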

  2. Tubular growth and bead formation in the lyotropic lamellar phase of a lipid.

    PubMed

    Bhatia, Tripta; Hatwalne, Yashodhan; Madhusudana, N V

    2015-07-28

    We use fluorescence confocal polarised microscopy (FCPM) to study tubular growth upon hydration of dry DOPC (1,2-dioleoyl-sn-glycero-3-phosphocholine) in water and water-glycerol mixtures. We have developed a model to relate the FCPM intensity profiles to the multilamellar structures of the tubules. Insertion of an additional patch inside a tubule produces a beaded structure, while a straight configuration is retained if the growth is on the outside. We use a simple model to suggest that reduction in overall curvature energy drives bead formation.

  3. Excess Claims and Data Trimming in the Context of Credibility Rating Procedures,

    DTIC Science & Technology

    1981-11-01

    Trimming in the Context of Credibility Rating Procedures, by Hans Bühlmann, Alois Gisler, William S. Jewell. 1. Motivation: In ratemaking and in experience... work on the ETH computer. 2. The Basic Model: Throughout the paper we work with the most simple model in the credibility... additional structure are summed up by stating that the density f_θ(x) has the following form: f_θ(x) = (1 - r)p_0(x/θ) + r p_θ(x). 3. The Basic Problem: As

  4. Minimal Unified Resolution to R_{K^{(*)}} and R(D^{(*)}) Anomalies with Lepton Mixing.

    PubMed

    Choudhury, Debajyoti; Kundu, Anirban; Mandal, Rusa; Sinha, Rahul

    2017-10-13

    It is a challenging task to explain, in terms of a simple and compelling new physics scenario, the intriguing discrepancies between the standard model expectations and the data for the neutral-current observables R_{K} and R_{K^{*}}, as well as the charged-current observables R(D) and R(D^{*}). We show that this can be achieved in an effective theory with only two unknown parameters. In addition, this class of models predicts some interesting signatures in the context of both B decays and high-energy collisions.

  5. Trajectory fitting in function space with application to analytic modeling of surfaces

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.

    1992-01-01

    A theory for representing a parameter-dependent function as a function trajectory is described. Additionally, a theory for determining a piecewise analytic fit to the trajectory is described. An example is given that illustrates the application of the theory to generating a smooth surface through a discrete set of input cross-section shapes. A simple procedure for smoothing in the parameter direction is discussed, and a computed example is given. Application of the theory to aerodynamic surface modeling is demonstrated by applying it to a blended wing-fuselage surface.

  6. Analytics For Distracted Driver Behavior Modeling in Dilemma Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jan-Mou; Malikopoulos, Andreas; Thakur, Gautam

    2014-01-01

    In this paper, we present the results obtained and insights gained through the analysis of TRB contest data. We used exploratory analysis, regression, and clustering models for gaining insights into driver behavior in a dilemma zone while driving under distraction. While simple exploratory analysis showed the distinguishing driver behavior patterns among different population groups in the dilemma zone, regression analysis showed statistically significant relationships between groups of variables. In addition to analyzing the contest data, we have also looked into the possible impact of distracted driving on fuel economy.

  7. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    NASA Astrophysics Data System (ADS)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.

  8. A Self-consistent Cloud Model for Brown Dwarfs and Young Giant Exoplanets: Comparison with Photometric and Spectroscopic Observations

    NASA Astrophysics Data System (ADS)

    Charnay, B.; Bézard, B.; Baudino, J.-L.; Bonnefoy, M.; Boccaletti, A.; Galicher, R.

    2018-02-01

    We developed a simple, physical, and self-consistent cloud model for brown dwarfs and young giant exoplanets. We compared different parametrizations for the cloud particle size, by fixing either particle radii or the mixing efficiency (parameter f_sed), or by estimating particle radii from simple microphysics. The cloud scheme with simple microphysics appears to be the best parametrization by successfully reproducing the observed photometry and spectra of brown dwarfs and young giant exoplanets. In particular, it reproduces the L–T transition, due to the condensation of silicate and iron clouds below the visible/near-IR photosphere. It also reproduces the reddening observed for low-gravity objects, due to an increase of cloud optical depth for low gravity. In addition, we found that the cloud greenhouse effect shifts chemical equilibrium, increasing the abundances of species stable at high temperature. This effect should significantly contribute to the strong variation of methane abundance at the L–T transition and to the methane depletion observed on young exoplanets. Finally, we predict the existence of a continuum of brown dwarfs and exoplanets for absolute J magnitude = 15–18 and J-K color = 0–3, due to the evolution of the L–T transition with gravity. This self-consistent model therefore provides a general framework to understand the effects of clouds and appears well-suited for atmospheric retrievals.

  9. 5 CFR 1315.17 - Formulas.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Daily simple interest formula. (1) To calculate daily simple interest the following formula may be used... a payment is due on April 1 and the payment is not made until April 11, a simple interest... equation calculates simple interest on any additional days beyond a monthly increment. (3) For example, if...
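
    The daily simple interest calculation described in this regulation can be sketched as follows. This is an illustrative sketch of the standard daily simple interest formula (principal × annual rate / 365 × days late), using the April 1/April 11 example from the excerpt; consult 5 CFR 1315.17 itself for the authoritative formula and terms:

    ```python
    def daily_simple_interest(principal, annual_rate, days_late):
        """Daily simple interest: principal * (annual rate / 365) * days late.
        Illustrative sketch of the kind of formula 5 CFR 1315.17 describes."""
        return principal * (annual_rate / 365.0) * days_late

    # Example: payment due April 1, not made until April 11 -> 10 days late.
    # Principal and rate below are illustrative placeholders.
    interest = daily_simple_interest(10_000.00, 0.05, 10)
    print(round(interest, 2))  # 13.7
    ```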

  10. A comparison of simple global kinetic models for coal devolatilization with the CPD model

    DOE PAGES

    Richards, Andrew P.; Fletcher, Thomas H.

    2016-08-01

    Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10^3 to 10^6 K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
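
    A one-step global devolatilization model of the kind compared here treats volatiles release as first order in the unreleased volatiles, with an Arrhenius rate constant. A minimal isothermal sketch (the rate parameters A, E, and the ultimate yield V* below are illustrative placeholders, not the fitted values from the paper):

    ```python
    import math

    def one_step_yield(T, t, A=1e5, E=5e4, R=8.314, V_ult=0.5):
        """Isothermal one-step model: dV/dt = k (V* - V), k = A exp(-E/RT),
        which integrates to V(t) = V* (1 - exp(-k t)).
        A (1/s), E (J/mol), and V_ult are illustrative placeholders."""
        k = A * math.exp(-E / (R * T))
        return V_ult * (1.0 - math.exp(-k * t))

    # Volatiles yield (mass fraction) after 0.1 s at 1600 K
    print(one_step_yield(1600.0, 0.1))
    ```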

  11. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
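
    Empirical capacity-loss models of the kind being coupled here are often Arrhenius-type power laws in charge throughput. A minimal illustrative sketch (the coefficients B, Ea, and z below are placeholders for fitted constants, not the paper's model):

    ```python
    import math

    def capacity_loss_pct(T_kelvin, ah_throughput,
                          B=30000.0, Ea=31500.0, z=0.55, R=8.314):
        """Empirical fade law: Q_loss(%) = B * exp(-Ea / (R*T)) * Ah^z.
        B, Ea (J/mol), and z are illustrative placeholders for constants
        that would be fitted to cycling data."""
        return B * math.exp(-Ea / (R * T_kelvin)) * ah_throughput ** z

    # Capacity loss after 2000 Ah of throughput at 25 C (298.15 K)
    print(capacity_loss_pct(298.15, 2000.0))
    ```

    Coupling such a law to a porous-electrode model, as the paper does, replaces the purely empirical state with physically resolved kinetics and balances.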

  12. Analysis of nitrogen cycling in a forest stream during autumn using a 15N-tracer addition

    Treesearch

    Jennifer L. Tank; Judy L. Meyer; Diane M. Sanzone; Patrick J. Mulholland; Jackson R. Webster; Bruce J. Peterson; Wilfred M. Wollheim; Norman E. Leonard

    2000-01-01

    We added 15NH4Cl over 6 weeks to Upper Ball Creek, a second-order deciduous forest stream in the Appalachian Mountains, to follow the uptake, spiraling, and fate of nitrogen in a stream food web during autumn. A priori predictions of N flow and retention were made using a simple food web mass balance model. Values of ...

  13. Topics in Statistical Calibration

    DTIC Science & Technology

    2014-03-27

    on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will...addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or...based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either

  14. Technical Report for the Period 1 October 1987 - 30 September 1989

    DTIC Science & Technology

    1990-03-01

    low pass filter results. -dt dt specifies the sampling rate in seconds. -gin specifies .w file (binary waveform data) input. -gout specifies .w file...waves arriving at moderate incidence angles, * high signal-to-noise ratio (SNR). The following assumptions are made, for simplicity: * additive, spatially uncorrelated noise, * simple signal model, free of refraction and scattering effects. This study is limited to the case of a plane incident P

  15. Mixed ice accretion on aircraft wings

    NASA Astrophysics Data System (ADS)

    Janjua, Zaid A.; Turnbull, Barbara; Hibberd, Stephen; Choi, Kwing-So

    2018-02-01

    Ice accretion is a problematic natural phenomenon that affects a wide range of engineering applications including power cables, radio masts, and wind turbines. Accretion on aircraft wings occurs when supercooled water droplets freeze instantaneously on impact to form rime ice or run back as water along the wing to form glaze ice. Most models to date have ignored the accretion of mixed ice, which is a combination of rime and glaze. A parameter we term the "freezing fraction" is defined as the fraction of a supercooled droplet that freezes on impact with the top surface of the accretion ice to explore the concept of mixed ice accretion. Additionally, we consider different "packing densities" of rime ice, mimicking the different bulk rime densities observed in nature. Ice accretion is considered in four stages: rime, primary mixed, secondary mixed, and glaze ice. Predictions match existing models and experimental data in the limiting rime and glaze cases. The mixed ice formulation however provides additional insight into the composition of the overall ice structure, which ultimately influences adhesion and ice thickness, and shows that for similar atmospheric parameter ranges, this simple mixed ice description leads to very different accretion rates. A simple one-dimensional energy balance was solved to show how this freezing fraction parameter increases with decrease in atmospheric temperature, with lower freezing fraction promoting glaze ice accretion.

  16. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics

    PubMed Central

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.

    2014-01-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672

  17. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics.

    PubMed

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P; Nordsletten, David A

    2014-06-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii-Newton-Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics.

  18. Graphene oxide caged in cellulose microbeads for removal of malachite green dye from aqueous solution.

    PubMed

    Zhang, Xiaomei; Yu, Hongwen; Yang, Hongjun; Wan, Yuchun; Hu, Hong; Zhai, Zhuang; Qin, Jieming

    2015-01-01

    A simple sol-gel method using non-toxic and cost-effective precursors has been developed to prepare graphene oxide (GO)/cellulose bead (GOCB) composites for removal of dye pollutants. Taking advantage of the combined benefits of GO and cellulose, the prepared GOCB composites exhibit excellent removal efficiency towards malachite green (>96%) and can be reused over 5 times through a simple filtration method. The high-decontamination performance of the GOCB system is strongly dependent on the encapsulation amount of GO, temperature and pH value. In addition, the adsorption behavior of this new adsorbent fits well with the Langmuir isotherm and pseudo-second-order kinetic model.
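
    The two models the adsorption data were fitted to have standard closed forms. A sketch of both, with illustrative parameter values (not the fitted constants from this study):

    ```python
    def langmuir(Ce, q_max, K_L):
        """Langmuir isotherm: q_e = q_max * K_L * Ce / (1 + K_L * Ce),
        where Ce is equilibrium concentration and q_max the monolayer capacity."""
        return q_max * K_L * Ce / (1.0 + K_L * Ce)

    def pseudo_second_order(t, q_e, k2):
        """Pseudo-second-order kinetics, integrated form:
        q_t = k2 * q_e**2 * t / (1 + k2 * q_e * t)."""
        return k2 * q_e ** 2 * t / (1.0 + k2 * q_e * t)

    # Illustrative values only (mg/L, mg/g, min)
    print(langmuir(Ce=50.0, q_max=30.0, K_L=0.1))        # 25.0
    print(pseudo_second_order(t=60.0, q_e=25.0, k2=0.01))
    ```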

  19. Simple animal models for amyotrophic lateral sclerosis drug discovery.

    PubMed

    Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre

    2016-08-01

    Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.

  20. Long-term Evaluation of Landuse Changes On Landscape Water Balance - A Case Study From North-east Germany

    NASA Astrophysics Data System (ADS)

    Wegehenkel, M.

    In this paper, long-term effects of different afforestation scenarios on landscape water balance will be analyzed taking into account the results of a regional case study. This analysis is based on using a GIS-coupled simulation model for the spatially distributed calculation of water balance. For this purpose, the modelling system THESEUS with a simple GIS-interface will be used. To take into account the special case of change in forest cover proportion, THESEUS was enhanced with a simple forest growth model. In the regional case study, model runs will be performed using a detailed spatial data set from North-East Germany. This data set covers a mesoscale catchment located in the moraine landscape of North-East Germany. Based on this data set, the influence of the actual landuse and of different landuse change scenarios on water balance dynamics will be investigated taking into account the spatially distributed modelling results from THESEUS. The model was tested using different experimental data sets from field plots as well as observed catchment discharge. In addition to such conventional validation techniques, remote sensing data were used to check the simulated regional distribution of water balance components like evapotranspiration in the catchment.

  1. A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof

    NASA Astrophysics Data System (ADS)

    Sinha, Ashok

    2016-03-01

    An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by LHC experimental programs, while the possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology including the neutrino masses and mixing; origin of the proton mass and the mass-difference between the proton and the neutron; the big bang and cosmological Inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternion and rotation in the six-dimensional space of the elementary particle interaction-space - or, equivalently, in six-dimensional spacetime - is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters leading to an elegant and symmetrical diagram is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This Abstract represents some results of the Author's Independent Theoretical Research in Particle Physics, with possible connection to the Superstring Theory. However, only very elementary mathematics and physics is used in my presentation.

  2. Liquid part of the phase diagram and percolation line for two-dimensional Mercedes-Benz water.

    PubMed

    Urbic, T

    2017-09-01

    Monte Carlo simulations and Wertheim's thermodynamic perturbation theory (TPT) are used to predict the phase diagram and percolation curve for the simple two-dimensional Mercedes-Benz (MB) model of water. The MB model of water is quite popular for explaining water properties, but the phase diagram has not been reported till now. In the MB model, water molecules are modeled as two-dimensional Lennard-Jones disks, with three orientation-dependent hydrogen-bonding arms, arranged as in the MB logo. The liquid part of the phase space is explored using grand canonical Monte Carlo simulations and two versions of Wertheim's TPT for associative fluids, which have been used before to predict the properties of the simple MB model. We find that the theory reproduces well the physical properties of hot water but is less successful at capturing the more structured hydrogen bonding that occurs in cold water. In addition to reporting the phase diagram and percolation curve of the model, it is shown that the improved TPT predicts the phase diagram rather well, while the standard one predicts a phase transition at lower temperatures. For the percolation line, both versions have problems predicting the correct position of the line at high temperatures.

  3. Liquid part of the phase diagram and percolation line for two-dimensional Mercedes-Benz water

    NASA Astrophysics Data System (ADS)

    Urbic, T.

    2017-09-01

    Monte Carlo simulations and Wertheim's thermodynamic perturbation theory (TPT) are used to predict the phase diagram and percolation curve for the simple two-dimensional Mercedes-Benz (MB) model of water. The MB model of water is quite popular for explaining water properties, but the phase diagram has not been reported till now. In the MB model, water molecules are modeled as two-dimensional Lennard-Jones disks, with three orientation-dependent hydrogen-bonding arms, arranged as in the MB logo. The liquid part of the phase space is explored using grand canonical Monte Carlo simulations and two versions of Wertheim's TPT for associative fluids, which have been used before to predict the properties of the simple MB model. We find that the theory reproduces well the physical properties of hot water but is less successful at capturing the more structured hydrogen bonding that occurs in cold water. In addition to reporting the phase diagram and percolation curve of the model, it is shown that the improved TPT predicts the phase diagram rather well, while the standard one predicts a phase transition at lower temperatures. For the percolation line, both versions have problems predicting the correct position of the line at high temperatures.

  4. Influence of Water Saturation on Thermal Conductivity in Sandstones

    NASA Astrophysics Data System (ADS)

    Fehr, A.; Jorand, R.; Koch, A.; Clauser, C.

    2009-04-01

    Information on thermal conductivity of rocks and soils is essential in applied geothermal and hydrocarbon maturation research. In this study, we investigate the dependence of thermal conductivity on the degree of water saturation. Measurements were made on five sandstones from different outcrops in Germany. In a first step, we characterized the samples with respect to mineralogical composition, porosity, and microstructure by nuclear magnetic resonance (NMR) and mercury injection. We measured thermal conductivity with an optical scanner at different levels of water saturation. Finally we present a simple, easy-to-use model for the correlation of thermal conductivity and water saturation. Thermal conductivity decreases in the course of the drying of the rock. This behaviour is not linear and depends on the microstructure of the studied rock. We studied different mixing models for three phases: mineral skeleton, water and air. For argillaceous sandstones a modified arithmetic model works best which considers the irreducible water volume and different pore sizes. For pure quartz sandstones without clay minerals, we use the same model for low water saturations, but for high water saturations a modified geometric model. A clayey sandstone rich in feldspar shows a different behaviour which cannot be explained by simple models. A better understanding will require measurements on additional samples which will help to improve the derived correlations and substantiate our findings.
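
    The arithmetic and geometric mixing laws referred to here are standard volume-fraction averages over the constituent phases. A sketch for a three-phase rock (mineral skeleton, water, air); the volume fractions and conductivities below are illustrative, not values from this study:

    ```python
    def arithmetic_mean(fracs, lams):
        """Arithmetic (parallel) mixing law: lam_eff = sum(phi_i * lam_i)."""
        return sum(f * l for f, l in zip(fracs, lams))

    def geometric_mean(fracs, lams):
        """Geometric mixing law: lam_eff = prod(lam_i ** phi_i)."""
        out = 1.0
        for f, l in zip(fracs, lams):
            out *= l ** f
        return out

    # Illustrative skeleton/water/air volume fractions at partial saturation
    fracs = [0.80, 0.10, 0.10]          # must sum to 1
    lams = [6.0, 0.6, 0.026]            # W/(m K): quartz-rich matrix, water, air

    print(arithmetic_mean(fracs, lams)) # upper bound on lam_eff
    print(geometric_mean(fracs, lams))  # always <= arithmetic mean
    ```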

  5. Dependence of Thermal Conductivity on Water Saturation of Sandstones

    NASA Astrophysics Data System (ADS)

    Fehr, A.; Jorand, R.; Koch, A.; Clauser, C.

    2008-12-01

    Information on thermal conductivity of rocks and soils is essential in applied geothermal and hydrocarbon maturation research. In this study, we investigate the dependence of thermal conductivity on the degree of water saturation. Measurements were made on five sandstones from different outcrops in Germany. In a first step, we characterized the samples with respect to mineralogical composition, porosity, and microstructure by nuclear magnetic resonance (NMR) and mercury injection. We measured thermal conductivity with an optical scanner at different levels of water saturation. Finally we present a simple, easy-to-use model for the correlation of thermal conductivity and water saturation. Thermal conductivity decreases in the course of the drying of the rock. This behaviour is not linear and depends on the microstructure of the studied rock. We studied different mixing models for three phases: mineral skeleton, water and air. For argillaceous sandstones a modified arithmetic model works best which considers the irreducible water volume and different pore sizes. For pure quartz sandstones without clay minerals, we use the same model for low water saturations, but for high water saturations a modified geometric model. A clayey sandstone rich in feldspar shows a different behaviour which cannot be explained by simple models. A better understanding will require measurements on additional samples which will help to improve the derived correlations and substantiate our findings.

  6. Creep and stress relaxation modeling of polycrystalline ceramic fibers

    NASA Technical Reports Server (NTRS)

    Dicarlo, James A.; Morscher, Gregory N.

    1994-01-01

    A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data show good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.

  7. Creep and stress relaxation modeling of polycrystalline ceramic fibers

    NASA Technical Reports Server (NTRS)

    Dicarlo, James A.; Morscher, Gregory N.

    1991-01-01

    A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanistic-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data show good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.

  8. Disease induction by human microbial pathogens in plant-model systems: potential, problems and prospects.

    PubMed

    van Baarlen, Peter; van Belkum, Alex; Thomma, Bart P H J

    2007-02-01

    Relatively simple eukaryotic model organisms such as the genetic model weed plant Arabidopsis thaliana possess an innate immune system that shares important similarities with its mammalian counterpart. In fact, some human pathogens infect Arabidopsis and cause overt disease with human symptomology. In such cases, decisive elements of the plant's immune system are likely to be targeted by the same microbial factors that are necessary for causing disease in humans. These similarities can be exploited to identify elementary microbial pathogenicity factors and their corresponding targets in a green host. This circumvents important cost aspects that often frustrate studies in humans or animal models and, in addition, results in facile ethical clearance.

  9. Nonequilibrium thermodynamics of the shear-transformation-zone model

    NASA Astrophysics Data System (ADS)

    Luo, Alan M.; Öttinger, Hans Christian

    2014-02-01

    The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.

  10. Non-additive simple potentials for pre-programmed self-assembly

    NASA Astrophysics Data System (ADS)

    Mendoza, Carlos

    2015-03-01

    A major goal in nanoscience and nanotechnology is the self-assembly of any desired complex structure with a system of particles interacting through simple potentials. To achieve this objective, intense experimental and theoretical efforts are currently concentrated in the development of the so called ``patchy'' particles. Here we follow a completely different approach and introduce a very accessible model to produce a large variety of pre-programmed two-dimensional (2D) complex structures. Our model consists of a binary mixture of particles that interact through isotropic interactions that is able to self-assemble into targeted lattices by the appropriate choice of a small number of geometrical parameters and interaction strengths. We study the system using Monte Carlo computer simulations and, despite its simplicity, we are able to self assemble potentially useful structures such as chains, stripes, Kagomé, twisted Kagomé, honeycomb, square, Archimedean and quasicrystalline tilings. Our model is designed such that it may be implemented using discotic particles or, alternatively, using exclusively spherical particles interacting isotropically. Thus, it represents a promising strategy for bottom-up nano-fabrication. Partial Financial Support: DGAPA IN-110613.

  11. Experimental study of the oscillation of spheres in an acoustic levitator.

    PubMed

    Andrade, Marco A B; Pérez, Nicolás; Adamowski, Julio C

    2014-10-01

    The spontaneous oscillation of solid spheres in a single-axis acoustic levitator is experimentally investigated by using a high speed camera to record the position of the levitated sphere as a function of time. The oscillations in the axial and radial directions are systematically studied by changing the sphere density and the acoustic pressure amplitude. In order to interpret the experimental results, a simple model based on a spring-mass system is applied in the analysis of the sphere oscillatory behavior. This model requires the knowledge of the acoustic pressure distribution, which was obtained numerically by using a linear finite element method (FEM). Additionally, the linear acoustic pressure distribution obtained by FEM was compared with that measured with a laser Doppler vibrometer. The comparison between numerical and experimental pressure distributions shows good agreement for low values of pressure amplitude. When the pressure amplitude is increased, the acoustic pressure distribution becomes nonlinear, producing harmonics of the fundamental frequency. The experimental results of the sphere oscillations for low pressure amplitudes are consistent with the results predicted by the simple model based on a spring-mass system.
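
    The spring-mass picture used to interpret the oscillations reduces to a simple harmonic oscillator: near the trap center the acoustic restoring force is approximately linear in displacement, so the oscillation frequency follows from the local stiffness. A sketch with illustrative numbers (the stiffness and sphere mass below are assumptions, not the paper's measured values):

    ```python
    import math

    def oscillation_frequency(stiffness, mass):
        """Natural frequency of a spring-mass system: f = sqrt(k/m) / (2*pi).
        stiffness: effective acoustic trap stiffness in N/m; mass in kg."""
        return math.sqrt(stiffness / mass) / (2.0 * math.pi)

    # Illustrative: a ~1 mm radius polystyrene sphere (~4.4e-6 kg) in a trap
    # with an assumed effective stiffness of 0.01 N/m
    print(oscillation_frequency(0.01, 4.4e-6))  # a few Hz
    ```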

  12. Unusual dynamics of extinction in a simple ecological model.

    PubMed Central

    Sinha, S; Parthasarathy, S

    1996-01-01

    Studies on natural populations and harvesting biological resources have led to the commonly held view that (i) populations exhibiting chaotic oscillations run a high risk of extinction; and (ii) a decrease in emigration/exploitation may reduce the risk of extinction. Here we describe a simple ecological model with emigration/depletion that shows behavior in contrast to this. This model displays unusual dynamics of extinction and survival, where populations growing beyond a critical rate can persist within a band of high depletion rates, whereas extinction occurs at lower depletion rates. Though prior to extinction at lower depletion rates the population exhibits chaotic dynamics with large amplitudes of variation and very low minima, at higher depletion rates the population persists in chaos but with reduced variation and increased minima. For still higher values, within the band of persistence, the dynamics show period reversal leading to stability. These results illustrate that chaos does not necessarily lead to population extinction. In addition, the persistence of populations at high depletion rates has important implications for strategies for the management of biological resources. PMID:8643661
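    A toy caricature of such dynamics is a chaotic logistic map with a constant per-generation depletion d (this is an illustrative stand-in, not the authors' exact model, and it does not reproduce their persistence band at high depletion rates):

```python
def extinct(d, r=3.9, x0=0.3, generations=1000):
    """Return True if the depleted map x -> r*x*(1-x) - d ever hits x <= 0,
    i.e. the population is driven extinct within the given horizon."""
    x = x0
    for _ in range(generations):
        x = r * x * (1.0 - x) - d
        if x <= 0.0:
            return True
    return False

no_depletion = extinct(0.0)     # chaotic but strictly positive trajectory
heavy_depletion = extinct(0.9)  # depletion exceeds peak growth: extinction
```

    The counterintuitive persistence at high depletion rates reported in the abstract depends on the specific growth function of the authors' model; the sketch above only shows how extinction is naturally defined for a depleted map.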

  13. A Simple Analytical Model for Predicting the Detectable Ion Current in Ion Mobility Spectrometry Using Corona Discharge Ionization Sources

    NASA Astrophysics Data System (ADS)

    Kirk, Ansgar Thomas; Kobelt, Tim; Spehlbrink, Hauke; Zimmermann, Stefan

    2018-05-01

    Corona discharge ionization sources are often used in ion mobility spectrometers (IMS) when a non-radioactive ion source with high ion currents is required. Typically, the corona discharge is followed by a reaction region where analyte ions are formed from the reactant ions. In this work, we present a simple yet sufficiently accurate model for predicting the ion current available at the end of this reaction region when operating at reduced pressure, as in High Kinetic Energy Ion Mobility Spectrometers (HiKE-IMS) or most IMS-MS instruments. It yields excellent qualitative agreement with measurement results and is even able to calculate the ion current within an error of 15%. Two additional interesting findings of this model are that the ion current at the end of the reaction region is independent of the ion current generated by the corona discharge, and that the ion current in HiKE-IMS grows quadratically when the length of the reaction region is scaled down.

  14. A Simple Analytical Model for Predicting the Detectable Ion Current in Ion Mobility Spectrometry Using Corona Discharge Ionization Sources.

    PubMed

    Kirk, Ansgar Thomas; Kobelt, Tim; Spehlbrink, Hauke; Zimmermann, Stefan

    2018-05-08

    Corona discharge ionization sources are often used in ion mobility spectrometers (IMS) when a non-radioactive ion source with high ion currents is required. Typically, the corona discharge is followed by a reaction region where analyte ions are formed from the reactant ions. In this work, we present a simple yet sufficiently accurate model for predicting the ion current available at the end of this reaction region when operating at reduced pressure, as in High Kinetic Energy Ion Mobility Spectrometers (HiKE-IMS) or most IMS-MS instruments. It yields excellent qualitative agreement with measurement results and is even able to calculate the ion current within an error of 15%. Two additional interesting findings of this model are that the ion current at the end of the reaction region is independent of the ion current generated by the corona discharge, and that the ion current in HiKE-IMS grows quadratically when the length of the reaction region is scaled down.

  15. A mean spherical model for soft potentials: The hard core revealed as a perturbation

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Y.; Ashcroft, N. W.

    1978-01-01

    The mean spherical approximation for fluids is extended to treat the case of dense systems interacting via soft potentials. The extension takes the form of a generalized statement concerning the behavior of the direct correlation function c(r) and the radial distribution function g(r). From a detailed analysis that views the hard core portion of a potential as a perturbation on the whole, a specific model is proposed which possesses analytic solutions for both Coulomb and Yukawa potentials, in addition to certain other remarkable properties. A variational principle for the model leads to a relatively simple method for obtaining numerical solutions.

  16. A simplified model for glass formation

    NASA Technical Reports Server (NTRS)

    Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.

    1979-01-01

    A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.

  17. Predictions from a flavour GUT model combined with a SUSY breaking sector

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Hohl, Christian

    2017-10-01

    We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A4 family symmetry, plus additional discrete "shaping symmetries" and a ℤ4^R symmetry. We calculate the soft terms and identify the relevant high scale input parameters, and investigate the resulting predictions for the low scale observables, such as flavour violating processes, the sparticle spectrum and the dark matter relic density.

  18. Further Development of Verification Check-Cases for Six-Degree-of-Freedom Flight Vehicle Simulations

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Madden, Michael M.; Shelton, Robert; Jackson, A. A.; Castro, Manuel P.; Noble, Deleena M.; Zimmerman, Curtis J.; Shidner, Jeremy D.; White, Joseph P.; Dutta, Doumyo; et al.

    2015-01-01

    This follow-on paper describes the principal methods of implementing, and documents the results of exercising, a set of six-degree-of-freedom rigid-body equations of motion and planetary geodetic, gravitation and atmospheric models for simple vehicles in a variety of endo- and exo-atmospheric conditions, with several NASA engineering simulation tools and one popular open-source tool. This effort is intended to provide an additional means of verification of flight simulations. The models used in this comparison, as well as the resulting time-history trajectory data, are available electronically for persons and organizations wishing to compare their flight simulation implementations of the same models.

  19. Applications of Perron-Frobenius theory to population dynamics.

    PubMed

    Li, Chi-Kwong; Schneider, Hans

    2002-05-01

    By the use of Perron-Frobenius theory, simple proofs are given of the Fundamental Theorem of Demography and of a theorem of Cushing and Yicang on the net reproductive rate occurring in matrix models of population dynamics. The latter result, which is closely related to the Stein-Rosenberg theorem in numerical linear algebra, is further refined with some additional nonnegative matrix theory. When the fertility matrix is scaled by the net reproductive rate, the growth rate of the model is 1. More generally, we show how to achieve a given growth rate for the model by scaling the fertility matrix. Demographic interpretations of the results are given.
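    The scaling result can be checked numerically with a small Leslie-type example (the vital rates below are hypothetical): writing the projection matrix as A = T + F, the net reproductive rate R0 is the spectral radius of F(I - T)^{-1}, and dividing the fertility matrix F by R0 yields a model whose growth rate (spectral radius) is exactly 1.

```python
F = [[0.0, 2.0, 6.0],   # fertilities (first row only; hypothetical)
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
T = [[0.0, 0.0, 0.0],   # survival transitions on the subdiagonal (hypothetical)
     [0.5, 0.0, 0.0],
     [0.0, 0.8, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# (I - T)^-1 = I + T + T^2, since T is nilpotent (T^3 = 0) for this life cycle.
I3 = [[float(i == j) for j in range(3)] for i in range(3)]
T2 = matmul(T, T)
N = [[I3[i][j] + T[i][j] + T2[i][j] for j in range(3)] for i in range(3)]

FN = matmul(F, N)
R0 = FN[0][0]   # only the first row of F(I-T)^-1 is nonzero, so R0 is its (1,1) entry

# Scale the fertilities by R0 and form the new projection matrix.
A = [[T[i][j] + F[i][j] / R0 for j in range(3)] for i in range(3)]

# Power iteration: for a primitive nonnegative matrix, the normalized iterates
# converge to the Perron vector and the growth ratio to the spectral radius.
v = [1.0, 1.0, 1.0]
for _ in range(2000):
    w = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
    s = sum(w)
    v = [x / s for x in w]
growth_rate = sum(sum(A[i][j] * v[j] for j in range(3)) for i in range(3)) / sum(v)
```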

  20. Chemical control of rate and onset temperature of nadimide polymerization

    NASA Technical Reports Server (NTRS)

    Lauver, R. W.

    1985-01-01

    The chemistry of norbornenyl capped imide compounds (nadimides) is briefly reviewed with emphasis on the contribution of Diels-Alder reversion in controlling the rate and onset of the thermal polymerization reaction. Control of onset temperature of the cure exotherm by adjusting the concentration of maleimide is demonstrated using selected model compounds. The effects of nitrophenyl compounds as free radical retarders on nadimide reactivity are discussed. A simple copolymerization model is proposed for the overall nadimide cure reaction. An approximate numerical analysis is carried out to demonstrate the ability of the model to simulate the trends observed for both maleimide and nitrophenyl additions.

  1. A General Interface Method for Aeroelastic Analysis of Aircraft

    NASA Technical Reports Server (NTRS)

    Tzong, T.; Chen, H. H.; Chang, K. C.; Wu, T.; Cebeci, T.

    1996-01-01

    The aeroelastic analysis of an aircraft requires an accurate and efficient procedure to couple aerodynamics and structures. The procedure needs an interface method to bridge the gap between the aerodynamic and structural models in order to transform loads and displacements. Such an interface method is described in this report. This interface method transforms loads computed by any aerodynamic code to a structural finite element (FE) model and converts the displacements from the FE model to the aerodynamic model. The approach is based on FE technology in which virtual work is employed to transform the aerodynamic pressures into FE nodal forces. The displacements at the FE nodes are then converted back to aerodynamic grid points on the aircraft surface through the reciprocal theorem in structural engineering. The method accommodates both high and crude fidelity in either model and does not require intermediate modeling. In addition, the method performs the conversion of loads and displacements directly between each individual aerodynamic grid point and its corresponding structural finite element and, hence, is very efficient for large aircraft models. This report also describes the application of this aero-structure interface method to a simple wing and an MD-90 wing. The results show that the aeroelastic effect is very important. For the simple wing, both linear and nonlinear approaches are used. In the linear approach, the deformation of the structural model is considered small, and the loads from the deformed aerodynamic model are applied to the original geometry of the structure. In the nonlinear approach, the geometry of the structure and its stiffness matrix are updated in every iteration, and the increments of loads from the previous iteration are applied to the new structural geometry in order to compute the displacement increments. Additional studies to apply the aero-structure interaction procedure to more complicated geometry will be conducted in the second phase of the present contract.

  2. Direct power comparisons between simple LOD scores and NPL scores for linkage analysis in complex diseases.

    PubMed

    Abreu, P C; Greenberg, D A; Hodge, S E

    1999-09-01

    Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, under a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of heterozygote between that of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. The MMLS-C was uniformly more powerful than NPL in most cases we examined (except when linkage information was low), and it was close to the results for the true model under locus heterogeneity. We still found better power for MMLS-C compared with NPL in affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
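    The MMLS-C correction described above amounts to a one-line computation; a minimal sketch (the LOD values are made up for illustration):

```python
def mmls_c(max_lod_by_model):
    """MMLS-C: take the largest of the LOD scores maximized under each
    assumed simple mode of inheritance, then subtract 0.3 to correct for
    having performed multiple tests.

    max_lod_by_model: dict mapping model name -> maximized LOD score.
    """
    return max(max_lod_by_model.values()) - 0.3

# Hypothetical maximized LOD scores for one data set, analyzed under
# dominant and recessive models with 50% penetrance:
score = mmls_c({"dominant": 2.5, "recessive": 1.8})
```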

  3. Responses to atmospheric CO2 concentrations in crop simulation models: a review of current simple and semicomplex representations and options for model development.

    PubMed

    Vanuytrecht, Eline; Thorburn, Peter J

    2017-05-01

    Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to, and often poor documentation of, these developments. This study used a bottom-up approach (starting with the APSIM framework as case study) to evaluate modelled responses in a consortium of commonly used crop models and to illuminate whether variation in responses reflects true uncertainty in our understanding or arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example, for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that cause intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.

  4. From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience.

    PubMed

    Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded

    2017-07-01

    Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Fully Resolved Simulations of 3D Printing

    NASA Astrophysics Data System (ADS)

    Tryggvason, Gretar; Xia, Huanxiong; Lu, Jiacai

    2017-11-01

    Numerical simulations of Fused Deposition Modeling (FDM) (or Fused Filament Fabrication) where a filament of hot, viscous polymer is deposited to ``print'' a three-dimensional object, layer by layer, are presented. A finite volume/front tracking method is used to follow the injection, cooling, solidification and shrinking of the filament. The injection of the hot melt is modeled using a volume source, combined with a nozzle, modeled as an immersed boundary, that follows a prescribed trajectory. The viscosity of the melt depends on the temperature and the shear rate and the polymer becomes immobile as its viscosity increases. As the polymer solidifies, the stress is found by assuming a hyperelastic constitutive equation. The method is described and its accuracy and convergence properties are tested by grid refinement studies for a simple setup involving two short filaments, one on top of the other. The effect of the various injection parameters, such as nozzle velocity and injection velocity are briefly examined and the applicability of the approach to simulate the construction of simple multilayer objects is shown. The role of fully resolved simulations for additive manufacturing and their use for novel processes and as the ``ground truth'' for reduced order models is discussed.

  6. A simple model of circadian rhythms based on dimerization and proteolysis of PER and TIM

    PubMed Central

    Tyson, JJ; Hong, CI; Thron, CD; Novak, B

    1999-01-01

    Many organisms display rhythms of physiology and behavior that are entrained to the 24-h cycle of light and darkness prevailing on Earth. Under constant conditions of illumination and temperature, these internal biological rhythms persist with a period close to 1 day ("circadian"), but it is usually not exactly 24 h. Recent discoveries have uncovered stunning similarities among the molecular circuitries of circadian clocks in mice, fruit flies, and bread molds. A consensus picture is coming into focus around two proteins (called PER and TIM in fruit flies), which dimerize and then inhibit transcription of their own genes. Although this picture seems to confirm a venerable model of circadian rhythms based on time-delayed negative feedback, we suggest that just as crucial to the circadian oscillator is a positive feedback loop based on stabilization of PER upon dimerization. These ideas can be expressed in simple mathematical form (phase plane portraits), and the model accounts naturally for several hallmarks of circadian rhythms, including temperature compensation and the per(L) mutant phenotype. In addition, the model suggests how an endogenous circadian oscillator could have evolved from a more primitive, light-activated switch. PMID:20540926

  7. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and to capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. We propose a pseudolikelihood-based expectation-maximization test and show that the proposed test follows a simple chi-squared limiting distribution. In addition, a test with a simple asymptotic distribution has computational advantages over permutation-based tests for high-dimensional genetic or epigenetic data. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between the two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  8. Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering

    NASA Technical Reports Server (NTRS)

    Bolton, Matthew L.; Bass, Ellen J.

    2009-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.

  9. Role of phonons in the metal-insulator phase transition.

    NASA Technical Reports Server (NTRS)

    Langer, W. D.

    1972-01-01

    Review, for the transition series oxides, of the Mattis and Lander model, which is one of electrons interacting with lattice vibrations (electron and phonon interaction). The model displays superconducting, insulating, and metallic phases. Its basic properties evolve from a finite crystallographic distortion associated with a dominant phonon mode and the splitting of the Brillouin zone into two subzones, a property of simple cubic and body centered cubic lattices. The order of the metal-insulator phase transition is examined. The basic model has a second-order phase transition and the effects of additional mechanisms on the model are calculated. The way in which these mechanisms affect the magnetically ordered transition series oxides as described by the Hubbard model is discussed.

  10. The MAM rodent model of schizophrenia

    PubMed Central

    Lodge, Daniel J.

    2013-01-01

    Rodent models of human disease are essential to obtain a better understanding of disease pathology, the mechanism of action underlying conventional treatments, as well as for the generation of novel therapeutic approaches. There are a number of rodent models of schizophrenia based on either genetic manipulations, acute or sub-chronic drug administration, or developmental disturbances. The prenatal methylazoxymethanol acetate (MAM) rodent model is a developmental disruption model gaining increased attention because it displays a number of histological, neurophysiological and behavioral deficits analogous to those observed in schizophrenia patients. This unit describes the procedures required to safely induce the MAM phenotype in rats. In addition, we describe a simple behavioral procedure, amphetamine-induced hyper-locomotion, which can be utilized to verify the MAM phenotype. PMID:23559309

  11. A Generalized Information Theoretical Model for Quantum Secret Sharing

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Xu, Ting-Ting; Li, Yong-Ming

    2016-11-01

    An information theoretical model for quantum secret sharing was introduced by H. Imai et al. (Quantum Inf. Comput. 5(1), 69-80, 2005) and analyzed using quantum information theory. In this paper, we analyze this information theoretical model using the properties of the quantum access structure. Based on this analysis, we propose a generalized model definition for quantum secret sharing schemes. In our model, more quantum access structures can be realized by our generalized quantum secret sharing schemes than by the previous one. In addition, we also analyze two kinds of important quantum access structures to illustrate the existence and rationality of the generalized quantum secret sharing schemes, and we consider the security of the scheme through simple examples.

  12. Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES

    NASA Astrophysics Data System (ADS)

    Aniszewski, Wojciech

    2016-12-01

    In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ model, in both simple and complex flows.

  13. A mathematical model for lactate transport to red blood cells.

    PubMed

    Wahl, Patrick; Yue, Zengyuan; Zinner, Christoph; Bloch, Wilhelm; Mester, Joachim

    2011-03-01

    A simple mathematical model for the transport of lactate from plasma to red blood cells (RBCs) during and after exercise is proposed, based on our experimental studies of the lactate concentrations in RBCs and in plasma. In addition to the influx associated with the plasma-to-RBC lactate concentration gradient, it is argued that an efflux must exist. The efflux rate is assumed to be proportional to the lactate concentration in RBCs. This simple model is justified by the comparison between the model-predicted results and observations: for all 33 cases (11 subjects and 3 different warm-up conditions), the model-predicted time courses of lactate concentration in RBCs are generally in good agreement with observations, and the model-predicted ratios between lactate concentrations in RBCs and in plasma at the peak of lactate concentration in RBCs are very close to the observed values. Two constants, the influx rate coefficient C1 and the efflux rate coefficient C2, are involved in the present model. They are determined by the best fit to observations. Although the exact electro-chemical mechanism for the efflux remains to be determined in future research, the good agreement of the present model with observations suggests that the efflux must get stronger as the lactate concentration in RBCs increases. The physiological meanings of C1 and C2, as well as their potential applications, are discussed.
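    The two-coefficient model described above can be sketched as a single ordinary differential equation, dC_rbc/dt = C1 (C_plasma - C_rbc) - C2 C_rbc, with an influx proportional to the plasma-RBC gradient and an efflux proportional to the RBC concentration. In the sketch below, the coefficient values and the constant plasma level are hypothetical, chosen only for illustration:

```python
C1, C2 = 0.2, 0.1        # influx / efflux rate coefficients, 1/min (hypothetical)
c_plasma = 6.0           # plasma lactate, held constant, mmol/L (hypothetical)

def integrate(c_rbc=0.0, dt=0.01, t_end=200.0):
    """Forward-Euler integration of the RBC lactate concentration."""
    t = 0.0
    while t < t_end:
        dcdt = C1 * (c_plasma - c_rbc) - C2 * c_rbc   # influx minus efflux
        c_rbc += dcdt * dt
        t += dt
    return c_rbc

c_final = integrate()
# Steady state where influx balances efflux: C1*(Cp - Cr) = C2*Cr.
steady_state = C1 * c_plasma / (C1 + C2)
```

    One immediate consequence of the efflux term is visible in the steady state: the RBC concentration settles below the plasma concentration by the factor C1/(C1 + C2), consistent with the RBC-to-plasma ratios the model is fitted against.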

  14. Simple broadband implementation of a phase contrast wavefront sensor for adaptive optics

    NASA Technical Reports Server (NTRS)

    Bloemhof, E. E.; Wallace, J. K.

    2004-01-01

    The most critical element of an adaptive optics system is its wavefront sensor, which must measure the closed-loop difference between the corrected wavefront and an ideal template at high speed, in real time, over a dense sampling of the pupil. Most high-order systems have used Shack-Hartmann wavefront sensors, but a novel approach based on Zernike's phase contrast principle appears promising. In this paper we discuss a simple way to achromatize such a phase contrast wavefront sensor, using the pi/2 phase difference between reflected and transmitted rays in a thin, symmetric beam splitter. We further model the response at a range of wavelengths to show that the required transverse dimension of the focal-plane phase-shifting spot, nominally lambda/D, may not be very sensitive to wavelength, and so in practice additional optics to introduce wavelength-dependent transverse magnification achromatizing this spot diameter may not be required. A very simple broadband implementation of the phase contrast wavefront sensor results.

  15. Suppression of Soot Formation and Shapes of Laminar Jet Diffusion Flames

    NASA Technical Reports Server (NTRS)

    Xu, F.; Dai, Z.; Faeth, G. M.

    2001-01-01

    Laminar nonpremixed (diffusion) flames are of interest because they provide model flame systems that are far more tractable for analysis and experiments than practical turbulent flames. In addition, many properties of laminar diffusion flames are directly relevant to turbulent diffusion flames using laminar flamelet concepts. Finally, laminar diffusion flame shapes have been of interest since the classical study of Burke and Schumann because they involve a simple nonintrusive measurement that is convenient for evaluating flame shape predictions. Motivated by these observations, the shapes of round hydrocarbon-fueled laminar jet diffusion flames were considered, emphasizing conditions where effects of buoyancy are small because most practical flames are not buoyant. Earlier studies of shapes of hydrocarbon-fueled nonbuoyant laminar jet diffusion flames considered combustion in still air and have shown that flames at the laminar smoke point are roughly twice as long as corresponding soot-free (blue) flames and have developed simple ways to estimate their shapes. Corresponding studies of hydrocarbon-fueled weakly-buoyant laminar jet diffusion flames in coflowing air have also been reported. These studies were limited to soot-containing flames at laminar smoke point conditions and also developed simple ways to estimate their shapes but the behavior of corresponding soot-free flames has not been addressed. This is unfortunate because ways of selecting flame flow properties to reduce soot concentrations are of great interest; in addition, soot-free flames are fundamentally important because they are much more computationally tractable than corresponding soot-containing flames. Thus, the objectives of the present investigation were to observe the shapes of weakly-buoyant laminar jet diffusion flames at both soot-free and smoke point conditions and to use the results to evaluate simplified flame shape models. The present discussion is brief.

  16. Simple inflationary quintessential model. II. Power law potentials

    NASA Astrophysics Data System (ADS)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-09-01

The present work is a sequel of our previous work [Phys. Rev. D 93, 084018 (2016)], which depicted a simple version of an inflationary quintessential model whose inflationary stage was described by a Higgs-type potential and whose quintessential phase arose from an exponential potential. Additionally, the model predicted a nonsingular universe in the past which was geodesically past incomplete. Further, it was also found that the model agrees with the Planck 2013 data when running is allowed. However, this model provides a theoretical value of the running that is far smaller than the central value of the best fit in the ns, r, αs ≡ d ns/d ln k parameter space, where ns, r, αs respectively denote the spectral index, tensor-to-scalar ratio and running of the spectral index associated with any inflationary model; consequently, to analyze the viability of the model one has to focus on the two-dimensional marginalized confidence level in the allowed domain of the (ns, r) plane without taking the running into account. Unfortunately, such analysis shows that this model does not pass this test. In this sequel, however, we propose a family of models governed by a single parameter α ∈ [0, 1], which yields another "inflationary quintessential model" in which the inflation and quintessence regimes are described by a power law potential and a cosmological constant, respectively. The model is also nonsingular, although geodesically past incomplete as in the cited model. Moreover, the present one is simpler than the previous model and is in excellent agreement with the observational data. In fact, we note that, unlike the previous model, a large number of the models in this family with α ∈ [0, 1/2) match both the Planck 2013 and Planck 2015 data without allowing the running. Thus, compared with its predecessor, the present family of models stands as a better cosmological model in light of the successively improving observational data.

  17. Growth morphologies of wax in the presence of kinetic inhibitors

    NASA Astrophysics Data System (ADS)

    Tetervak, Alexander A.

Driven by the need to prevent crystallization of normal alkanes from diesel fuels in cold climates, the petroleum industry has developed additives to slow the growth of these crystals and alter their morphologies. Although the utility of these kinetic inhibitors has been well demonstrated in the field, few studies have directly monitored their effect on microscopic morphology, and the mechanisms by which they act remain poorly understood. Here we present a study of the effects of such additives on the crystallization of long-chain n-alkanes from solution. The additives change the growth morphology from plate-like crystals to a microcrystalline mesh. When we impose a front velocity by moving the sample through a temperature gradient, the mesh growth may form a macroscopic banded pattern and also exhibit a burst-crystallization behavior. In this study, we characterize these crystallization phenomena and also two growth models: a continuum model that demonstrates the essential behavior of the banded crystallization, and a simple qualitative cellular-automata model that captures the basics of the burst-crystallization process. Keywords: solidification; mesh crystallization; kinetic inhibitor; burst growth.

  18. C, N and P stoichiometric mismatch between resources and consumers influence the dynamics of a marine microbial food web model and its response to atmospheric N and P inputs

    NASA Astrophysics Data System (ADS)

    Pondaven, P.; Pivière, P.; Ridame, C.; Guien, C.

    2014-02-01

Results from the DUNE experiments reported in this issue have shown that nutrient input from dust deposition in large mesocosms deployed in the western Mediterranean induced a response of the microbial food web, with an increase in primary production rates (PP), bacterial respiration rates (BR), as well as autotrophic and heterotrophic biomasses. Additionally, it was found that nutrient inputs strengthened the net heterotrophy of the system, with NPP : BR ratios < 1. In this study we used a simple microbial food web model, inspired from previous modelling studies, to explore how C, N and P stoichiometric mismatch between producers and consumers along the food chain can influence the dynamics and the trophic status of the ecosystem. Attention was paid to the mechanisms involved in the balance between net autotrophy vs. net heterotrophy. Although the model was kept simple, predicted changes in biomass and PP were qualitatively consistent with observations from DUNE experiments. Additionally, the model shed light on how ecological stoichiometric mismatch between producers and consumers can control food web dynamics and drive the system toward net heterotrophy. In the model, net heterotrophy was notably driven by the parameterisation of the production and excretion of extra DOC from phytoplankton under nutrient-limited conditions. This mechanism yielded high C : P and C : N ratios of the DOM pool, and subsequent postabsorptive respiration of C by bacteria. The model also predicted that nutrient inputs from dust strengthened the net heterotrophy of the system, a pattern also observed during two of the three DUNE experiments (P and Q). However, the model was not able to account for the low NPP : BR ratios (down to 0.1) recorded during the DUNE experiments. Possible mechanisms involved in this discrepancy were discussed.

  19. Parallel constraint satisfaction in memory-based decisions.

    PubMed

    Glöckner, Andreas; Hodges, Sara D

    2011-01-01

    Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
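The three strategy classes compared in this abstract (weighted additive, equal weight, take the best) have standard textbook forms that can be sketched in a few lines. A hedged illustration; the cue validities and cue patterns below are invented for the example, not the study's materials:

```python
# Three classic decision strategies for choosing between two options
# described by binary cues (illustrative sketch, not the authors' code).

def weighted_additive(cues_a, cues_b, weights):
    """Sum cue values weighted by validity; pick the higher score."""
    score_a = sum(w * c for w, c in zip(weights, cues_a))
    score_b = sum(w * c for w, c in zip(weights, cues_b))
    return "A" if score_a > score_b else "B"

def equal_weight(cues_a, cues_b):
    """Ignore validities; count positive cues."""
    return "A" if sum(cues_a) > sum(cues_b) else "B"

def take_the_best(cues_a, cues_b, weights):
    """Inspect cues in order of validity; decide on the first that discriminates."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    for i in order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "tie"

weights = [0.9, 0.7, 0.6]        # hypothetical cue validities
a, b = [0, 1, 1], [1, 0, 0]      # cue patterns for options A and B
print(weighted_additive(a, b, weights))  # compensatory: 1.3 vs 0.9 -> "A"
print(take_the_best(a, b, weights))      # first discriminating cue favours B -> "B"
```

The example shows why the strategies are distinguishable from choice data alone: the compensatory (weighted additive) and noncompensatory (take-the-best) rules pick different options for the same cue pattern.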

  20. Evaluating the contribution of genetics and familial shared environment to common disease using the UK Biobank.

    PubMed

    Muñoz, María; Pong-Wong, Ricardo; Canela-Xandri, Oriol; Rawlik, Konrad; Haley, Chris S; Tenesa, Albert

    2016-09-01

    Genome-wide association studies have detected many loci underlying susceptibility to disease, but most of the genetic factors that contribute to disease susceptibility remain unknown. Here we provide evidence that part of the 'missing heritability' can be explained by an overestimation of heritability. We estimated the heritability of 12 complex human diseases using family history of disease in 1,555,906 individuals of white ancestry from the UK Biobank. Estimates using simple family-based statistical models were inflated on average by ∼47% when compared with those from structural equation modeling (SEM), which specifically accounted for shared familial environmental factors. In addition, heritabilities estimated using SNP data explained an average of 44.2% of the simple family-based estimates across diseases and an average of 57.3% of the SEM-estimated heritabilities, accounting for almost all of the SEM heritability for hypertension. Our results show that both genetics and familial environment make substantial contributions to familial clustering of disease.

  1. PyBoolNet: a python package for the generation, analysis and visualization of boolean networks.

    PubMed

    Klarner, Hannes; Streck, Adam; Siebert, Heike

    2017-03-01

The goal of this project is to provide a simple interface to working with Boolean networks. Emphasis is put on easy access to a large number of common tasks including the generation and manipulation of networks, attractor and basin computation, model checking and trap space computation, execution of established graph algorithms as well as graph drawing and layouts. PyBoolNet is a Python package for working with Boolean networks that supports simple access to model checking via NuSMV, standard graph algorithms via NetworkX and visualization via dot. In addition, state-of-the-art attractor computation exploiting Potassco ASP is implemented. The package is function-based and uses only native Python and NetworkX data types. https://github.com/hklarner/PyBoolNet. hannes.klarner@fu-berlin.de. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
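For readers unfamiliar with the underlying objects, a minimal plain-Python sketch of a synchronous Boolean network update and attractor search follows. The three-node network is invented for illustration and the code does not use or reproduce PyBoolNet's API:

```python
# A toy synchronous Boolean network: each node has an update function
# over the current state (the network itself is made up for this sketch).

network = {
    "x": lambda s: s["y"],                 # x copies y
    "y": lambda s: s["x"] and not s["z"],  # y needs x present and z absent
    "z": lambda s: not s["x"],             # z is the negation of x
}

def step(state):
    """Apply all update functions simultaneously (synchronous update)."""
    return {node: int(f(state)) for node, f in network.items()}

def find_attractor(state, max_steps=100):
    """Iterate until a previously seen state recurs; return the cycle."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]
        seen.append(state)
        state = step(state)
    return None

print(find_attractor({"x": 1, "y": 1, "z": 0}))  # a fixed point of this network
```

Attractor computation at scale (and asynchronous update, basins, trap spaces) is exactly what PyBoolNet delegates to NuSMV and Potassco ASP rather than brute-force iteration.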

  2. Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields

    DOE PAGES

    Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...

    2016-06-23

Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
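The two conventional repulsion forms that the paper takes as its baseline can be written down directly. A sketch with illustrative parameters (A, B, epsilon, sigma are not fitted values from the paper):

```python
# The two conventional short-range repulsion forms compared in the paper.
import math

def born_mayer(r, A=1000.0, B=3.0):
    """Exponential repulsion: E(r) = A * exp(-B * r)."""
    return A * math.exp(-B * r)

def lennard_jones_repulsion(r, epsilon=1.0, sigma=3.0):
    """The r^-12 repulsive part of the Lennard-Jones potential."""
    return 4.0 * epsilon * (sigma / r) ** 12

for r in (2.5, 3.0, 3.5):
    print(f"r={r}: BM={born_mayer(r):.3f}  LJ12={lennard_jones_repulsion(r):.3f}")
```

The exponential form decays far more gently than r^-12, which is one reason neither single form fits accurately over a broad range of separations; the paper's Slater-ISA form instead derives the decay from overlapping atomic densities.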

  3. Integrating individual movement behaviour into dispersal functions.

    PubMed

    Heinz, Simone K; Wissel, Christian; Conradt, Larissa; Frank, Karin

    2007-04-21

Dispersal functions are an important tool for integrating dispersal into complex models of population and metapopulation dynamics. Most approaches in the literature are very simple, with the dispersal functions containing only one or two parameters which summarise all the effects of movement behaviour, such as different movement patterns or different perceptual abilities. The summarising nature of these parameters makes assessing the effect of one particular behavioural aspect difficult. We present a way of integrating movement behavioural parameters into a particular dispersal function in a simple way. Using a spatial individual-based simulation model for simulating different movement behaviours, we derive fitting functions for the functional relationship between the parameters of the dispersal function and several details of movement behaviour. This is done for three different movement patterns (loops, Archimedean spirals, random walk). Additionally, we provide measures which characterise the shape of the dispersal function and are interpretable in terms of landscape connectivity. This allows an ecological interpretation of the relationships found.

  4. Further shock tunnel studies of scramjet phenomena

    NASA Technical Reports Server (NTRS)

    Morgan, R. G.; Paull, A.; Morris, N. A.; Stalker, R. J.

    1986-01-01

Scramjet phenomena were studied using the shock tunnel T3 at the Australian National University. Simple two dimensional models were used with a combination of wall and central injectors. Silane as an additive to hydrogen fuel was studied over a range of temperatures and pressures to evaluate its effect as an ignition aid. The film cooling effect of surface injected hydrogen was measured over a wide range of equivalence ratios. Heat transfer measurements without injection were repeated to confirm previous indications of heating rates lower than simple flat plate predictions for laminar boundary layers in equilibrium flow. The previous results were reproduced and the discrepancies are discussed in terms of the model geometry and departures of the flow from equilibrium. In the thrust producing mode, attempts were made to increase specific impulse with wall injection. Some preliminary tests were also performed on shock induced ignition, to investigate the possibility in flight of injecting fuel upstream of the combustion chamber, where it could mix but not burn.

  5. Nonequilibrium Langevin dynamics: A demonstration study of shear flow fluctuations in a simple fluid

    NASA Astrophysics Data System (ADS)

    Belousov, Roman; Cohen, E. G. D.; Rondoni, Lamberto

    2017-08-01

    The present paper is based on a recent success of the second-order stochastic fluctuation theory in describing time autocorrelations of equilibrium and nonequilibrium physical systems. In particular, it was shown to yield values of the related deterministic parameters of the Langevin equation for a Couette flow in a microscopic molecular dynamics model of a simple fluid. In this paper we find all the remaining constants of the stochastic dynamics, which then is simulated numerically and compared directly with the original physical system. By using these data, we study in detail the accuracy and precision of a second-order Langevin model for nonequilibrium physical systems theoretically and computationally. We find an intriguing relation between an applied external force and cumulants of the resulting flow fluctuations. This is characterized by a linear dependence of an athermal cumulant ratio, an apposite quantity introduced here. In addition, we discuss how the order of a given Langevin dynamics can be raised systematically by introducing colored noise.

  6. Chemical control of the viscoelastic properties of vinylogous urethane vitrimers

    PubMed Central

    Denissen, Wim; Droesbeke, Martijn; Nicolaÿ, Renaud; Leibler, Ludwik; Winne, Johan M.; Du Prez, Filip E.

    2017-01-01

Vinylogous urethane based vitrimers are polymer networks that have the intrinsic property to undergo network rearrangements, stress relaxation and viscoelastic flow, mediated by rapid addition/elimination reactions of free chain end amines. Here we show that the covalent exchange kinetics can be significantly influenced by combination with various simple additives. As anticipated, the exchange reactions on the network level can be further accelerated using either Brønsted or Lewis acid additives. Remarkably, however, a strong inhibitory effect is observed when a base is added to the polymer matrix. These effects have been mechanistically rationalized, guided by low-molecular-weight kinetic model experiments. Thus, vitrimer elastomer materials can be rationally designed to display a wide range of viscoelastic properties. PMID:28317893

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindsey, Nicholas C.

The growth of additive manufacturing as a disruptive technology poses nuclear proliferation concerns worthy of serious consideration. Additive manufacturing began in the early 1980s with technological advances in polymer manipulation, computer capabilities, and computer-aided design (CAD) modeling. It was originally limited to rapid prototyping; however, it eventually developed into a complete means of production that has slowly penetrated the consumer market. Today, additive manufacturing machines can produce complex and unique items in a vast array of materials including plastics, metals, and ceramics. These capabilities have democratized the manufacturing industry, allowing almost anyone to produce items as simple as cup holders or as complex as jet fuel nozzles. Additive manufacturing, or three-dimensional (3D) printing as it is commonly called, relies on CAD files created or shared by individuals with additive manufacturing machines to produce a 3D object from a digital model. This sharing of files means that a 3D object can be scanned or rendered as a CAD model in one country, and then downloaded and printed in another country, allowing items to be shared globally without physically crossing borders. The sharing of CAD files online has been a challenging task for the export controls regime to manage over the years, and additive manufacturing could make these transfers more common. In this sense, additive manufacturing is a disruptive technology not only within the manufacturing industry but also within the nuclear nonproliferation world. This paper provides an overview of the proliferation concerns raised by additive manufacturing.

  8. Simultaneous measurement for thermal conductivity, diffusivity, and specific heat of methane hydrate bearing sediments recovered from Nankai-Trough wells

    NASA Astrophysics Data System (ADS)

    Muraoka, M.; Ohtake, M.; Susuki, N.; Yamamoto, Y.; Suzuki, K.; Tsuji, T.

    2014-12-01

This study presents the results of the measurements of the thermal constants of natural methane-hydrate-bearing sediment samples recovered from the Tokai-oki test wells (Nankai-Trough, Japan) in 2004. The thermal conductivity, thermal diffusivity, and specific heat of the samples were simultaneously determined using the hot-disk transient method. The thermal conductivity of natural hydrate-bearing sediments decreases slightly with increasing porosity. In addition, the thermal diffusivity of hydrate-bearing sediment decreases as porosity increases. We also used simple models to calculate the thermal conductivity and thermal diffusivity. The results of the distribution model (geometric-mean model) are relatively consistent with the measurement results. In addition, the measurement results are consistent with the thermal diffusivity, which is estimated by dividing the thermal conductivity obtained from the distribution model by the specific heat obtained from the arithmetic mean. In addition, we discuss the relation between the thermal conductivity and mineral composition of core samples at the conference. Acknowledgments. This work was financially supported by MH21 Research Consortium for Methane Hydrate Resources in Japan on the National Methane Hydrate Exploitation Program planned by the Ministry of Economy, Trade and Industry.
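The geometric-mean (distribution) model mentioned above has a one-line form: the effective conductivity is the product of the component conductivities weighted by their volume fractions, k_eff = Π k_i^φᵢ. A sketch with hypothetical component values (textbook-order numbers, not the paper's measurements):

```python
# Geometric-mean ("distribution") model for effective thermal conductivity.
import math

def geometric_mean_conductivity(fractions, conductivities):
    """k_eff = prod(k_i ** phi_i), with volume fractions summing to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return math.prod(k ** phi for phi, k in zip(fractions, conductivities))

# Hypothetical sediment: 40% porosity filled with hydrate (~0.6 W/(m K)),
# 60% quartz grains (~7.7 W/(m K)); both values are illustrative.
k_eff = geometric_mean_conductivity([0.4, 0.6], [0.6, 7.7])
print(round(k_eff, 2))  # W/(m K)
```

Dividing such a k_eff by a mixture specific heat (arithmetic mean) and density gives the model diffusivity the abstract compares against the hot-disk measurements.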

  9. Application of a dual unscented Kalman filter for simultaneous state and parameter estimation in problems of surface-atmosphere exchange

    Treesearch

    J.H. Gove; D.Y. Hollinger; D.Y. Hollinger

    2006-01-01

    A dual unscented Kalman filter (UKF) was used to assimilate net CO2 exchange (NEE) data measured over a spruce-hemlock forest at the Howland AmeriFlux site in Maine, USA, into a simple physiological model for the purpose of filling gaps in an eddy flux time series. In addition to filling gaps in the measurement record, the UKF approach provides continuous estimates of...

  10. Coherent direct sequence optical code multiple access encoding-decoding efficiency versus wavelength detuning.

    PubMed

    Pastor, D; Amaya, W; García-Olcina, R; Sales, S

    2007-07-01

We present a simple theoretical model of, and the experimental verification for, vanishing of the autocorrelation peak due to wavelength detuning in the coding-decoding process of coherent direct sequence optical code multiple access systems based on a superstructured fiber Bragg grating. Moreover, this detuning-induced vanishing has been exploited to provide an additional degree of multiplexing and/or optical code tuning.

  11. Mixed Beam Murine Harderian Gland Tumorigenesis: Predicted Dose-Effect Relationships if neither Synergism nor Antagonism Occurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranart, Nopphon; Blakely, Eleanor A.; Cheng, Alden

Complex mixed radiation fields exist in interplanetary space, and not much is known about their latent effects on space travelers. In silico synergy analysis default predictions are useful when planning relevant mixed-ion-beam experiments and interpreting their results. These predictions are based on individual dose-effect relationships (IDER) for each component of the mixed-ion beam, assuming no synergy or antagonism. For example, a default hypothesis of simple effect additivity has often been used throughout the study of biology. However, for more than a century pharmacologists interested in mixtures of therapeutic drugs have analyzed conceptual, mathematical and practical questions similar to those that arise when analyzing mixed radiation fields, and have shown that simple effect additivity often gives unreasonable predictions when the IDER are curvilinear. Various alternatives to simple effect additivity proposed in radiobiology, pharmacometrics, toxicology and other fields are also known to have important limitations. In this work, we analyze upcoming murine Harderian gland (HG) tumor prevalence mixed-beam experiments, using customized open-source software and published IDER from past single-ion experiments. The upcoming experiments will use acute irradiation and the mixed beam will include components of high atomic number and energy (HZE). We introduce a new alternative to simple effect additivity, "incremental effect additivity", which is more suitable for the HG analysis and perhaps for other end points. We use incremental effect additivity to calculate default predictions for mixture dose-effect relationships, including 95% confidence intervals. We have drawn three main conclusions from this work. 1. It is important to supplement mixed-beam experiments with single-ion experiments, with matching end point(s), shielding and dose timing. 2. 
For HG tumorigenesis due to a mixed beam, simple effect additivity and incremental effect additivity sometimes give default predictions that are numerically close. However, if nontargeted effects are important and the mixed beam includes a number of different HZE components, simple effect additivity becomes unusable and another method is needed such as incremental effect additivity. 3. Eventually, synergy analysis default predictions of the effects of mixed radiation fields will be replaced by more mechanistic, biophysically-based predictions. However, optimizing synergy analyses is an important first step. If mixed-beam experiments indicate little synergy or antagonism, plans by NASA for further experiments and possible missions beyond low earth orbit will be substantially simplified.
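The pharmacologists' classic objection to simple effect additivity can be made concrete with a "sham mixture" check: splitting one agent's dose into two half-doses of itself should leave the predicted effect unchanged, yet summing effects does not whenever the IDER is curvilinear. A sketch with an invented quadratic IDER (not one of the paper's fitted HG relationships):

```python
# Why simple effect additivity misbehaves for curvilinear dose-effect
# relationships: the "sham mixture" test from pharmacology.

def ider(dose):
    """A hypothetical curvilinear individual dose-effect relationship."""
    return 0.1 * dose + 0.05 * dose ** 2

def simple_effect_additivity(doses):
    """Predict the mixture effect as the sum of individual effects."""
    return sum(ider(d) for d in doses)

# Split one agent's dose of 1.0 into two half-doses of "itself":
true_effect = ider(1.0)                         # 0.1 + 0.05 = 0.15
sham_prediction = simple_effect_additivity([0.5, 0.5])
print(true_effect, sham_prediction)             # 0.15 vs 0.125: not equal
```

For a linear IDER the two numbers would agree, which is why the failure only surfaces when dose-effect curves bend, as they do for HZE ions.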

  12. Rocket exhaust ground cloud/atmospheric interactions

    NASA Technical Reports Server (NTRS)

    Hwang, B.; Gould, R. K.

    1978-01-01

    An attempt to identify and minimize the uncertainties and potential inaccuracies of the NASA Multilayer Diffusion Model (MDM) is performed using data from selected Titan 3 launches. The study is based on detailed parametric calculations using the MDM code and a comparative study of several other diffusion models, the NASA measurements, and the MDM. The results are discussed and evaluated. In addition, the physical/chemical processes taking place during the rocket cloud rise are analyzed. The exhaust properties and the deluge water effects are evaluated. A time-dependent model for two aerosol coagulations is developed and documented. Calculations using this model for dry deposition during cloud rise are made. A simple model for calculating physical properties such as temperature and air mass entrainment during cloud rise is also developed and incorporated with the aerosol model.

  13. Analysis and control of the METC fluid-bed gasifier. Quarterly report, October 1994--January 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farell, A.E.; Reddy, S.

    1995-03-01

This document summarizes work performed for the period 10/1/94 to 2/1/95. The initial phase of the work focuses on developing a simple transfer function model of the Fluidized Bed Gasifier (FBG). This transfer function model will be developed based purely on the gasifier responses to step changes in gasifier inputs (including reactor air, convey air, cone nitrogen, FBG pressure, and coal feedrate). This transfer function model will represent a linear, dynamic model that is valid near the operating point at which the data was taken. In addition, a similar transfer function model will be developed using MGAS in order to assess MGAS for use as a model of the FBG for control systems analysis.

  14. Solvent Reaction Field Potential inside an Uncharged Globular Protein: A Bridge between Implicit and Explicit Solvent Models?

    PubMed Central

    Baker, Nathan A.; McCammon, J. Andrew

    2008-01-01

The solvent reaction field potential of an uncharged protein immersed in Simple Point Charge/Extended (SPC/E) explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13 to 24 kbTec−1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit-solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99. PMID:17949217

  15. The Phyre2 web portal for protein modelling, prediction and analysis

    PubMed Central

    Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE

    2017-01-01

Summary: Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user's protein sequence. Users are guided through results by a simple interface at a level of detail determined by them. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional available tools is described to find a protein structure in a genome, to submit a large number of sequences at once and to automatically run weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 minutes and 2 hours after submission. PMID:25950237

  16. A powerful and flexible approach to the analysis of RNA sequence count data.

    PubMed

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A

    2011-10-01

A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu. Supplementary data are available at Bioinformatics online.
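The beta-binomial distribution in approach (i) is standard, and its probability mass function is easy to write from the usual formula with stdlib log-gamma functions. A generic sketch (this is not BBSeq's code, which adds the GLM machinery and overdispersion modeling on top):

```python
# Beta-binomial pmf: P(K=k) = C(n,k) * B(k+a, n-k+b) / B(a,b),
# a standard model for overdispersed counts out of a total n.
from math import lgamma, exp

def log_beta(a, b):
    """log of the Beta function via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, alpha, beta):
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta))

# With alpha = beta = 1 the distribution is uniform on 0..n:
print(betabinom_pmf(3, 10, 1.0, 1.0))  # 1/11 ≈ 0.0909
```

Compared with a plain binomial, the extra (alpha, beta) parameters inflate the variance, which is why the model suits biological replicates whose counts vary more than sequencing noise alone would predict.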

  17. Solvent reaction field potential inside an uncharged globular protein: A bridge between implicit and explicit solvent models?

    NASA Astrophysics Data System (ADS)

    Cerutti, David S.; Baker, Nathan A.; McCammon, J. Andrew

    2007-10-01

The solvent reaction field potential of an uncharged protein immersed in simple point charge/extended explicit solvent was computed over a series of molecular dynamics trajectories, in total 1560 ns of simulation time. A finite, positive potential of 13-24 kbTec-1 (where T = 300 K), dependent on the geometry of the solvent-accessible surface, was observed inside the biomolecule. The primary contribution to this potential arose from a layer of positive charge density 1.0 Å from the solute surface, on average 0.008 ec/Å3, which we found to be the product of a highly ordered first solvation shell. Significant second solvation shell effects, including additional layers of charge density and a slight decrease in the short-range solvent-solvent interaction strength, were also observed. The impact of these findings on implicit solvent models was assessed by running similar explicit solvent simulations on the fully charged protein system. When the energy due to the solvent reaction field in the uncharged system is accounted for, correlation between per-atom electrostatic energies for the explicit solvent model and a simple implicit (Poisson) calculation is 0.97, and correlation between per-atom energies for the explicit solvent model and a previously published, optimized Poisson model is 0.99.

  18. Monte Carlo simulations of lattice models for single polymer systems

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-10-01

Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
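As a toy illustration of the simple cubic lattice model, the sketch below grows self-avoiding walks by naive rejection sampling. This is far less efficient than the pruned-enriched Rosenbluth method the paper actually uses (rejection rates grow exponentially with N), but it shows the lattice moves and the self-avoidance constraint:

```python
# Rejection sampling of self-avoiding walks on the simple cubic lattice.
import random

MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def try_saw(n, rng):
    """Grow an n-step walk; return its sites, or None if it self-intersects."""
    site = (0, 0, 0)
    visited = {site}
    for _ in range(n):
        dx, dy, dz = rng.choice(MOVES)
        site = (site[0] + dx, site[1] + dy, site[2] + dz)
        if site in visited:
            return None  # reject: the walk revisited a lattice site
        visited.add(site)
    return visited

rng = random.Random(0)
walks = [w for w in (try_saw(10, rng) for _ in range(2000)) if w is not None]
print(f"acceptance rate at N=10: {len(walks) / 2000:.2f}")
```

Even at N = 10 most attempts are rejected, which is exactly why chain-growth methods with pruning and enrichment are needed to reach N ∼ O(10^4).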

  19. Size Effect of Ground Patterns on FM-Band Cross-Talks between Two Parallel Signal Traces of Printed Circuit Boards for Vehicles

    NASA Astrophysics Data System (ADS)

    Iida, Michihira; Maeno, Tsuyoshi; Wang, Jianqing; Fujiwara, Osamu

    Electromagnetic disturbances in vehicle-mounted radios are mainly caused by conducted noise currents flowing through wiring harnesses from vehicle-mounted printed circuit boards (PCBs) with common slitting ground patterns. To suppress these kinds of noise currents, we previously measured them for simple two-layer PCBs with two parallel signal traces and slitting or non-slitting ground patterns, and then investigated by FDTD simulation the reduction of the FM-band cross-talk noise levels between two parallel signal traces on six simple PCB models having different slitting or divided ground patterns parallel to the traces. As a result, we found that the contributory factor in the FM-band cross-talk reduction is the reduction of the mutual inductance between the two parallel traces, and also that the noise currents from the PCBs can be suppressed even if the size of the return ground becomes small. In this study, to examine this finding further, we simulated the frequency characteristics of the cross-talk reduction for an additional six simple PCB models whose ground patterns parallel to the traces are divided with different dimensions. The simulations revealed that the cross-talk reduction does not always decrease with increasing width between the divided ground patterns.

  20. Age estimation standards for a Western Australian population using the coronal pulp cavity index.

    PubMed

    Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel

    2013-09-10

    Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers and human trafficking, and to ascertain age of criminal responsibility. Thus, robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired-sample t-tests to assess bilateral asymmetry, followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimation based on a simple linear regression model was with the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably; the most accurate model used the bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population, and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
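
    A hedged sketch of the index-plus-regression pipeline the abstract describes, run on synthetic data (the heights, coefficients and noise level below are invented for illustration, not the study's standards):

```python
import numpy as np

# The tooth coronal index (TCI) expresses the coronal pulp chamber height as
# a percentage of crown height; age is then estimated by simple linear
# regression on TCI. All numbers below are synthetic.
rng = np.random.default_rng(1)

def tooth_coronal_index(pulp_chamber_height_mm, crown_height_mm):
    return 100.0 * pulp_chamber_height_mm / crown_height_mm

n = 200
crown_h = rng.uniform(6.0, 8.0, n)                 # mm, synthetic crown heights
age = rng.uniform(20.0, 70.0, n)                   # years
# Pulp chamber shrinks with age as secondary dentine is deposited
# (direction of effect only; slope and noise are illustrative).
pulp_h = crown_h * (0.45 - 0.004 * age) + rng.normal(0.0, 0.1, n)
tci = tooth_coronal_index(pulp_h, crown_h)

# Fit age = b0 + b1 * TCI by ordinary least squares.
X = np.column_stack([np.ones_like(tci), tci])
b0, b1 = np.linalg.lstsq(X, age, rcond=None)[0]
residuals = age - (b0 + b1 * tci)
see = np.sqrt(np.sum(residuals**2) / (n - 2))      # standard error of estimate
print(b1 < 0, see > 0)
```

The study's reported SEE values come from real radiographic measurements; the synthetic fit here only illustrates the negative direction of the TCI-age relationship and how an SEE is computed.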

  1. Modeling syngas-fired gas turbine engines with two dilutants

    NASA Astrophysics Data System (ADS)

    Hawk, Mitchell E.

    2011-12-01

    Prior gas turbine engine modeling work at the University of Wyoming studied cycle performance and turbine design with air and CO2-diluted GTE cycles fired with methane and syngas fuels. Two of the cycles examined were unconventional and innovative. The work presented herein reexamines prior results and expands the modeling by including the impacts of turbine cooling and CO2 sequestration on GTE cycle performance. The simple, conventional regeneration and two alternative regeneration cycle configurations were examined. In contrast to air dilution, CO2-diluted cycle efficiencies increased by approximately 1.0 percentage point for the three regeneration configurations examined, while the efficiency of the CO2-diluted simple cycle decreased by approximately 5.0 percentage points. For CO2-diluted cycles with a closed-exhaust recycling path, an optimum CO2-recycle pressure was determined for each configuration that was significantly lower than atmospheric pressure. Un-cooled alternative regeneration configurations with CO2 recycling achieved efficiencies near 50%, which was approximately 3.0 percentage points higher than the conventional regeneration cycle and simple cycle configurations that utilized CO2 recycling. Accounting for cooling of the first two turbine stages resulted in a 2-3 percentage point reduction in un-cooled efficiency, with air dilution corresponding to the upper extreme. Additionally, when the work required to sequester CO2 was accounted for, cooled cycle efficiency decreased by 4-6 percentage points, and was more negatively impacted when syngas fuels were used. Finally, turbine design models showed that turbine blades are shorter with CO2 dilution, resulting in fewer design restrictions.

  2. Difference in the Dissolution Behaviors of Tablets Containing Polyvinylpolypyrrolidone (PVPP) Depending on Pharmaceutical Formulation After Storage Under High Temperature and Humid Conditions.

    PubMed

    Takekuma, Yoh; Ishizaka, Haruka; Sumi, Masato; Sato, Yuki; Sugawara, Mitsuru

    Storage under high temperature and humid conditions has been reported to decrease the dissolution rate of some kinds of tablets containing polyvinylpolypyrrolidone (PVPP) as a disintegrant. The aim of this study was to elucidate the properties of pharmaceutical formulations with PVPP that cause a decrease in the dissolution rate after storage under high temperature and humid conditions, using model tablets with a simple composition. Model tablets consisting of rosuvastatin calcium or one of five simple compounds (salicylic acid, 2-aminodiphenylmethane, 2-aminobiphenyl, 2-(p-tolyl)benzoic acid or 4,4'-biphenol) as the principal agent, with cellulose, lactose hydrate, PVPP and magnesium stearate as additives, were made by direct compression. The model tablets were wrapped in paraffin paper and stored for 2 weeks at 40°C/75% relative humidity (RH). Dissolution tests were carried out by the paddle method of the Japanese Pharmacopoeia 16th edition. The model tablets with a simple composition reproduced the decreased dissolution rate after storage at 40°C/75% RH, and showed significantly decreased water absorption activities after storage. When lactose hydrate was replaced with cellulose, no decrease in the dissolution rate was observed. Carboxyl and amino groups in the structure of the principal agent were not directly involved in the decreased dissolution. 2-Aminodiphenylmethane tablets showed a remarkably decreased dissolution rate, and 2-aminobiphenyl and 2-(p-tolyl)benzoic acid tablets showed slightly decreased dissolution rates, whereas 4,4'-biphenol tablets did not show a decreased dissolution rate. We demonstrated that the additives and the structure of the principal agent were involved in the decrease in dissolution rate for tablets with PVPP. The results suggested that one of the reasons for a decreased dissolution rate was the inclusion of lactose hydrate in the tablets.
The results also indicated that compounds as principal agents with low affinity for PVPP may be easily affected by airborne water under high temperature and humid conditions.

  3. Spectral Simulations and Abundance Determinations in the Interstellar Medium of Active Galaxies

    NASA Astrophysics Data System (ADS)

    Ferguson, Jason W.

    The narrow emission line spectra of gas illuminated by the nuclear region of active galaxies cannot be described by models involving simple photoionization calculations. In this project we develop the numerical tools necessary to accurately simulate observed spectra from such regions. We begin by developing a compact model hydrogen atom, and show that a moderate number of atomic levels can reproduce the emission of much larger, definitive calculations. We discuss the excitation mechanism of the gas, that is, whether the emission we see results from local shock excitation or from direct photoionization by the central source. We show that photoionization plus continuum fluorescence can mimic excitation by shocks, and we suggest an observational test to distinguish between excitation by shocks and by the central source. We extend to the narrow line region of active galaxies the 'locally optimally-emitting cloud' (LOC) model, wherein the observed spectra are predominantly determined by a simple, yet powerful selection effect. Namely, nature provides the emitting line region with clouds of a vast ensemble of properties, and we observe emission lines from those clouds that are most efficient at emitting them. We have calculated large grids of photoionization models of narrow line clouds for a wide range of gas densities and distances from the ionizing source. We show that when coupled to a simple Keplerian velocity field, the LOC model naturally reproduces the line width - critical density correlation observed in many narrow line objects. In addition, we calculate classical diagnostic line ratios and use simple LOC integrations over gas density to simulate the radial emission of the narrow lines and compare with observations. The effects of including dust in the simulations are discussed, and we show that the more neutral gas is likely to be dusty, while the more highly ionized gas is dust-free. This implies a variety of cloud origins.

  4. Analysis of Nitrogen Cycling in a Forest Stream During Autumn Using a 15N Tracer Addition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tank, J.L.

    2000-01-01

    We added {sup 15}NH{sub 4}Cl over 6 weeks to Upper Ball Creek, a second-order deciduous forest stream in the Appalachian Mountains, to follow the uptake, spiraling, and fate of nitrogen in a stream food web during autumn. A priori predictions of N flow and retention were made using a simple food web mass balance model. Values of d{sup 15}N were determined for stream water ammonium, nitrate, dissolved organic nitrogen, and various compartments of the food web over time and distance and then compared to model predictions.
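
    A minimal sketch of the standard uptake-length calculation used in steady tracer additions of this kind; the numbers are hypothetical, not Upper Ball Creek data:

```python
import numpy as np

# In a steady tracer addition, labeled NH4+ flux declines roughly
# exponentially downstream, flux(x) = flux0 * exp(-x / S_w), so the uptake
# length S_w is the negative inverse slope of ln(flux) versus distance.
distance_m = np.array([10.0, 50.0, 100.0, 200.0, 400.0])
true_sw = 150.0                                   # m, assumed uptake length
flux = 5.0 * np.exp(-distance_m / true_sw)        # synthetic tracer fluxes

slope = np.polyfit(distance_m, np.log(flux), 1)[0]
s_w = -1.0 / slope
print(round(s_w, 1))  # recovers the assumed 150.0 m
```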

  5. Ground and Range Operations for a Heavy-Lift Vehicle: Preliminary Thoughts

    NASA Technical Reports Server (NTRS)

    Rabelo, Luis; Zhu, Yanshen; Compton, Jeppie; Bardina, Jorge

    2011-01-01

    This paper discusses the ground and range operations for a Shuttle derived Heavy-Lift Vehicle being launched from the Kennedy Space Center on the Eastern range. Comparisons will be made between the Shuttle and a heavy lift configuration (SLS-ETF MPCV April 2011) by contrasting their subsystems. The analysis will also describe a simulation configuration with the potential to be utilized for heavy lift vehicle processing/range simulation modeling and the development of decision-making systems utilized by the range. In addition, a simple simulation model is used to provide the required critical thinking foundations for this preliminary analysis.

  6. Mathematical, numerical and experimental analysis of the swirling flow at a Kaplan runner outlet

    NASA Astrophysics Data System (ADS)

    Muntean, S.; Ciocan, T.; Susan-Resiga, R. F.; Cervantes, M.; Nilsson, H.

    2012-11-01

    The paper presents a novel mathematical model for a priori computation of the swirling flow at the Kaplan runner outlet. The model is an extension of the initial version developed by Susan-Resiga et al [1], to include the contributions of the non-negligible radial velocity and of the variable rothalpy. Simple analytical expressions are derived for these additional data from three-dimensional numerical simulations of the Kaplan turbine. The final results, i.e. the velocity component profiles, are validated against experimental data at two operating points with the same Kaplan runner blade opening but variable discharge.

  7. Interpretation of OAO-2 ultraviolet light curves of beta Doradus

    NASA Technical Reports Server (NTRS)

    Hutchinson, J. L.; Lillie, C. F.; Hill, S. J.

    1975-01-01

    Middle-ultraviolet light curves of beta Doradus, obtained by OAO-2, are presented along with other evidence indicating that the small additional bumps observed on the rising branches of these curves have their origin in shock-wave phenomena in the upper atmosphere of this classical Cepheid. A simple piston-driven spherical hydrodynamic model of the atmosphere is developed to explain the bumps, and the calculations are compared with observations. The model is found to be consistent with the shapes of the light curves as well as with measurements of the H-alpha radial velocities.

  8. Flow Past a Descending Balloon

    NASA Technical Reports Server (NTRS)

    Baginski, Frank

    2001-01-01

    In this report, we present our findings related to aerodynamic loading of partially inflated balloon shapes. This report will consider aerodynamic loading of partially inflated inextensible natural shape balloons and some relevant problems in potential flow. For the axisymmetric modeling, we modified our Balloon Design Shape Program (BDSP) to handle axisymmetric inextensible ascent shapes with aerodynamic loading. For a few simple examples of two dimensional potential flows, we used the Matlab PDE Toolbox. In addition, we propose a model for aerodynamic loading of strained energy minimizing balloon shapes with lobes. Numerical solutions are presented for partially inflated strained balloon shapes with lobes and no aerodynamic loading.

  9. Experiment-specific cosmic microwave background calculations made easier - Approximation formula for smoothed delta T/T windows

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.

    1993-01-01

    Simple, easy-to-implement elementary-function approximations are introduced for the spectral window functions needed in calculations of model predictions of the cosmic microwave background (CMB) anisotropy. These approximations allow the investigator to obtain model delta T/T predictions in terms of single integrals over the power spectrum of cosmological perturbations, avoiding the necessity of performing the additional integrations. The high accuracy of these approximations is demonstrated here for CDM theory-based calculations of the expected delta T/T signal in several experiments searching for the CMB anisotropy.

  10. Analysis and numerical simulation of a laboratory analog of radiatively induced cloud-top entrainment.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerstein, Alan R.; Sayler, Bentley J.; Wunsch, Scott Edward

    2010-11-01

    Numerical simulations using the One-Dimensional-Turbulence model are compared to water-tank measurements [B. J. Sayler and R. E. Breidenthal, J. Geophys. Res. 103 (D8), 8827 (1998)] emulating convection and entrainment in stratiform clouds driven by cloud-top cooling. Measured dependences of the entrainment rate on Richardson number, molecular transport coefficients, and other experimental parameters are reproduced. Additional parameter variations suggest more complicated dependences of the entrainment rate than previously anticipated. A simple algebraic model indicates the ways in which laboratory and cloud entrainment behaviors might be similar and different.

  11. Detection of unknown targets from aerial camera and extraction of simple object fingerprints for the purpose of target reacquisition

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri

    2012-01-01

    An aerial multiple camera tracking paradigm needs to not only spot unknown targets and track them, but also needs to know how to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion allowing it to find targets in motion, even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos over an interval of 1000 frames each from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints. This is how long a fingerprint is good for when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us if a fingerprint method has better accuracy over longer periods. 
In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
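
    A hedged sketch of a single-Gaussian color fingerprint of the kind described, with synthetic pixel data and a Bhattacharyya comparison (our own illustration, not the authors' implementation):

```python
import numpy as np

# A "single Gaussian model" fingerprint summarizes a target's pixel colors by
# a mean vector and covariance matrix; reacquisition can then compare
# fingerprints via the Bhattacharyya distance between the two Gaussians.
def gaussian_fingerprint(pixels_rgb):
    mu = pixels_rgb.mean(axis=0)
    cov = np.cov(pixels_rgb, rowvar=False) + 1e-6 * np.eye(3)  # regularized
    return mu, cov

def bhattacharyya(fp_a, fp_b):
    mu_a, cov_a = fp_a
    mu_b, cov_b = fp_b
    cov = 0.5 * (cov_a + cov_b)
    diff = mu_a - mu_b
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov_a) * np.linalg.det(cov_b)))
    return term1 + term2

rng = np.random.default_rng(2)
red_car = rng.normal([200, 40, 40], 10, (500, 3))        # synthetic pixels
red_car_later = rng.normal([205, 45, 38], 10, (500, 3))  # same target, later
blue_car = rng.normal([40, 60, 200], 10, (500, 3))       # different target

d_same = bhattacharyya(gaussian_fingerprint(red_car),
                       gaussian_fingerprint(red_car_later))
d_diff = bhattacharyya(gaussian_fingerprint(red_car),
                       gaussian_fingerprint(blue_car))
print(d_same < d_diff)  # the matching target scores the smaller distance
```

Because the fingerprint keeps only a mean and covariance, it is compact enough for camera handoff, at the cost of the limited shelf life the abstract measures.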

  12. Suggestion of a Numerical Model for the Blood Glucose Adjustment with Ingesting a Food

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naokatsu; Takai, Hiroshi

    In this study, we present a numerical model of the time dependence of the blood glucose value after ingesting a meal. Two numerical models are proposed: one for the digestion mechanism and one for the adjustment mechanism of blood glucose in the body. The models are expressed with simple equations using a transfer function and a block diagram. Additionally, the time dependence of blood glucose was measured after subjects ingested sucrose or starch. The calculated results of the computer models fit the measured time dependence of blood glucose very well. We therefore consider the digestion model and the adjustment model useful for estimating blood glucose values after ingesting meals.
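
    A minimal discrete-time sketch in the spirit of the two-block structure described (a digestion block feeding an adjustment block); all rate constants and gains below are assumed for illustration, not the authors' fitted values:

```python
# First-order "digestion" block: carbohydrate leaves the gut at rate k_digest
# and appears as glucose. First-order "adjustment" block: blood glucose is
# pulled back toward its basal level at rate k_adjust.
dt = 1.0            # min, simulation step
k_digest = 0.05     # 1/min, gut emptying rate (assumed)
k_adjust = 0.02     # 1/min, insulin-driven return to baseline (assumed)
gain = 0.5          # mg/dL rise per unit of absorbed carbohydrate (assumed)

gut = 50.0          # ingested carbohydrate remaining in the gut (arbitrary units)
basal = 90.0        # mg/dL basal blood glucose
glucose = basal
peak = glucose
for _ in range(300):  # simulate 5 hours
    absorbed = k_digest * gut * dt
    gut -= absorbed
    glucose += gain * absorbed - k_adjust * (glucose - basal) * dt
    peak = max(peak, glucose)

# Glucose rises after the meal, then relaxes back toward the basal level.
print(peak > basal, abs(glucose - basal) < abs(peak - basal))
```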

  13. A twin study of cardiac reactivity and its relationship to parental blood pressure.

    PubMed

    Carroll, D; Hewitt, J K; Last, K A; Turner, J R; Sims, J

    1985-01-01

    The cardiac reactivity of 40 monozygotic and 40 dizygotic pairs of young male twins was monitored during psychological challenge, as afforded by a video game. The observed pattern of variation could not be accounted for solely by environmental factors. In fact, a simple genetic model that implicated additive genetic effects, along with those stemming from individual environments, best fitted the data. In addition, cardiac reactions were substantially greater for subjects whose parents both had relatively elevated blood pressure. Overall, these data suggest individual differences in cardiac reactivity have a heritable component, and that high reactivity may be a precursor of elevated blood pressure.
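
    The "simple genetic model" family referred to can be illustrated with Falconer's back-of-the-envelope variance decomposition; the twin correlations below are hypothetical values chosen to show an AE-type pattern (additive genes plus individual environment), not the study's data:

```python
# Falconer's formulas partition trait variance from MZ and DZ twin-pair
# correlations: A (additive genetic), C (shared environment), E (unique
# environment). Correlations here are assumed for illustration only.
r_mz = 0.50   # hypothetical MZ twin-pair correlation in cardiac reactivity
r_dz = 0.25   # hypothetical DZ twin-pair correlation

h2 = 2.0 * (r_mz - r_dz)   # additive genetic share (A)
c2 = 2.0 * r_dz - r_mz     # shared-environment share (C)
e2 = 1.0 - r_mz            # unique-environment share (E)
print(h2, c2, e2)  # -> 0.5 0.0 0.5: no shared-environment component needed
```

With r_MZ exactly twice r_DZ, the shared-environment term vanishes, matching the abstract's best-fitting model of additive genetic effects plus individual environments.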

  14. Methods for developing time-series climate surfaces to drive topographically distributed energy- and water-balance models

    USGS Publications Warehouse

    Susong, D.; Marks, D.; Garen, D.

    1999-01-01

    Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for topographic controls to generate high-temporal-resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.

  15. Climate Change Impacts on the Cryosphere of Mountain Regions: Validation of a Novel Model Using the Alaska Range

    NASA Astrophysics Data System (ADS)

    Mosier, T. M.; Hill, D. F.; Sharp, K. V.

    2015-12-01

    Mountain regions are natural water towers, storing water seasonally as snowpack and for much longer as glaciers. Understanding the response of these systems to climate change is necessary in order to make informed decisions about prevention or mitigation measures. Yet, mountain regions are often data sparse, leading many researchers to implement simple or enhanced temperature index (ETI) models to simulate cryosphere processes. These model structures do not account for the thermal inertia of snowpack and glaciers and do not robustly capture differences in system response to climate regimes that differ from those the model was calibrated for. For instance, a temperature index calibration parameter will differ substantially in cold-dry conditions versus warm-wet ones. To overcome these issues, we have developed a cryosphere hydrology model, called the Significantly Enhanced Temperature Index (SETI), which uses an energy balance structure but parameterizes energy balance components in terms of minimum, maximum and mean temperature, precipitation, and geometric inputs using established relationships. Additionally, the SETI model includes a glacier sliding model and can therefore be used to estimate long-term glacier response to climate change. Sensitivity of the SETI model to changing climate is compared with an ETI and a simple temperature index model for several partially-glaciated watersheds within Alaska, including Wolverine glacier where multi-decadal glacier stake measurements are available, to highlight the additional fidelity attributed to the increased complexity of the SETI structure. The SETI model is then applied to the entire Alaska Range region for an ensemble of global climate models (GCMs), using representative concentration pathways 4.5 and 8.5. 
Comparing model runs based on ensembles of GCM projections to historic conditions, total annual snowfall within the Alaska region is not expected to change appreciably, but the spatial distribution of snow shifts towards higher elevations and for a large portion of the region the duration of snow cover decreases. The changes in temperature and snow distribution also lead to spatially heterogeneous responses by glaciers within the region. The SETI model is designed to be easy to apply for any mountain region where cryospheric processes dominate.

  16. A Simple and Low-Cost Procedure for Growing Geobacter sulfurreducens Cell Cultures and Biofilms in Bioelectrochemical Systems

    PubMed Central

    O’Brien, J. Patrick; Malvankar, Nikhil S.

    2017-01-01

    Anaerobic microorganisms play a central role in several environmental processes and regulate global biogeochemical cycling of nutrients and minerals. Many anaerobic microorganisms are important for the production of bioenergy and biofuels. However, the major hurdle in studying anaerobic microorganisms in the laboratory is the requirement for sophisticated and expensive gassing stations and glove boxes to create and maintain the anaerobic environment. This appendix presents a simple design for a gassing station that can be used readily by an inexperienced investigator for cultivation of anaerobic microorganisms. In addition, this appendix also details the low-cost assembly of bioelectrochemical systems and outlines a simplified procedure for cultivating and analyzing bacterial cell cultures and biofilms that produce electric current, using Geobacter sulfurreducens as a model organism. PMID:27858972

  17. Lunar exploration for resource utilization

    NASA Technical Reports Server (NTRS)

    Duke, Michael B.

    1992-01-01

    The strategy for developing resources on the Moon depends on the stage of space industrialization. A case is made for first developing the resources needed to provide simple materials required in large quantities for space operations. Propellants, shielding, and structural materials fall into this category. As the enterprise grows, it will be feasible to develop additional sources - those more difficult to obtain or required in smaller quantities. Thus, the first materials processing on the Moon will probably take the abundant lunar regolith, extract from it major mineral or glass species, and do relatively simple chemical processing. We need to conduct a lunar remote sensing mission to determine the global distribution of features, geophysical properties, and composition of the Moon, information which will serve as the basis for detailed models of and engineering decisions about a lunar mine.

  18. Application of digital control techniques for satellite medium power DC-DC converters

    NASA Astrophysics Data System (ADS)

    Skup, Konrad R.; Grudzinski, Pawel; Nowosielski, Witold; Orleanski, Piotr; Wawrzaszek, Roman

    2010-09-01

    The objective of this paper is to present work done at the Space Research Centre on a digital control loop for satellite medium-power DC-DC converters. The control of the described power converter is based on high-speed digital signal processing. The paper presents the development of an FPGA digital controller for voltage-mode stabilization, implemented in VHDL. The controllers described are a classical digital PID controller and a bang-bang controller. The converter used for testing is a simple model of a 5-20 W, 200 kHz buck power converter. A high-resolution digital PWM approach is presented, together with a simple and effective solution for filtering the analog-to-digital converter output.
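
    A hedged software sketch of a discrete PID voltage-mode loop of the kind described, driving a crude averaged buck-converter model (the gains, filter time constant and voltages are assumed for illustration, not the paper's FPGA design):

```python
# Discrete PID voltage-mode control of an averaged buck-converter model:
# each switching period the controller updates the PWM duty cycle from the
# output-voltage error, and the output filter is approximated as first-order.
dt = 5e-6                      # s, one 200 kHz switching period
v_in = 12.0                    # V, input voltage (assumed)
v_ref = 5.0                    # V, output-voltage reference (assumed)
kp, ki, kd = 0.05, 200.0, 0.0  # controller gains (assumed)
tau = 200e-6                   # s, output-filter time constant (assumed)

v_out, integ, prev_err = 0.0, 0.0, 0.0
for _ in range(5000):          # 25 ms of simulated time
    err = v_ref - v_out
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    duty = min(max(kp * err + ki * integ + kd * deriv, 0.0), 1.0)  # clamp 0..1
    # Averaged buck model: output relaxes toward duty * v_in.
    v_out += (duty * v_in - v_out) * dt / tau
print(round(v_out, 2))  # settles at the 5.0 V reference
```

The integral term supplies the steady-state duty cycle (v_ref/v_in ≈ 0.42 here), which is why a pure proportional loop would leave a residual error.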

  19. Why a fly? Using Drosophila to understand the genetics of circadian rhythms and sleep.

    PubMed

    Hendricks, Joan C; Sehgal, Amita

    2004-03-15

    Among simple model systems, Drosophila has specific advantages for neurobehavioral investigations. It has been particularly useful for understanding the molecular basis of circadian rhythms. In addition, the genetics of fruit-fly sleep are beginning to develop. This review summarizes the current state of understanding of circadian rhythms and sleep in the fruit fly for the readers of Sleep. We note where information is available in mammals, for comparison with findings in fruit flies, to provide an evolutionary perspective, and we focus on recent findings and new questions. We propose that sleep-specific neural activity may alter cellular function and thus accomplish the restorative function or functions of sleep. In conclusion, we sound some cautionary notes about some of the complexities of working with this "simple" organism.

  20. Expected for acquisition movement exercise is more effective for functional recovery than simple exercise in a rat model of hemiplegia.

    PubMed

    Ikeda, Satoshi; Ohwatashi, Akihiko; Harada, Katsuhiro; Kamikawa, Yurie; Yoshida, Akira

    2013-01-01

    The use of novel rehabilitative approaches for effecting functional recovery following stroke is controversial. The effects of different but effective rehabilitative interventions in the hemiplegic patient are not clear. We studied the effects of different rehabilitative approaches on functional recovery in the rat photochemical cerebral infarction model. Twenty-four male Wistar rats aged 8 weeks were used. The cranial bone was exposed under deep anesthesia. Rose bengal (20 mg/kg) was injected intravenously, and the sensorimotor area of the cerebral cortex was irradiated transcranially for 20 min with a light beam of 533-nm wavelength. Animals were divided into 3 groups. In the simple-exercise group, treadmill exercise was performed for 20 min every day. In the expected-for-acquisition movement-training group, beam-walking exercise was done for 20 min daily. The control group was left to recover without additional intervention. Hindlimb function was evaluated with the beam-walking test. Following cerebral infarction, dysfunction of the contralateral extremities was observed. Functional recovery was observed earlier in the expected-for-acquisition training group than in the other groups. Although rats in the treadmill group recovered more quickly than controls, the beam-walking group had the shortest overall recovery time. Exercise facilitated functional recovery in the rat hemiplegic model, and expected-for-acquisition exercise was more effective than simple exercise. These findings are considered to have important implications for the future development of clinical rehabilitation programs.

  1. The viscosity of magmatic silicate liquids: A model for calculation

    NASA Technical Reports Server (NTRS)

    Bottinga, Y.; Weill, D. F.

    1971-01-01

    A simple model has been designed to allow reasonably accurate calculations of viscosity as a function of temperature and composition. The problem of predicting viscosities of anhydrous silicate liquids has been investigated since such viscosity numbers are applicable to many extrusive melts and to nearly dry magmatic liquids in general. The fluidizing action of water dissolved in silicate melts is well recognized and it is now possible to predict the effect of water content on viscosity in a semiquantitative way. Water was not incorporated directly into the model. Viscosities of anhydrous compositions were calculated, and, where necessary, the effect of added water was estimated. The model can be easily modified to incorporate the effect of water whenever sufficient additional data are accumulated.
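    The additive scheme at the heart of this style of viscosity calculation can be sketched as a mole-fraction-weighted sum of fitted coefficients, ln(eta) = sum of x_i * D_i over oxide components. A minimal sketch; the oxide set and D_i values below are hypothetical placeholders, not the published temperature- and composition-range-dependent coefficients:

```python
import math

def melt_viscosity(mole_fractions, d_coeffs):
    """ln(viscosity) as a mole-fraction-weighted sum, Bottinga-Weill style."""
    ln_eta = sum(x * d_coeffs[oxide] for oxide, x in mole_fractions.items())
    return math.exp(ln_eta)

# Hypothetical anhydrous composition and D_i coefficients, for illustration only.
x = {"SiO2": 0.60, "Al2O3": 0.10, "CaO": 0.15, "MgO": 0.15}
D = {"SiO2": 6.0, "Al2O3": 2.0, "CaO": -3.5, "MgO": -4.0}
eta = melt_viscosity(x, D)
```

    Because the model is additive in ln(eta), refitting the D_i for each temperature and composition range is what carries all the physics.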

  2. Spectrum simulation in DTSA-II.

    PubMed

    Ritchie, Nicholas W M

    2009-10-01

    Spectrum simulation is a useful practical and pedagogical tool. Particularly with complex samples or trace constituents, a simulation can help to understand the limits of the technique and the instrument parameters for the optimal measurement. DTSA-II, software for electron probe microanalysis, provides both easy to use and flexible tools for simulating common and less common sample geometries and materials. Analytical models based on φ(ρz) curves provide quick simulations of simple samples. Monte Carlo models based on electron and X-ray transport provide more sophisticated models of arbitrarily complex samples. DTSA-II provides a broad range of simulation tools in a framework with many different interchangeable physical models. In addition, DTSA-II provides tools for visualizing, comparing, manipulating, and quantifying simulated and measured spectra.

  3. Simple Estimators for the Simple Latent Class Mastery Testing Model. Twente Educational Memorandum No. 19.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    Latent class models for mastery testing differ from continuum models in that they do not postulate a latent mastery continuum but conceive mastery and non-mastery as two latent classes, each characterized by different probabilities of success. Several researchers use a simple latent class model that is basically a simultaneous application of the…

  4. Modeling of time dependent localized flow shear stress and its impact on cellular growth within additive manufactured titanium implants.

    PubMed

    Zhang, Ziyu; Yuan, Lang; Lee, Peter D; Jones, Eric; Jones, Julian R

    2014-11-01

    Bone augmentation implants are porous to allow cellular growth, bone formation and fixation. However, the design of the pores is currently based on simple empirical rules, such as minimum pore and interconnects sizes. We present a three-dimensional (3D) transient model of cellular growth based on the Navier-Stokes equations that simulates the body fluid flow and stimulation of bone precursor cellular growth, attachment, and proliferation as a function of local flow shear stress. The model's effectiveness is demonstrated for two additive manufactured (AM) titanium scaffold architectures. The results demonstrate that there is a complex interaction of flow rate and strut architecture, resulting in partially randomized structures having a preferential impact on stimulating cell migration in 3D porous structures for higher flow rates. This novel result demonstrates the potential new insights that can be gained via the modeling tool developed, and how the model can be used to perform what-if simulations to design AM structures to specific functional requirements. © 2014 Wiley Periodicals, Inc.

  5. Modeling Off-Nominal Behavior in SysML

    NASA Technical Reports Server (NTRS)

    Day, John; Donahue, Kenny; Ingham, Mitch; Kadesch, Alex; Kennedy, Kit; Post, Ethan

    2012-01-01

    Fault Management is an essential part of the system engineering process that is limited in its effectiveness by the ad hoc nature of the applied approaches and methods. Providing a rigorous way to develop and describe off-nominal behavior is a necessary step in the improvement of fault management, and as a result, will enable safe, reliable and available systems even as system complexity increases. The basic concepts described in this paper provide a foundation to build a larger set of necessary concepts and relationships for precise modeling of off-nominal behavior, and a basis for incorporating these ideas into the overall systems engineering process. The simple FMEA example provided applies the modeling patterns we have developed and illustrates how the information in the model can be used to reason about the system and derive typical fault management artifacts. A key insight from the FMEA work was the utility of defining failure modes as the "inverse of intent", and deriving this from the behavior models. Additional work is planned to extend these ideas and capabilities to other types of relevant information and additional products.

  6. Additive effects in high-voltage layered-oxide cells: A statistics of mixtures approach

    DOE PAGES

    Sahore, Ritu; Peebles, Cameron; Abraham, Daniel P.; ...

    2017-07-20

    Li1.03(Ni0.5Mn0.3Co0.2)0.97O2 (NMC)-based coin cells containing the electrolyte additives vinylene carbonate (VC) and tris(trimethylsilyl)phosphite (TMSPi) in the range of 0-2 wt% were cycled between 3.0 and 4.4 V. The changes in capacity at rates of C/10 and C/1 and resistance at 60% state of charge were found to follow linear-with-time kinetic rate laws. Further, the C/10 capacity and resistance data were amenable to modeling by a statistics of mixtures approach. Applying physical meaning to the terms in the empirical models indicated that the interactions between the electrolyte and additives were not simple. For example, there were strong, synergistic interactions between VC and TMSPi affecting C/10 capacity loss, as expected, but there were other, more subtle interactions between the electrolyte components. In conclusion, the interactions between these components controlled the C/10 capacity decline and resistance increase.

  7. Mutational landscape of EGFR-, MYC-, and Kras-driven genetically engineered mouse models of lung adenocarcinoma

    PubMed Central

    McFadden, David G.; Politi, Katerina; Bhutkar, Arjun; Chen, Frances K.; Song, Xiaoling; Pirun, Mono; Santiago, Philip M.; Kim-Kiselak, Caroline; Platt, James T.; Lee, Emily; Hodges, Emily; Rosebrock, Adam P.; Bronson, Roderick T.; Socci, Nicholas D.; Hannon, Gregory J.; Jacks, Tyler; Varmus, Harold

    2016-01-01

    Genetically engineered mouse models (GEMMs) of cancer are increasingly being used to assess putative driver mutations identified by large-scale sequencing of human cancer genomes. To accurately interpret experiments that introduce additional mutations, an understanding of the somatic genetic profile and evolution of GEMM tumors is necessary. Here, we performed whole-exome sequencing of tumors from three GEMMs of lung adenocarcinoma driven by mutant epidermal growth factor receptor (EGFR), mutant Kirsten rat sarcoma viral oncogene homolog (Kras), or overexpression of MYC proto-oncogene. Tumors from EGFR- and Kras-driven models exhibited, respectively, 0.02 and 0.07 nonsynonymous mutations per megabase, a dramatically lower average mutational frequency than observed in human lung adenocarcinomas. Tumors from models driven by strong cancer drivers (mutant EGFR and Kras) harbored few mutations in known cancer genes, whereas tumors driven by MYC, a weaker initiating oncogene in the murine lung, acquired recurrent clonal oncogenic Kras mutations. In addition, although EGFR- and Kras-driven models both exhibited recurrent whole-chromosome DNA copy number alterations, the specific chromosomes altered by gain or loss were different in each model. These data demonstrate that GEMM tumors exhibit relatively simple somatic genotypes compared with human cancers of a similar type, making these autochthonous model systems useful for additive engineering approaches to assess the potential of novel mutations on tumorigenesis, cancer progression, and drug sensitivity. PMID:27702896

  8. Mutational landscape of EGFR-, MYC-, and Kras-driven genetically engineered mouse models of lung adenocarcinoma.

    PubMed

    McFadden, David G; Politi, Katerina; Bhutkar, Arjun; Chen, Frances K; Song, Xiaoling; Pirun, Mono; Santiago, Philip M; Kim-Kiselak, Caroline; Platt, James T; Lee, Emily; Hodges, Emily; Rosebrock, Adam P; Bronson, Roderick T; Socci, Nicholas D; Hannon, Gregory J; Jacks, Tyler; Varmus, Harold

    2016-10-18

    Genetically engineered mouse models (GEMMs) of cancer are increasingly being used to assess putative driver mutations identified by large-scale sequencing of human cancer genomes. To accurately interpret experiments that introduce additional mutations, an understanding of the somatic genetic profile and evolution of GEMM tumors is necessary. Here, we performed whole-exome sequencing of tumors from three GEMMs of lung adenocarcinoma driven by mutant epidermal growth factor receptor (EGFR), mutant Kirsten rat sarcoma viral oncogene homolog (Kras), or overexpression of MYC proto-oncogene. Tumors from EGFR- and Kras-driven models exhibited, respectively, 0.02 and 0.07 nonsynonymous mutations per megabase, a dramatically lower average mutational frequency than observed in human lung adenocarcinomas. Tumors from models driven by strong cancer drivers (mutant EGFR and Kras) harbored few mutations in known cancer genes, whereas tumors driven by MYC, a weaker initiating oncogene in the murine lung, acquired recurrent clonal oncogenic Kras mutations. In addition, although EGFR- and Kras-driven models both exhibited recurrent whole-chromosome DNA copy number alterations, the specific chromosomes altered by gain or loss were different in each model. These data demonstrate that GEMM tumors exhibit relatively simple somatic genotypes compared with human cancers of a similar type, making these autochthonous model systems useful for additive engineering approaches to assess the potential of novel mutations on tumorigenesis, cancer progression, and drug sensitivity.

  9. The Effect of Ionospheric Models on Electromagnetic Pulse Locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.; Triplett, Laurie A.

    2014-07-01

    Locations of electromagnetic pulses (EMPs) determined by time-of-arrival (TOA) often have outliers with significantly larger errors than expected. In the past, these errors were thought to arise from high order terms in the Appleton-Hartree equation. We simulated 1000 events randomly spread around the Earth into a constellation of 22 GPS satellites. We used four different ionospheres: “simple” where the time delay goes as the inverse of the frequency-squared, “full Appleton-Hartree”, the “BobRD integrals” and a full raytracing code. The simple and full Appleton-Hartree ionospheres do not show outliers whereas the BobRD and raytracing do. This strongly suggests that the cause of the outliers is not additional terms in the Appleton-Hartree equation, but rather the additional path length due to refraction. A method to fix the outliers is suggested based on fitting a time to the delays calculated at the 5 GPS frequencies with BobRD and simple ionospheres. The difference in time is used as a correction to the TOAs.
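    Under the "simple" ionosphere above, the group delay scales as the inverse of the frequency squared, t(f) = t0 + K/f^2, so arrival times at two frequencies suffice to eliminate the ionospheric term. A sketch with synthetic numbers (t0 and K are invented for the round-trip check; only the GPS L1/L2 carrier frequencies are real):

```python
def iono_free_toa(t1, f1, t2, f2):
    """Recover t0 from TOAs at two frequencies, assuming t(f) = t0 + K/f**2."""
    k = (t1 - t2) / (1.0 / f1**2 - 1.0 / f2**2)
    return t1 - k / f1**2

# Synthetic check: build delayed TOAs from a known t0 and K, then invert.
t0_true, k_true = 0.070, 5.0e12      # seconds, s*Hz^2 (hypothetical values)
f1, f2 = 1.57542e9, 1.22760e9        # GPS L1/L2 carrier frequencies, Hz
t1 = t0_true + k_true / f1**2
t2 = t0_true + k_true / f2**2
t0_est = iono_free_toa(t1, f1, t2, f2)
```

    The abstract's point is precisely that this inversion is exact for the 1/f^2 model, so any residual outliers must come from effects outside it, such as the refraction path-length term.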

  10. Making sense of enthalpy of vaporization trends for ionic liquids: new experimental and simulation data show a simple linear relationship and help reconcile previous data.

    PubMed

    Verevkin, Sergey P; Zaitsau, Dzmitry H; Emel'yanenko, Vladimir N; Yermalayeu, Andrei V; Schick, Christoph; Liu, Hongjun; Maginn, Edward J; Bulut, Safak; Krossing, Ingo; Kalb, Roland

    2013-05-30

    Vaporization enthalpy of an ionic liquid (IL) is a key physical property for applications of ILs as thermofluids and also is useful in developing liquid state theories and validating intermolecular potential functions used in molecular modeling of these liquids. Compilation of the data for a homologous series of 1-alkyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([C(n)mim][NTf2]) ILs has revealed an embarrassing disarray of literature results. New experimental data, based on the concurring results from quartz crystal microbalance, thermogravimetric analyses, and molecular dynamics simulation have revealed a clear linear dependence of IL vaporization enthalpies on the chain length of the alkyl group on the cation. Ambiguity of the procedure for extrapolation of vaporization enthalpies to the reference temperature 298 K was found to be a major source of the discrepancies among previous data sets. Two simple methods for temperature adjustment of vaporization enthalpies have been suggested. Resulting vaporization enthalpies obey group additivity, although the values of the additivity parameters for ILs are different from those for molecular compounds.
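    The two relationships in the abstract, a linear dependence of vaporization enthalpy on alkyl chain length and a heat-capacity-based adjustment of measurements to the 298 K reference temperature, can be sketched as follows. All coefficients here are illustrative, not the paper's fitted values:

```python
def dhvap_298(n, a=118.0, b=4.0):
    """Linear chain-length trend for [C(n)mim][NTf2], kJ/mol (a, b hypothetical)."""
    return a + b * n

def adjust_to_298(dh_at_T, T, dcp=-0.100):
    """Kirchhoff-style adjustment; dcp = Cp(gas) - Cp(liquid), kJ/(mol*K)."""
    return dh_at_T + dcp * (298.15 - T)

dh_meas = 120.0                      # kJ/mol, measured at 480 K (hypothetical)
dh_ref = adjust_to_298(dh_meas, 480.0)
```

    Because dcp is negative, a high-temperature measurement adjusts upward at 298 K; disagreement over the magnitude of that adjustment is the abstract's proposed source of the literature scatter.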

  11. National Freight Demand Modeling - Bridging the Gap between Freight Flow Statistics and U.S. Economic Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, Shih-Miao; Hwang, Ho-Ling

    2007-01-01

    This paper describes the development of national freight demand models for 27 industry sectors covered by the 2002 Commodity Flow Survey. It postulates that the national freight demands are consistent with U.S. business patterns. Furthermore, the study hypothesizes that the flow of goods, which make up the national production processes of industries, is coherent with the information described in the 2002 Annual Input-Output Accounts developed by the Bureau of Economic Analysis. The model estimation framework hinges largely on the assumption that a relatively simple relationship exists between freight production/consumption and business patterns for each industry defined by the three-digit North American Industry Classification System (NAICS) industry codes. The national freight demand model for each selected industry sector consists of two models: a freight generation model and a freight attraction model. Thus, a total of 54 simple regression models were estimated under this study. Preliminary results indicated promising freight generation and freight attraction models. Among all models, only four had an R2 value lower than 0.70. With additional modeling efforts, these freight demand models could be enhanced to allow transportation analysts to assess regional economic impacts associated with temporary loss of transportation services on U.S. transportation network infrastructures. Using such freight demand models and available U.S. business forecasts, future national freight demands could be forecasted within certain degrees of accuracy. These freight demand models could also enable transportation analysts to further disaggregate the CFS state-level origin-destination tables to county or zip code level.

  12. Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

    NASA Astrophysics Data System (ADS)

    La Vigna, Francesco; Hill, Mary C.; Rossetto, Rudy; Mazza, Roberto

    2016-09-01

    With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20 % less pumped water, but would require installing newly positioned wells and cooperation between mine owners.
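    One computationally frugal measure used in this style of model analysis is the composite scaled sensitivity (CSS) of Hill and Tiedeman, which ranks parameter importance from a handful of model runs. A toy sketch with an invented two-parameter "model" and forward finite-difference derivatives (the real application would use the groundwater model's simulated heads and observation weights):

```python
import math

def model(params):
    k, r = params                        # e.g. a conductivity and a recharge rate
    return [r / k, 2.0 * r / k, r]       # three synthetic "observations"

def css(model, params, sigmas, rel_step=1e-6):
    """Composite scaled sensitivity of each parameter, via finite differences."""
    base = model(params)
    n_obs = len(base)
    out = []
    for j, p in enumerate(params):
        dp = rel_step * p
        pert = list(params)
        pert[j] = p + dp
        yj = model(pert)
        # dimensionless scaled sensitivities: (dy/dp) * p / sigma
        ss = [(yj[i] - base[i]) / dp * p / sigmas[i] for i in range(n_obs)]
        out.append(math.sqrt(sum(s * s for s in ss) / n_obs))
    return out

importance = css(model, [10.0, 2.0], sigmas=[0.1, 0.1, 0.1])
```

    Only one extra model run per parameter is needed, which is what makes repeated analyses of this kind affordable during calibration.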

  13. A model for the influence of pressure on the bulk modulus and the influence of temperature on the solidification pressure for liquid lubricants

    NASA Technical Reports Server (NTRS)

    Jacobson, B. O.; Vinet, P.

    1986-01-01

    Two pressure chambers, for compression experiments with liquids from zero to 2.2 GPa pressure, are described. The experimentally measured compressions are then compared to theoretical values given by an isothermal model of equation of state recently introduced for solids. The model describes the pressure and bulk modulus as a function of compression for different types of lubricants with very high accuracy up to the pressure limit of the high pressure chamber used (2.2 GPa). In addition, the influence of temperature on static solidification pressure was found to be a simple function of the thermal expansion of the fluid.
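    The isothermal equation of state referred to here is the Vinet universal EOS, P = 3*K0*(1-x)/x^2 * exp[(3/2)*(K0'-1)*(1-x)] with x = (V/V0)^(1/3), where K0 is the zero-pressure bulk modulus and K0' its pressure derivative. A minimal sketch; the K0 and K0' values are illustrative, not fitted to the paper's lubricant data:

```python
import math

def vinet_pressure(V, V0, K0, K0p):
    """Vinet universal equation of state; returns pressure in the units of K0."""
    x = (V / V0) ** (1.0 / 3.0)
    return 3.0 * K0 * (1.0 - x) / x**2 * math.exp(1.5 * (K0p - 1.0) * (1.0 - x))

V0, K0, K0p = 1.0, 1.5, 10.0          # V0 normalized; K0 in GPa (hypothetical)
p_compressed = vinet_pressure(0.85, V0, K0, K0p)
```

    By construction P(V0) = 0 and the slope at V0 reproduces K0, which is why the form can track both pressure and bulk modulus over the chamber's full range from only two fitted constants.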

  14. Current Status and Challenges of Atmospheric Data Assimilation

    NASA Astrophysics Data System (ADS)

    Atlas, R. M.; Gelaro, R.

    2016-12-01

    The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure, and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.
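    Most of the assimilation methodologies surveyed share one core step: the analysis update x_a = x_b + K(y - Hx_b), which blends a model background with observations weighted by their error statistics. A scalar sketch with invented numbers (one state variable, one direct observation):

```python
def analysis(x_b, var_b, y, var_o):
    """Optimal scalar blend of background x_b and observation y."""
    k = var_b / (var_b + var_o)          # Kalman gain
    x_a = x_b + k * (y - x_b)            # analysis state
    var_a = (1.0 - k) * var_b            # analysis error variance
    return x_a, var_a

# Background temperature 271.0 K (variance 4), observation 273.0 K (variance 1).
x_a, var_a = analysis(271.0, 4.0, 273.0, 1.0)
```

    The "difficulty of estimating error statistics" mentioned above is exactly the difficulty of supplying good var_b and var_o (in the operational setting, full covariance matrices) to this update.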

  15. Ionic micelles and aromatic additives: a closer look at the molecular packing parameter.

    PubMed

    Lutz-Bueno, Viviane; Isabettini, Stéphane; Walker, Franziska; Kuster, Simon; Liebi, Marianne; Fischer, Peter

    2017-08-16

    Wormlike micellar aggregates formed from the mixture of ionic surfactants with aromatic additives result in solutions with impressive viscoelastic properties. These properties are of high interest for numerous industrial applications, and such solutions are often used as model systems for soft matter physics. However, robust and simple models for tailoring the viscoelastic response of the solution based on the molecular structure of the employed additive are required to fully exploit the potential of these systems. We address this shortcoming with a modified packing-parameter-based model, considering the additive-surfactant pair. The role of charge neutralization on anisotropic micellar growth was investigated with derivatives of sodium salicylate. The impact of the additives on the morphology of the micellar aggregates is explained from the molecular level to the macroscopic viscoelasticity. Changes in the micelle's volume, headgroup area and additive structure are explored to redefine the packing parameter. Uncharged additives penetrated deeper into the hydrophobic region of the micelle, whilst charged additives remained trapped in the polar region, as revealed by a combination of 1H-NMR, SAXS and rheological measurements. A deeper penetration of the additives densified the hydrophobic core of the micelle and induced anisotropic growth by increasing the effective volume of the additive-surfactant pair. This phenomenon largely influenced the viscosity of the solutions. Partially penetrating additives reduced the electrostatic repulsions between surfactant headgroups and neighboring micelles. The resulting increased network density governed the elasticity of the solutions. Considering a packing parameter composed of the additive-surfactant pair proved to be a facile means of engineering the viscoelastic response of surfactant solutions. The self-assembly of the wormlike micellar aggregates could be tailored to desired morphologies resulting in a specific and predictable rheological response.
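    The molecular packing parameter discussed above is p = v/(a0*l), with hydrophobic volume v, headgroup area a0, and chain length l; the paper's redefinition folds the additive into the effective volume of the additive-surfactant pair. A sketch with the standard Israelachvili shape thresholds and illustrative, roughly Tanford-scale geometry values (not the paper's measured data):

```python
def packing_parameter(v_nm3, a0_nm2, l_nm):
    """p = v / (a0 * l) for a surfactant (or additive-surfactant pair)."""
    return v_nm3 / (a0_nm2 * l_nm)

def aggregate_shape(p):
    """Classic packing-parameter shape map."""
    if p <= 1.0 / 3.0:
        return "spherical micelle"
    if p <= 0.5:
        return "wormlike/cylindrical micelle"
    if p <= 1.0:
        return "bilayer/vesicle"
    return "inverted phase"

p_plain = packing_parameter(0.46, 0.64, 2.2)         # surfactant alone
p_pair = packing_parameter(0.46 + 0.12, 0.64, 2.2)   # + deeply penetrating additive
```

    With these (hypothetical) numbers, adding the penetrating additive pushes p across the 1/3 threshold, i.e. from spheres to worms, which is the qualitative mechanism for the anisotropic growth described in the abstract.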

  16. Interacting with an artificial partner: modeling the role of emotional aspects.

    PubMed

    Cattinelli, Isabella; Goldwurm, Massimiliano; Borghese, N Alberto

    2008-12-01

    In this paper we introduce a simple model based on probabilistic finite state automata to describe an emotional interaction between a robot and a human user, or between simulated agents. Based on the agent's personality, attitude, and nature, and on the emotional inputs it receives, the model will determine the next emotional state displayed by the agent itself. The probabilistic and time-varying nature of the model yields rich and dynamic interactions, and an autonomous adaptation to the interlocutor. In addition, a reinforcement learning technique is applied to have one agent drive its partner's behavior toward desired states. The model may also be used as a tool for behavior analysis, by extracting high probability patterns of interaction and by resorting to the ergodic properties of Markov chains.
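    The probabilistic finite state automaton idea can be sketched as a transition table conditioned on the current emotional state and the emotional input received, sampled to pick the next displayed state. The states, inputs, and probabilities below are invented for illustration; the paper's model additionally modulates these probabilities by personality, attitude, and nature:

```python
import random

STATES = ["happy", "neutral", "sad"]

# P(next_state | current_state, input); each row sums to 1.
TRANSITIONS = {
    ("happy", "praise"):   {"happy": 0.9, "neutral": 0.1, "sad": 0.0},
    ("happy", "insult"):   {"happy": 0.2, "neutral": 0.6, "sad": 0.2},
    ("neutral", "praise"): {"happy": 0.8, "neutral": 0.2, "sad": 0.0},
    ("neutral", "insult"): {"happy": 0.0, "neutral": 0.3, "sad": 0.7},
    ("sad", "praise"):     {"happy": 0.3, "neutral": 0.6, "sad": 0.1},
    ("sad", "insult"):     {"happy": 0.0, "neutral": 0.2, "sad": 0.8},
}

def step(state, stimulus, rng):
    """Sample the agent's next displayed emotion."""
    dist = TRANSITIONS[(state, stimulus)]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

rng = random.Random(0)
state = "neutral"
history = [state]
for stimulus in ["praise", "insult", "praise"]:
    state = step(state, stimulus, rng)
    history.append(state)
```

    Because the transitions define a Markov chain over emotional states, the ergodic analysis mentioned at the end of the abstract (high-probability interaction patterns, stationary behavior) applies directly to this table.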

  17. Natural electroweak breaking from a mirror symmetry.

    PubMed

    Chacko, Z; Goh, Hock-Seng; Harnik, Roni

    2006-06-16

    We present "twin Higgs models," simple realizations of the Higgs boson as a pseudo Goldstone boson that protect the weak scale from radiative corrections up to scales of order 5-10 TeV. In the ultraviolet these theories have a discrete symmetry which interchanges each standard model particle with a corresponding particle which transforms under a twin or a mirror standard model gauge group. In addition, the Higgs sector respects an approximate global symmetry. When this global symmetry is broken, the discrete symmetry tightly constrains the form of corrections to the pseudo Goldstone Higgs potential, allowing natural electroweak symmetry breaking. Precision electroweak constraints are satisfied by construction. These models demonstrate that, contrary to the conventional wisdom, stabilizing the weak scale does not require new light particles charged under the standard model gauge groups.

  18. Autonomously Self-Adhesive Hydrogels as Building Blocks for Additive Manufacturing.

    PubMed

    Deng, Xudong; Attalla, Rana; Sadowski, Lukas P; Chen, Mengsu; Majcher, Michael J; Urosev, Ivan; Yin, Da-Chuan; Selvaganapathy, P Ravi; Filipe, Carlos D M; Hoare, Todd

    2018-01-08

    We report a simple method of preparing autonomous and rapid self-adhesive hydrogels and their use as building blocks for additive manufacturing of functional tissue scaffolds. Dynamic cross-linking between 2-aminophenylboronic acid-functionalized hyaluronic acid and poly(vinyl alcohol) yields hydrogels that recover their mechanical integrity within 1 min after cutting or shear under both neutral and acidic pH conditions. Incorporation of this hydrogel in an interpenetrating calcium-alginate network results in an interfacially stiffer but still rapidly self-adhesive hydrogel that can be assembled into hollow perfusion channels by simple contact additive manufacturing within minutes. Such channels withstand fluid perfusion while retaining their dimensions and support endothelial cell growth and proliferation, providing a simple and modular route to produce customized cell scaffolds.

  19. Simple stochastic order-book model of swarm behavior in continuous double auction

    NASA Astrophysics Data System (ADS)

    Ichiki, Shingo; Nishinari, Katsuhiro

    2015-02-01

    In this study, we present a simple stochastic order-book model for investors' swarm behaviors seen in the continuous double auction mechanism, which is employed by major global exchanges. Data obtained from the model exhibit the characteristic 'fat tail' when the investors' swarm behaviors are incorporated. The model captures two swarm behaviors: investors who follow a trend in the historical price movement, and investors who send orders that contradict such a trend. To quantify the influence of these behaviors, we analyzed, from price data derived from our simulations, the price movement range, that is, how far the price moves while it moves continuously in a single direction. The cumulative frequency distribution of this price movement range differed depending on the type of swarm behavior. In particular, for the model of investors who followed a trend in the historical price movement, the tail of the cumulative frequency distribution obeyed a power law. In addition, we analyzed the shape of this tail. The result demonstrated that one of the reasons price trends persist is that orders temporarily swarm on the order book in accordance with past price trends.
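    The mechanism described, trend followers lengthening runs of single-direction price movement, can be illustrated with a deliberately tiny toy model (one-tick price moves rather than the paper's order book): the next move repeats the last one with a persistence probability, and the run lengths play the role of the price movement range.

```python
import random

def run_lengths(persist_prob, n_steps, seed=0):
    """Lengths of single-direction runs when each move repeats with persist_prob."""
    rng = random.Random(seed)
    runs, current = [], 1
    for _ in range(n_steps):
        if rng.random() < persist_prob:
            current += 1              # trend followers repeat the last move
        else:
            runs.append(current)      # contrarians reverse the direction
            current = 1
    runs.append(current)
    return runs

neutral = run_lengths(0.50, 50_000)   # no herding
herding = run_lengths(0.75, 50_000)   # trend-following dominates
mean_neutral = sum(neutral) / len(neutral)
mean_herding = sum(herding) / len(herding)
```

    With independent moves the runs are geometric with mean 1/(1 - p); raising the persistence probability fattens the run-length distribution, the same direction of effect (though not the same power-law form) as the order-book swarm behavior in the paper.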

  20. Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code.

    PubMed

    Möller, Jörgen; Davis, Sarah; Stevenson, Matt; Caro, J Jaime

    2017-10-01

    Modeling is an essential tool for health technology assessment, and various techniques for conceptualizing and implementing such models have been described. Recently, a new method has been proposed, the discretely integrated condition event (DICE) simulation, which enables frequently employed approaches to be specified using a common, simple structure that can be entirely contained and executed within widely available spreadsheet software. To assess if a DICE simulation provides equivalent results to an existing discrete event simulation, a comparison was undertaken. A model of osteoporosis and its management programmed entirely in Visual Basic for Applications and made public by the National Institute for Health and Care Excellence (NICE) Decision Support Unit was downloaded and used to guide construction of its DICE version in Microsoft Excel®. The DICE model was then run using the same inputs and settings, and the results were compared. The DICE version produced results that are nearly identical to the original ones, with differences that would not affect the decision direction of the incremental cost-effectiveness ratios (<1% discrepancy), despite the stochastic nature of the models. The main limitation of the simple DICE version is its slow execution speed. DICE simulation did not alter the results and, thus, should provide a valid way to design and implement decision-analytic models without requiring specialized software or custom programming. Additional efforts need to be made to speed up execution.
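    The discrete event simulation being reproduced here boils down, in any implementation, to a time-ordered event queue processed one event at a time. A toy sketch of that core loop; the patient events below are invented for illustration and have nothing to do with the actual NICE osteoporosis model:

```python
import heapq

def simulate(n_patients, fracture_time, death_time):
    """Minimal discrete event simulation: process (time, id, event) in time order."""
    events = []
    for pid in range(n_patients):
        heapq.heappush(events, (fracture_time * (pid + 1), pid, "fracture"))
    fractures = 0
    log = []
    while events:
        t, pid, name = heapq.heappop(events)
        log.append((t, pid, name))
        if name == "fracture":
            fractures += 1
            # a fracture schedules a later event for the same patient
            heapq.heappush(events, (t + death_time, pid, "death"))
    return fractures, log

fractures, log = simulate(3, fracture_time=1.0, death_time=10.0)
```

    The DICE reformulation expresses the same conditions and events as spreadsheet rows rather than code, which is the source of both its accessibility and its slower execution.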

  1. Explanatory Models for Psychiatric Illness

    PubMed Central

    Kendler, Kenneth S.

    2009-01-01

    How can we best develop explanatory models for psychiatric disorders? Because causal factors have an impact on psychiatric illness both at micro levels and macro levels, both within and outside of the individual, and involving processes best understood from biological, psychological, and sociocultural perspectives, traditional models of science that strive for single broadly applicable explanatory laws are ill suited for our field. Such models are based on the incorrect assumption that psychiatric illnesses can be understood from a single perspective. A more appropriate scientific model for psychiatry emphasizes the understanding of mechanisms, an approach that fits naturally with a multicausal framework and provides a realistic paradigm for scientific progress, that is, understanding mechanisms through decomposition and reassembly. Simple subunits of complicated mechanisms can be usefully studied in isolation. Reassembling these constituent parts into a functioning whole, which is straightforward for simple additive mechanisms, will be far more challenging in psychiatry where causal networks contain multiple nonlinear interactions and causal loops. Our field has long struggled with the interrelationship between biological and psychological explanatory perspectives. Building from the seminal work of the neuronal modeler and philosopher David Marr, the author suggests that biology will implement but not replace psychology within our explanatory systems. The iterative process of interactions between biology and psychology needed to achieve this implementation will deepen our understanding of both classes of processes. PMID:18483135

  2. Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium

    NASA Astrophysics Data System (ADS)

    González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César

    2018-01-01

    This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are in the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
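    The θ-method used for the discretized model can be shown on the scalar test equation du/dt = -λu rather than the dual-porosity system itself: θ = 0 gives the explicit scheme, θ = 1 the fully implicit scheme, and θ = 1/2 Crank-Nicolson. A minimal sketch:

```python
import math

def theta_step(u, lam, dt, theta):
    # (u_next - u)/dt = -lam * (theta*u_next + (1 - theta)*u), solved for u_next
    return u * (1.0 - (1.0 - theta) * lam * dt) / (1.0 + theta * lam * dt)

def integrate(u0, lam, dt, n, theta):
    u = u0
    for _ in range(n):
        u = theta_step(u, lam, dt, theta)
    return u

exact = math.exp(-1.0)                     # u(1) for u' = -u, u(0) = 1
cn = integrate(1.0, 1.0, 0.01, 100, 0.5)   # Crank-Nicolson, second order
imp = integrate(1.0, 1.0, 0.01, 100, 1.0)  # fully implicit, first order
```

    Crank-Nicolson's second-order accuracy comes with the weakly damped oscillations the abstract observes; the fully implicit scheme damps them at the cost of first-order accuracy, which is one reason the paper compares all three.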

  3. Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting.

    PubMed

    Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M

    2014-06-01

    Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Solar wind models must be downscaled in order to drive magnetospheric models. Ensemble downscaling is more effective than deterministic downscaling. The magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
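    The downscaling recipe described, smooth the observations to mimic model output and then restore small-scale structure with random noise, can be sketched as follows. The data are synthetic, and resampling the smoothing residuals is a crude stand-in for "noise with the observed spectral characteristics":

```python
import random

def boxcar(series, window):
    """Centered moving-average smoother (stand-in for the 8 h filter)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

rng = random.Random(1)
obs = [400.0 + 50.0 * rng.gauss(0, 1) for _ in range(512)]  # synthetic "wind speed"
smooth = boxcar(obs, window=8)                              # model-like field
residuals = [o - s for o, s in zip(obs, smooth)]            # small-scale "noise"
ensemble = [[s + rng.choice(residuals) for s in smooth] for _ in range(10)]
```

    Each ensemble member is one plausible downscaled driver; running the magnetospheric model over the ensemble yields both a best estimate and an uncertainty spread, as the abstract describes.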

  4. Simple Tidal Prism Models Revisited

    NASA Astrophysics Data System (ADS)

    Luketina, D.

    1998-01-01

    Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most textbooks on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which cannot.
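
    For reference, the classical textbook estimate that such models start from can be written as a one-liner. The return-flow form below is one common formulation of the uncorrected classical model, not the revised model derived in the paper:

```python
def flushing_time(low_water_volume, tidal_prism, tidal_period_h=12.42,
                  return_flow=0.0):
    # Classical tidal prism flushing time for a well-mixed estuary:
    #   T_f = (V + P) * T / ((1 - b) * P)
    # V: low-water volume, P: tidal prism, T: tidal period,
    # b: return-flow factor (fraction of ebb water re-entering on the flood)
    V, P, T, b = low_water_volume, tidal_prism, tidal_period_h, return_flow
    return (V + P) * T / ((1 - b) * P)

# e.g. V = 1e8 m^3, P = 2e7 m^3, semidiurnal tide, no return flow
t_f = flushing_time(1.0e8, 2.0e7)
```

    With these illustrative numbers the estuary flushes in about six tidal cycles; the paper's point is that the assumptions behind this formula deserve closer scrutiny.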

  5. Direct G-code manipulation for 3D material weaving

    NASA Astrophysics Data System (ADS)

    Koda, S.; Tanaka, H.

    2017-04-01

    The process of conventional 3D printing begins by first building a 3D model, then converting the model to G-code via slicer software, feeding the G-code to the printer, and finally starting the print. The simplest and most popular 3D printing technique is Fused Deposition Modeling. In this method, however, the path the printer head can take is restricted by the G-code, so printed 3D models with complex patterns have structural errors such as holes or gaps between the printed material lines. In addition, the structural density and the material's position within the printed model are difficult to control. We have implemented a G-code editing tool, Fabrix, for making more precise and functional printed models with both single and multiple materials. Models with different stiffness are fabricated by controlling the printing density of the filament materials with our method. In addition, multi-material 3D printing has the potential to expand the accessible physical properties through material combination and the corresponding G-code editing. These results point to a new printing method enabling more creative and functional 3D printing techniques.
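
    To give a flavor of what direct G-code manipulation involves, the sketch below rescales the extrusion (E) word on G1 moves, one of the simplest ways to alter local printing density. This is a hypothetical illustration, not the Fabrix implementation; a real tool must also handle relative versus absolute extrusion modes, retraction moves, and multi-tool commands.

```python
import re

def scale_extrusion(gcode_lines, factor):
    # Rescale the extrusion amount on G1 (printing) moves; other lines
    # (travel moves, temperature commands, comments) pass through untouched.
    out = []
    for line in gcode_lines:
        m = re.search(r"\bE(-?\d+\.?\d*)", line)
        if line.startswith("G1") and m:
            e = float(m.group(1)) * factor
            line = line[:m.start()] + "E" + format(e, ".5f") + line[m.end():]
        out.append(line)
    return out

sparse = scale_extrusion(["G1 X10 Y0 E1.00000", "G0 X0 Y0"], 0.5)
```

    Applying such an edit selectively, region by region, is what allows the printing density, and hence the stiffness, of different parts of a model to be controlled.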

  6. Whole-body Motion Planning with Simple Dynamics and Full Kinematics

    DTIC Science & Technology

    2014-08-01

    optimizations can take an excessively long time to run, and may also suffer from local minima. Thus, this approach can become intractable for complex robots...motions like jumping and climbing. Additionally, the point-mass model suggests that the centroidal angular momentum is zero, which is not valid for motions...use in the DARPA Robotics Challenge. A. Jumping Our first example is to command the robot to jump off the ground, as illustrated in Fig.4. We assign

  7. Nonlinear dynamics in cardiac conduction

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Saxberg, B. E.; Cohen, R. J.

    1988-01-01

    Electrical conduction in the heart shows many phenomena familiar from nonlinear dynamics. Among these phenomena are multiple basins of attraction, phase locking, and perhaps period-doubling bifurcations and chaos. We describe a simple cellular-automaton model of electrical conduction which simulates normal conduction patterns in the heart as well as a wide range of disturbances of heart rhythm. In addition, we review the application of percolation theory to the analysis of the development of complex, self-sustaining conduction patterns.

  8. Cohesion-decohesion asymmetry in geckos

    NASA Astrophysics Data System (ADS)

    Puglisi, G.; Truskinovsky, L.

    2013-03-01

    Lizards and insects can strongly attach to walls and then detach applying negligible additional forces. We propose a simple mechanical model of this phenomenon which implies active muscle control. We show that the detachment force may depend not only on the properties of the adhesive units, but also on the elastic interaction among these units. By regulating the scale of such cooperative interaction, the organism can actively switch between two modes of adhesion: delocalized (pull off) and localized (peeling).

  9. A LFER analysis of the singlet-triplet gap in a series of sixty-six carbenes

    NASA Astrophysics Data System (ADS)

    Alkorta, Ibon; Elguero, José

    2018-01-01

    Ab initio G4 calculations have been performed to investigate the singlet-triplet gap in a series of 66 simple carbenes. Energies and geometries were analyzed. An additive model that includes four interaction terms has been explored. An abnormal behavior of the cyano group has been found. The 13C absolute shieldings of the carbenic carbon atom were calculated at the GIAO/B3LYP/6-311++G(d,p) level.

  10. Models of Government Blogging: Design Tradeoffs in Civic Engagement

    NASA Astrophysics Data System (ADS)

    Kavanaugh, Andrea; Kim, Hyung Nam; Pérez-Quiñones, Manuel; Isenhour, Philip

    Some local government officials and staff have been experimenting with emerging technologies as part of a broad suite of media used for informing and communicating with their constituencies. In addition to the typical government website and, for some, email exchange with citizens, some town and municipal governments are using blogs, video streaming, podcasting, and Really Simple Syndication (RSS) to reach constituencies with updates and, in some cases, interaction and discussion between citizens and government.

  11. Interactive multimedia demonstrations for teaching fluid dynamics

    NASA Astrophysics Data System (ADS)

    Rowley, Clarence

    2008-11-01

    We present a number of multimedia tools, developed by undergraduates, for teaching concepts from introductory fluid mechanics. Short movies are presented, illustrating concepts such as hydrostatic pressure, the no-slip condition, boundary layers, and surface tension. In addition, we present a number of interactive demonstrations, which allow the user to interact with a simple model of a given concept via a web browser, and compare with experimental data. In collaboration with Mack Pasqual and Lindsey Brown, Princeton University.

  12. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater.

  13. Hybrid semiconductor nanomagnetoelectronic devices

    NASA Astrophysics Data System (ADS)

    Bae, Jong Uk

    2007-12-01

    The subject of this dissertation is the exploration of a new class of hybrid semiconductor nanomagnetoelectronic devices. In these studies, single-domain nanomagnets are used as the gate in a transistor structure, and the spatially non-uniform magnetic fields that they generate provide an additional means to modulate the channel conductance. A quantum wire etched in a high-mobility GaAs/AlGaAs quantum well serves as the channel of this device and the current flow through it is modulated by a high-aspect-ratio Co nanomagnet. The conductance of this device exhibits clear hysteresis in a magnetic field, which is significantly enhanced when the nanomagnet is used as a gate to form a local tunnel barrier in the semiconductor channel. A simple theoretical model, which treats the tunnel barrier as a simple harmonic saddle, is able to account for the experimentally observed behavior. Further improvements in the tunneling magneto-resistance of this device should be possible in the future by optimizing the gate and channel geometries. In addition to these investigations, we have also explored the hysteretic magnetoresistance of devices in which the tunnel barrier is absent and the behavior is instead dominated by the properties of the magnetic barrier alone. We show experimentally how quantum corrections to the conductance of the quantum wire compete against the magneto-transport effects induced by the non-uniform magnetic field.

  14. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models, and also encourage the development of simple models.
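
    The evaluation protocol, a net-block linear score assessed by leave-one-out cross-validation, can be sketched as follows. The compound data, the exact form of the score, and the threshold-fitting rule here are all invented for illustration and are not the paper's datasets or calibration.

```python
# Hypothetical per-compound ion-channel block fractions (hERG, Ca, Na)
# at a fixed concentration, with a binary torsade-risk label (1 = high).
data = [
    (0.70, 0.05, 0.05, 1), (0.60, 0.10, 0.00, 1), (0.80, 0.05, 0.10, 1),
    (0.20, 0.30, 0.10, 0), (0.10, 0.15, 0.05, 0), (0.30, 0.25, 0.05, 0),
]

def b_net(herg, ca, na):
    # Bnet-style net-block score: outward (hERG) block offset by
    # inward-current (Ca, Na) block
    return herg - (ca + na)

def best_threshold(scores, labels):
    # "training": pick the midpoint cut maximizing training accuracy
    s = sorted(scores)
    cands = [(a + b) / 2 for a, b in zip(s, s[1:])]
    acc = lambda t: sum((sc > t) == bool(y) for sc, y in zip(scores, labels))
    return max(cands, key=acc)

def loocv_accuracy(data):
    # leave-one-out cross-validation: refit on n-1 compounds, score the
    # held-out compound, repeat for every compound
    correct = 0
    for i, (*chan, label) in enumerate(data):
        train = [row for j, row in enumerate(data) if j != i]
        t = best_threshold([b_net(*r[:3]) for r in train],
                           [r[3] for r in train])
        correct += (b_net(*chan) > t) == bool(label)
    return correct / len(data)
```

    On this cleanly separable toy set the cross-validated accuracy is perfect; the interest in the real study is how such a simple score fares against biophysical models on noisy literature data.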

  15. 20 CFR 725.608 - Interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... simple annual interest, computed from the date on which the benefits were due. The interest shall be... payment of retroactive benefits, the beneficiary shall also be entitled to simple annual interest on such... entitled to simple annual interest computed from the date upon which the beneficiary's right to additional...
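
    The "simple annual interest" the regulation refers to is non-compounding interest accrued from the date benefits were due until payment. A minimal sketch, using a purely illustrative rate (the applicable rate is specified elsewhere in the regulations):

```python
from datetime import date

def simple_annual_interest(principal, annual_rate, due, paid):
    # simple interest, no compounding: I = P * r * (days overdue / 365)
    days = (paid - due).days
    return principal * annual_rate * days / 365

# e.g. $1,000 in benefits paid one (non-leap) year late at an
# illustrative 6% annual rate
interest = simple_annual_interest(1000.0, 0.06, date(2019, 1, 1),
                                  date(2020, 1, 1))
```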

  16. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    PubMed

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

    High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in the multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved, by 70% on average, from multivariate pedigree and genomic models when including secondary traits in both training and test populations. Additionally, (i) predictive abilities varied only slightly between the MT, RR, and SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was best under severe drought, and (iii) the RR model was slightly better than the SR and MT models under the drought environment. Copyright © 2017 Crop Science Society of America.

  17. Quantum-like dynamics applied to cognition: a consideration of available options

    NASA Astrophysics Data System (ADS)

    Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.

    2017-10-01

    Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  18. Pyrotechnic modeling for the NSI and pin puller

    NASA Technical Reports Server (NTRS)

    Powers, Joseph M.; Gonthier, Keith A.

    1993-01-01

    A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.

  19. Disease-induced mortality in density-dependent discrete-time S-I-S epidemic models.

    PubMed

    Franke, John E; Yakubu, Abdul-Aziz

    2008-12-01

    The dynamics of simple discrete-time epidemic models without disease-induced mortality are typically characterized by global transcritical bifurcation. We prove that in corresponding models with disease-induced mortality a tiny number of infectious individuals can drive an otherwise persistent population to extinction. Our model with disease-induced mortality supports multiple attractors. In addition, we use a Ricker recruitment function in an SIS model and obtain a three-component discrete Hopf (Neimark-Sacker) cycle attractor coexisting with a fixed-point attractor. The basin boundaries of the coexisting attractors are fractal in nature, and the example exhibits sensitive dependence of the long-term disease dynamics on initial conditions. Furthermore, we show that in contrast to corresponding models without disease-induced mortality, the disease-free state dynamics do not drive the disease dynamics.
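
    A minimal discrete-time SIS step with Ricker recruitment and disease-induced mortality can be sketched as below. The update equations and all parameter values are illustrative stand-ins for this class of model, not the exact system analyzed in the paper.

```python
import math

def step(S, I, r=1.5, k=0.001, sigma=0.8, beta=3.0, gamma=0.3, d=0.4):
    # One generation of a toy discrete-time SIS model:
    #   recruits: Ricker recruitment N * exp(r - k*N), all susceptible
    #   infection: per-susceptible probability p = 1 - exp(-beta * I / N)
    #   d is the extra (disease-induced) mortality of infecteds,
    #   sigma the baseline survival, gamma the recovery fraction
    N = S + I
    if N <= 0:
        return 0.0, 0.0
    recruits = N * math.exp(r - k * N)
    p = 1 - math.exp(-beta * I / N)
    surv_S = sigma * S
    surv_I = sigma * (1 - d) * I
    S_next = recruits + (1 - p) * surv_S + gamma * surv_I
    I_next = p * surv_S + (1 - gamma) * surv_I
    return S_next, I_next

# introduce a single infectious individual into a susceptible population
S, I = 500.0, 1.0
for _ in range(200):
    S, I = step(S, I)
```

    Iterating such maps from slightly different initial conditions is how the coexisting attractors and fractal basin boundaries described above are explored numerically.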

  20. Simulation and analysis of a model dinoflagellate predator-prey system

    NASA Astrophysics Data System (ADS)

    Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.

    2015-12-01

    This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
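
    The chemical-kinetics analogy mentioned above leads to mass-action rate equations of Lotka-Volterra form. A minimal forward-Euler integration, with invented rate constants, might look like:

```python
def simulate(prey0, pred0, steps=10000, dt=0.001,
             a=1.0, b=0.5, c=0.5, d=1.0):
    # Mass-action "reactions": prey growth (rate a), predation encounters
    # (rate b), conversion of encounters into predators (rate c), and
    # predator death (rate d), integrated with forward Euler.
    prey, pred = prey0, pred0
    for _ in range(steps):
        dprey = a * prey - b * prey * pred
        dpred = c * prey * pred - d * pred
        prey += dt * dprey
        pred += dt * dpred
    return prey, pred

prey, pred = simulate(2.0, 1.0)
```

    The paper's refinements (predation inefficiency, toxin-mediated immobilization) amount to modifying these rate terms; a production code would also use a higher-order integrator, since forward Euler slowly inflates the orbits of this system.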

  1. Theory of nematic order with aggregate dehydration for reversibly assembling proteins in concentrated solutions: Application to sickle-cell hemoglobin polymers

    NASA Astrophysics Data System (ADS)

    Hentschke, Reinhard; Herzfeld, Judith

    1991-06-01

    The reversible association of globular protein molecules in concentrated solution leads to highly polydisperse fibers, e.g., actin filaments, microtubules, and sickle-cell hemoglobin fibers. At high concentrations, excluded-volume interactions between the fibers lead to spontaneous alignment analogous to that in simple lyotropic liquid crystals. However, the phase behavior of reversibly associating proteins is complicated by the threefold coupling between the growth, alignment, and hydration of the fibers. In protein systems aggregates contain substantial solvent, which may cause them to swell or shrink, depending on osmotic stress. Extending previous work, we present a model for the equilibrium phase behavior of the above-noted protein systems in terms of simple intra- and interaggregate interactions, combined with equilibration of fiber-incorporated solvent with the bulk solvent. Specifically, we compare our model results to recent osmotic pressure data for sickle-cell hemoglobin and find excellent agreement. This comparison shows that particle interactions sufficient to cause alignment are also sufficient to squeeze significant amounts of solvent out of protein fibers. In addition, the model is in accord with findings from independent sedimentation and birefringence studies on sickle-cell hemoglobin.

  2. Production of organic compounds in plasmas: A comparison among electric sparks, laser-induced plasmas and UV light

    NASA Technical Reports Server (NTRS)

    Scattergood, T. W.; Mckay, C. P.; Borucki, W. J.; Giver, L. P.; Vanghyseghem, H.; Parris, J. E.; Miller, S. L.

    1991-01-01

    In order to study the production of organic compounds in plasmas (and shocks), various mixtures of N2, CH4, and H2, modeling the atmosphere of Titan, were exposed to discrete sparks, laser-induced plasmas (LIP) and ultraviolet light. The yields of HCN and simple hydrocarbons were measured and compared to those calculated from a simple quenched thermodynamic equilibrium model. The agreement between experiment and theory was fair for HCN and C2H2. However, the yields of C2H6 and other hydrocarbons were much higher than those predicted by the model. Our experiments suggest that photolysis by ultraviolet light from the plasma is an important process in the synthesis. This was confirmed by the photolysis of gas samples exposed to the light, but not to the plasma or shock waves. The results of these experiments demonstrate that, in addition to the well-known efficient synthesis of organic compounds in plasmas, the yields of saturated species, e.g., ethane, may be higher than predicted by theory and that LIP provide a convenient and clean way of simulating planetary lightning and impact plasmas in the laboratory.

  3. Comparison between real and modeled maregraphic data obtained using a simple dislocation model of the 27.02.2010 Chilean seismic source

    NASA Astrophysics Data System (ADS)

    Roger, J.; Simao, N.; Ruegg, J.-C.; Briole, P.; Allgeyer, S.

    2010-05-01

    On the 27th February 2010, a magnitude Mw = 8.8 earthquake shook a wide part of Chile. It was the result of a release of energy due to rupture on the subduction fault plane of the Pacific oceanic plate beneath the South American plate. It generated a widespread tsunami that struck coasts throughout the Pacific Ocean. In addition to the numerous casualties and the destruction caused by the earthquake itself, the tsunami reached several meters in height at some near-field locations, inundating important urban areas (for example in Talcahuano). In some far-field places, such as the Marquesas Islands (France), it also reached several meters. This tsunami was recorded by numerous coastal tide gauges and DART buoys and, in particular, sea level records are available in the rupture area (Valparaiso, Talcahuano, Arica, Ancud, Corral, Coquimbo). The aim of this study is to use a simple dislocation model, determined from a moment tensor solution, aftershock locations, and GPS measurements, to calculate the initial offshore bottom deformation. This deformation is introduced into a tsunami propagation code to produce synthetic mareograms at specific points, which are compared to the real recorded maregraphic data.

  4. A simple model of solvent-induced symmetry-breaking charge transfer in excited quadrupolar molecules

    NASA Astrophysics Data System (ADS)

    Ivanov, Anatoly I.; Dereka, Bogdan; Vauthey, Eric

    2017-04-01

    A simple model has been developed to describe the symmetry-breaking of the electronic distribution of AL-D-AR type molecules in the excited state, where D is an electron donor and AL and AR are identical acceptors. The origin of this process is usually associated with the interaction between the molecule and the solvent polarization that stabilizes an asymmetric and dipolar state, with a larger charge transfer on one side than on the other. An additional symmetry-breaking mechanism involving the direct Coulomb interaction of the charges on the acceptors is proposed. At the same time, the electronic coupling between the two degenerate states, which correspond to the transferred charge being localised either on AL or AR, favours a quadrupolar excited state with equal amount of charge-transfer on both sides. Because of these counteracting effects, symmetry breaking is only feasible when the electronic coupling remains below a threshold value, which depends on the solvation energy and the Coulomb repulsion energy between the charges located on AL and AR. This model allows reproducing the solvent polarity dependence of the symmetry-breaking reported recently using time-resolved infrared spectroscopy.

  5. Chemical and Temperature Effects on Diffusion in a Model Polymer/Nanoparticle Composite

    NASA Astrophysics Data System (ADS)

    Janes, Dustin; Durning, Christopher

    Polymers and inks used in medical devices may be strengthened with nanoparticle fillers, so an understanding of how these fillers affect the release of residuals and additives via diffusion will help modernize biocompatibility testing. Transport of small molecules in polymers with increasing volume fractions of impermeable nanoparticles is often poorly predicted by the simple Maxwell model for heterogeneous media. In this presentation we examine two diffusant classes, only one of which possesses hydrogen bonding interactions with the nanoparticle surface. Since similar reductions in mutual diffusion coefficients were observed in both cases, we attribute the enhancement of the "blocking effect" in nanocomposites to a reduction in polymer mobility in the interfacial volume near the nanoparticle. The temperature and penetrant concentration dependence of the diffusion coefficients were examined in the context of a Vrentas-Duda free volume model that includes a thermally activated prefactor. While data obtained for rubbery poly(methyl acrylate) clearly obey the expected Arrhenius scaling with EA = 11 kJ/mol, results for films containing d = 14 nm spherical silica nanoparticles do not, providing more evidence that polymer free volume is perturbed in unexpected ways even for conceptually simple systems. National Science Foundation IGERT Program, Pall Corporation.
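
    The Maxwell prediction referred to above, for diffusion through a medium containing a volume fraction of impermeable spherical inclusions, reduces to a simple ratio:

```python
def maxwell_ratio(phi):
    # Maxwell model for impermeable spheres at volume fraction phi:
    #   D_eff / D0 = 2 * (1 - phi) / (2 + phi)
    # where D0 is the diffusivity of the unfilled polymer
    return 2 * (1 - phi) / (2 + phi)
```

    Measured nanocomposite diffusivities often fall below this curve; that shortfall is the enhanced "blocking effect" attributed above to reduced polymer mobility near the particle surface.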

  6. Bet-hedging as a complex interaction among developmental instability, environmental heterogeneity, dispersal, and life-history strategy.

    PubMed

    Scheiner, Samuel M

    2014-02-01

    One potential evolutionary response to environmental heterogeneity is the production of randomly variable offspring through developmental instability, a type of bet-hedging. I used an individual-based, genetically explicit model to examine the evolution of developmental instability. The model considered both temporal and spatial heterogeneity alone and in combination, the effect of migration pattern (stepping stone vs. island), and life-history strategy. I confirmed that temporal heterogeneity alone requires a threshold amount of variation to select for a substantial amount of developmental instability. For spatial heterogeneity only, the response to selection on developmental instability depended on the life-history strategy and the form and pattern of dispersal with the greatest response for island migration when selection occurred before dispersal. Both spatial and temporal variation alone select for similar amounts of instability, but in combination resulted in substantially more instability than either alone. Local adaptation traded off against bet-hedging, but not in a simple linear fashion. I found higher-order interactions between life-history patterns, dispersal rates, dispersal patterns, and environmental heterogeneity that are not explainable by simple intuition. We need additional modeling efforts to understand these interactions and empirical tests that explicitly account for all of these factors.

  7. BRICK v0.2, a simple, accessible, and transparent model framework for climate and regional sea-level projections

    NASA Astrophysics Data System (ADS)

    Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus

    2017-07-01

    Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.

  8. A review of some fish nutrition methodologies.

    PubMed

    Belal, Ibrahim E H

    2005-03-01

    Several classical methods for dietary nutrient evaluation in warm-blooded animals (poultry, sheep, cows, etc.), such as digestibility, metabolizability, and energy budgets, are applied to fish, even though fish are cold-blooded animals living in a very different environment. These applications have introduced significant errors that make the methods non-additive and uninformative, as explained in the text. In other words, dietary digestion and absorption cannot be measured adequately because of the aquatic environment fish live in. Therefore, net nutrient deposition and/or growth is the only accurate measurement left to evaluate dietary nutrient intake in fish. To understand and predict the dietary nutrient intake-growth response relationship, several mathematical models are generally used: (1) the simple linear equation, (2) the logarithmic equation, and (3) the quadratic equation. These models, however, do not describe the full range of growth and have no biological meaning, as explained in the text. The saturation kinetic model, on the other hand, has a biological basis (the law of mass action and enzyme kinetics) and describes the full range of the growth curve. Additionally, it has four parameters that summarize the growth curve and can be used to compare the effects of diets or nutrients on fish growth and/or net nutrient deposition. The saturation kinetic model is therefore proposed as adequate for dietary nutrient evaluation in fish. The theoretical derivation of this model is illustrated in the text.
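
    The four-parameter saturation kinetic model referred to above is commonly written in a Mercer-style form; the exact parameterization in the review may differ, so the sketch below is illustrative:

```python
def saturation_kinetics(intake, b, r_max, k_half, n):
    # y = (b*K^n + r_max*I^n) / (K^n + I^n)
    # b: response at zero intake, r_max: asymptotic maximum response,
    # k_half: intake giving the midpoint response, n: apparent kinetic order
    return (b * k_half ** n + r_max * intake ** n) / (k_half ** n + intake ** n)
```

    The curve runs from b at zero intake to r_max at saturation, passing through the midpoint (b + r_max)/2 at intake k_half, which is what lets the four fitted parameters summarize a full intake-growth response.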

  9. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  10. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  11. Understanding criminals' thinking: further examination of the Measure of Offender Thinking Styles-Revised.

    PubMed

    Mandracchia, Jon T; Morgan, Robert D

    2011-12-01

    The Measure of Offender Thinking Styles (MOTS) was originally developed to examine the structure of dysfunctional thinking exhibited by criminal offenders. In the initial investigation, a three-factor model of criminal thinking was obtained using the MOTS. These factors included dysfunctional thinking characterized as Control, Cognitive Immaturity, and Egocentrism. In the present investigation, the stability of the three-factor model was examined with a confirmatory factor analysis of the revised version of the MOTS (i.e., MOTS-R). In addition, the internal consistency, test-retest reliability, and convergent validity of the MOTS-R were examined. Results indicated that the three-factor model of criminal thinking was supported. In addition, the MOTS-R demonstrated reliability and convergent validity with other measures of criminal thinking and attitudes. Overall, it appears that the MOTS-R may prove to be a valuable tool for use with an offender population, particularly because of the simple, intuitive structure of dysfunctional thinking that it represents.

  12. On the Determination of Uncertainty and Limit of Detection in Label-Free Biosensors.

    PubMed

    Lavín, Álvaro; Vicente, Jesús de; Holgado, Miguel; Laguna, María F; Casquel, Rafael; Santamaría, Beatriz; Maigler, María Victoria; Hernández, Ana L; Ramírez, Yolanda

    2018-06-26

    A significant number of noteworthy articles reviewing different label-free biosensors have been published in recent years. Most of the time, comparison among the different biosensors is limited by the procedure used to calculate the limit of detection and the measurement uncertainty. This article clarifies and establishes a simple procedure to determine the calibration function and the uncertainty of the concentration measured at any point of the measuring interval of a generic label-free biosensor. The value of the limit of detection arises naturally from this model as the limit to which the uncertainty tends as the concentration tends to zero. The need to provide additional information on the analytical system and biosensor, such as the measurement interval and its linearity, among others, in addition to the detection limit is pointed out. Finally, the model is applied to curves that are typically obtained in immunoassays, and a discussion is made on the application validity of the model and its limitations.
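
The calibration-plus-detection-limit workflow described above can be sketched in a few lines. This is a minimal illustration, not the article's actual procedure: it assumes a linear calibration and uses the common 3-sigma-of-the-blank convention for the limit of detection; all data and function names are hypothetical.

```python
import statistics

def fit_line(x, y):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def limit_of_detection(blank_signals, slope):
    """LOD as the concentration whose signal exceeds the blank
    by 3 standard deviations (a common convention, assumed here)."""
    s_blank = statistics.stdev(blank_signals)
    return 3 * s_blank / slope

# Hypothetical calibration data: signal vs. concentration (ng/mL)
conc = [0.0, 1.0, 2.0, 4.0, 8.0]
sig = [0.02, 1.05, 1.98, 4.03, 7.99]
slope, intercept = fit_line(conc, sig)
lod = limit_of_detection([0.00, 0.02, 0.01, 0.03, 0.02], slope)
```

The LOD emerges from the blank noise and the calibration slope, echoing the abstract's point that it is the limiting value of the concentration uncertainty as concentration goes to zero.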

  13. Detection of greenhouse-gas-induced climatic change. Progress report, 1 December 1991--30 June 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wigley, T.M.L.; Jones, P.D.

    1994-07-01

    In addition to changes due to variations in greenhouse gas concentrations, the global climate system exhibits a high degree of internally-generated and externally-forced natural variability. To detect the enhanced greenhouse effect, its signal must be isolated from the "noise" of this natural climatic variability. A high quality, spatially extensive data base is required to define the noise and its spatial characteristics. To facilitate this, available land and marine data bases will be updated and expanded. The data will be analyzed to determine the potential effects on climate of greenhouse gas concentration changes and other factors. Analyses will be guided by a variety of models, from simple energy balance climate models to ocean General Circulation Models. Appendices A--G contain the following seven papers: (A) Recent global warmth moderated by the effects of the Mount Pinatubo eruption; (B) Recent warming in global temperature series; (C) Correlation methods in fingerprint detection studies; (D) Balancing the carbon budget. Implications for projections of future carbon dioxide concentration changes; (E) A simple model for estimating methane concentration and lifetime variations; (F) Implications for climate and sea level of revised IPCC emissions scenarios; and (G) Sulfate aerosol and climatic change.

  14. A simple method for assessment of muscle force, velocity, and power producing capacities from functional movement tasks.

    PubMed

    Zivkovic, Milena Z; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan

    2017-07-01

    A range of force (F) and velocity (V) data obtained from functional movement tasks (e.g., running, jumping, throwing, lifting, cycling) performed under a variety of external loads have typically revealed strong and approximately linear F-V relationships. The regression model parameters reveal the maximum F (F-intercept), V (V-intercept), and power (P) producing capacities of the tested muscles. The aim of the present study was to evaluate the level of agreement between the routinely used "multiple-load model" and a simple "two-load model" based on direct assessment of the F-V relationship from only 2 external loads applied. Twelve participants were tested on maximum-performance vertical jumps, cycling, bench press throws, and bench pulls performed against a variety of different loads. All 4 tested tasks revealed both exceptionally strong relationships between the parameters of the 2 models (median R = 0.98) and a lack of meaningful differences between their magnitudes (fixed bias below 3.4%). Therefore, the addition of another load to the standard tests of various functional tasks typically conducted under a single set of mechanical conditions could allow for the assessment of muscle mechanical properties such as the muscle F, V, and P producing capacities.
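
The arithmetic behind a two-load model is a straight line through two (F, V) points; for a linear F-V relationship the peak power is F0·V0/4. The sketch below uses hypothetical numbers, not the study's data:

```python
def two_load_fv(f1, v1, f2, v2):
    """Fit the linear force-velocity relationship F = F0 - (F0/V0)*V
    from two load conditions; return (F0, V0, Pmax)."""
    slope = (f2 - f1) / (v2 - v1)   # negative for a typical F-V line
    f0 = f1 - slope * v1            # F-intercept: maximum force capacity
    v0 = -f0 / slope                # V-intercept: maximum velocity capacity
    p_max = f0 * v0 / 4.0           # peak power of a linear F-V line
    return f0, v0, p_max

# Hypothetical jump data under a light and a heavy load (N, m/s)
f0, v0, p_max = two_load_fv(f1=1200.0, v1=3.0, f2=1800.0, v2=1.5)
```

With these numbers the line extrapolates to F0 = 2400 N, V0 = 6 m/s, and Pmax = 3600 W, illustrating how only two loading conditions pin down all three capacities.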

  15. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information using time-series analysis of business data accumulated in the hospital information system, which has not been utilized thus far. We examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model, applied to business data obtained at the Radiology Department. We built the model using the number of radiological examinations over the past 9 years, predicted the number of radiological examinations in the final year, and then compared the actual values with the forecast values. We were able to establish that the prediction method was simple and cost-effective, using free software. In addition, we were able to build a simple model by removing trend components from the data in pre-processing. The difference between predicted and actual values was 10%; however, it was more important to understand the chronological change than the individual time-series values. Furthermore, our method was highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.
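
The core idea of differencing away the trend and forecasting the remainder can be sketched with a minimal ARIMA(1,1,0)-style model in pure Python. This is an illustration of the technique, not the authors' implementation; the function name and data are hypothetical:

```python
def arima_110_forecast(series):
    """One-step-ahead forecast from a minimal ARIMA(1,1,0) model:
    difference once to remove the trend, fit an AR(1) coefficient
    by least squares, then integrate the predicted difference back."""
    d = [b - a for a, b in zip(series, series[1:])]  # first difference
    num = sum(x * y for x, y in zip(d, d[1:]))       # sum of d[t-1]*d[t]
    den = sum(x * x for x in d[:-1])
    phi = num / den if den else 0.0                  # AR(1) coefficient
    next_diff = phi * d[-1]
    return series[-1] + next_diff

# Hypothetical monthly examination counts with a steady upward trend
counts = [100, 104, 108, 112, 116, 120]
forecast = arima_110_forecast(counts)
```

On a perfectly linear trend the fitted AR coefficient is 1 and the forecast simply extends the trend, which is exactly the behavior the differencing step is meant to capture.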

  16. Analyzing C2 Structures and Self-Synchronization with Simple Computational Models

    DTIC Science & Technology

    2011-06-01

    16th ICCRTS, "Collective C2 in Multinational Civil-Military Operations": Analyzing C2 Structures and Self-Synchronization with Simple Computational Models. The Kuramoto Model, though with some serious limitations, provides a representation of information flow and self-synchronization in an
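
The Kuramoto model mentioned above can be sketched with a simple Euler integration; coupled oscillators synchronize once the coupling K exceeds a critical value. All parameters here are hypothetical and the integrator is deliberately minimal, not the report's implementation:

```python
import math

def kuramoto_order(n, coupling, steps=2000, dt=0.01):
    """Euler-integrate the Kuramoto model
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    and return the final order parameter r in [0, 1]."""
    # Deterministic spread of natural frequencies and splayed initial phases
    omega = [-0.5 + i / (n - 1) for i in range(n)]
    theta = [2 * math.pi * i / n for i in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            interaction = sum(math.sin(theta[j] - theta[i]) for j in range(n))
            new.append(theta[i] + dt * (omega[i] + coupling / n * interaction))
        theta = new
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

r_weak = kuramoto_order(10, coupling=0.1)    # incoherent: r stays small
r_strong = kuramoto_order(10, coupling=5.0)  # phase-locked: r near 1
```

The order parameter r plays the role of a synchronization measure: weak coupling leaves the oscillators drifting, while strong coupling locks them, which is the self-synchronization behavior the paper uses as an analogy for C2 information flow.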

  17. Two Methods for Teaching Simple Visual Discriminations to Learners with Severe Disabilities

    ERIC Educational Resources Information Center

    Graff, Richard B.; Green, Gina

    2004-01-01

    Simple discriminations are involved in many functional skills; additionally, they are components of conditional discriminations (identity and arbitrary matching-to-sample), which are involved in a wide array of other important performances. Many individuals with severe disabilities have difficulty acquiring simple discriminations with standard…

  18. INCORPORATING ENVIRONMENTAL OUTCOMES INTO A HEALTH ECONOMIC MODEL.

    PubMed

    Marsh, Kevin; Ganz, Michael; Nørtoft, Emil; Lund, Niels; Graff-Zivin, Joshua

    2016-01-01

    Traditional economic evaluations for most health technology assessments (HTAs) have previously not included environmental outcomes. With the growing interest in reducing the environmental impact of human activities, the need to consider how to include environmental outcomes into HTAs has increased. We present a simple method of doing so. We adapted an existing clinical-economic model to include environmental outcomes (carbon dioxide [CO2] emissions) to predict the consequences of adding insulin to an oral antidiabetic (OAD) regimen for patients with type 2 diabetes mellitus (T2DM) over 30 years, from the United Kingdom payer perspective. Epidemiological, efficacy, healthcare costs, utility, and carbon emissions data were derived from published literature. A scenario analysis was performed to explore the impact of parameter uncertainty. The addition of insulin to an OAD regimen increases costs by 2,668 British pounds per patient and is associated with 0.36 additional quality-adjusted life-years per patient. The insulin-OAD combination regimen generates more treatment and disease management-related CO2 emissions per patient (1,686 kg) than the OAD-only regimen (310 kg), but generates fewer emissions associated with treating complications (3,019 kg versus 3,337 kg). Overall, adding insulin to OAD therapy generates an extra 1,057 kg of CO2 emissions per patient over 30 years. The model offers a simple approach for incorporating environmental outcomes into health economic analyses, to support a decision-maker's objective of reducing the environmental impact of health care. Further work is required to improve the accuracy of the approach; in particular, the generation of resource-specific environmental impacts.
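
The abstract's carbon accounting can be reproduced with simple arithmetic (all figures are taken from the abstract; the result differs from the reported 1,057 kg by about 1 kg, consistent with rounding of the underlying figures):

```python
# Emissions per patient over 30 years, from the abstract (kg CO2)
treatment_insulin_oad = 1686
treatment_oad_only = 310
complications_insulin_oad = 3019
complications_oad_only = 3337

extra_treatment = treatment_insulin_oad - treatment_oad_only              # +1376 kg
extra_complications = complications_insulin_oad - complications_oad_only  # -318 kg
net_extra = extra_treatment + extra_complications  # 1058 kg; abstract reports ~1057 kg
```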

  19. An Easy Tool to Predict Survival in Patients Receiving Radiation Therapy for Painful Bone Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westhoff, Paulien G., E-mail: p.g.westhoff@umcutrecht.nl; Graeff, Alexander de; Monninkhof, Evelyn M.

    2014-11-15

    Purpose: Patients with bone metastases have a widely varying survival. A reliable estimation of survival is needed for appropriate treatment strategies. Our goal was to assess the value of simple prognostic factors, namely, patient and tumor characteristics, Karnofsky performance status (KPS), and patient-reported scores of pain and quality of life, to predict survival in patients with painful bone metastases. Methods and Materials: In the Dutch Bone Metastasis Study, 1157 patients were treated with radiation therapy for painful bone metastases. At randomization, physicians determined the KPS; patients rated general health on a visual analogue scale (VAS-gh), valuation of life on a verbal rating scale (VRS-vl) and pain intensity. To assess the predictive value of the variables, we used multivariate Cox proportional hazard analyses and C-statistics for discriminative value. Of the final model, calibration was assessed. External validation was performed on a dataset of 934 patients who were treated with radiation therapy for vertebral metastases. Results: Patients had mainly breast (39%), prostate (23%), or lung cancer (25%). After a maximum of 142 weeks' follow-up, 74% of patients had died. The best predictive model included sex, primary tumor, visceral metastases, KPS, VAS-gh, and VRS-vl (C-statistic = 0.72, 95% CI = 0.70-0.74). A reduced model, with only KPS and primary tumor, showed comparable discriminative capacity (C-statistic = 0.71, 95% CI = 0.69-0.72). External validation showed a C-statistic of 0.72 (95% CI = 0.70-0.73). Calibration of the derivation and the validation dataset showed underestimation of survival. Conclusion: In predicting survival in patients with painful bone metastases, KPS combined with primary tumor was comparable to a more complex model. Considering the amount of variables in complex models and the additional burden on patients, the simple model is preferred for daily use. In addition, a risk table for survival is provided.

  20. Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?

    NASA Astrophysics Data System (ADS)

    Halide, Halmar

    2017-01-01

    We apply a simple linear multiple regression model called IndOzy for predicting ENSO up to 7 seasonal lead times. The model uses five predictors of past seasonal Niño 3.4 ENSO indices derived from chaos theory and was rolling-validated to give a one-step-ahead forecast. The model skill was evaluated against data from the season of May-June-July (MJJ) 2003 to November-December-January (NDJ) 2015/2016. Three skill measures were used for forecast verification: Pearson correlation, RMSE, and Euclidean distance. The skill of this simple model was then compared to those of the combined statistical and dynamical models compiled at the IRI (International Research Institute) website. It was found that the simple model was capable of producing a useful ENSO prediction only up to 3 seasonal leads, while the IRI statistical and dynamical model skills were still useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skills, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature. Both meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. It is suggested that, to improve its long-range skill, the simple IndOzy model needs to incorporate a nonlinear model such as an artificial neural network technique.

  1. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged-prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP, and the simplification component ultimately exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model puts forward a parsimonious solution, which is of noteworthy importance for application in practice. In addition, the approach allows the user to enter human insight into the problem to examine evolved models and pick the best performing programs out for further analysis.
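
The pre-processing ingredient, a simple moving average filter, can be sketched as follows (hypothetical data; a trailing window is assumed, though the paper does not specify the exact filter form):

```python
def moving_average(series, window):
    """Trailing (backward-looking) moving average, a simple smoothing
    filter used to pre-process noisy hydrological time series."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Hypothetical noisy daily streamflow (m^3/s)
flow = [5.0, 9.0, 4.0, 8.0, 5.0, 9.0, 4.0, 8.0]
smooth = moving_average(flow, window=2)
```

Smoothing damps the day-to-day noise before the GP model is fitted, which is the mechanism the paper relies on to diminish the lagged-prediction effect.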

  2. Bayesian inference of uncertainties in precipitation-streamflow modeling in a snow affected catchment

    NASA Astrophysics Data System (ADS)

    Koskela, J. J.; Croke, B. W. F.; Koivusalo, H.; Jakeman, A. J.; Kokkonen, T.

    2012-11-01

    Bayesian inference is used to study the effect of precipitation and model structural uncertainty on estimates of model parameters and confidence limits of predictive variables in a conceptual rainfall-runoff model in the snow-fed Rudbäck catchment (142 ha) in southern Finland. The IHACRES model is coupled with a simple degree day model to account for snow accumulation and melt. The posterior probability distribution of the model parameters is sampled by using the Differential Evolution Adaptive Metropolis (DREAM(ZS)) algorithm and the generalized likelihood function. Precipitation uncertainty is taken into account by introducing additional latent variables that were used as multipliers for individual storm events. Results suggest that occasional snow water equivalent (SWE) observations together with daily streamflow observations do not contain enough information to simultaneously identify model parameters, precipitation uncertainty and model structural uncertainty in the Rudbäck catchment. The addition of an autoregressive component to account for model structure error and latent variables having uniform priors to account for input uncertainty lead to dubious posterior distributions of model parameters. Thus our hypothesis that informative priors for latent variables could be replaced by additional SWE data could not be confirmed. The model was found to work adequately in 1-day-ahead simulation mode, but the results were poor in the simulation batch mode. This was caused by the interaction of parameters that were used to describe different sources of uncertainty. The findings may have lessons for other cases where parameterizations are similarly high in relation to available prior information.
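
The simple degree-day snow component coupled to the rainfall-runoff model can be sketched like this. It is a generic degree-day formulation with hypothetical parameter values, not the study's calibrated model:

```python
def degree_day_snow(temps, precip, ddf=3.0, t_melt=0.0):
    """Track snow water equivalent (SWE, mm): precipitation accumulates
    as snow at or below t_melt; above it, melt = ddf * (T - t_melt),
    capped by the available SWE. Returns final SWE and daily melt."""
    swe, melt_series = 0.0, []
    for t, p in zip(temps, precip):
        if t <= t_melt:
            swe += p                                # snowfall accumulates
            melt = 0.0
        else:
            melt = min(swe, ddf * (t - t_melt))     # degree-day melt
            swe -= melt
        melt_series.append(melt)
    return swe, melt_series

# Hypothetical daily temperature (deg C) and precipitation (mm)
swe, melt = degree_day_snow([-5, -2, -1, 2, 4, 6], [10, 5, 8, 0, 0, 0])
```

The melt series would feed the IHACRES-style rainfall-runoff component as liquid water input; the degree-day factor ddf is one of the parameters whose posterior the Bayesian scheme would sample.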

  3. A competitive binding model predicts the response of mammalian olfactory receptors to mixtures

    NASA Astrophysics Data System (ADS)

    Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay

    Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system we need methods to predict responses to complex mixtures from single odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions where only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
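
A standard competitive-binding mixture response takes the form r = Σᵢ eᵢ(cᵢ/Kᵢ) / (1 + Σᵢ cᵢ/Kᵢ), where each odorant competes for the single binding site. The sketch below is a minimal version of that idea with hypothetical parameters, not the paper's fitted values:

```python
def cb_response(concentrations, ec50s, efficacies):
    """Competitive-binding prediction of a receptor's mixture response:
    only one odorant occupies the receptor at a time, so all odorants
    share one denominator and compete for occupancy."""
    occupancy_terms = [c / k for c, k in zip(concentrations, ec50s)]
    denom = 1.0 + sum(occupancy_terms)
    return sum(e * t for e, t in zip(efficacies, occupancy_terms)) / denom

# Hypothetical single-odorant parameters (EC50, efficacy) for one receptor
single_a = cb_response([1.0], [0.5], [1.0])
single_b = cb_response([1.0], [2.0], [0.8])
mixture = cb_response([1.0, 1.0], [0.5, 2.0], [1.0, 0.8])
```

Because the two odorants share the denominator, the mixture response is sub-additive (less than the sum of the single-odorant responses), which is the nonlinearity the competition mechanism produces.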

  4. Continuous In Vitro Evolution of a Ribozyme that Catalyzes Three Successive Nucleotidyl Addition Reactions

    NASA Technical Reports Server (NTRS)

    McGinness, Kathleen E.; Wright, Martin C.; Joyce, Gerald F.

    2002-01-01

    Variants of the class I ligase ribozyme, which catalyzes joining of the 3' end of a template-bound oligonucleotide to its own 5' end, have been made to evolve in a continuous manner by a simple serial transfer procedure that can be carried out indefinitely. This process was expanded to allow the evolution of ribozymes that catalyze three successive nucleotidyl addition reactions, two template-directed mononucleotide additions followed by RNA ligation. During the development of this behavior, a population of ribozymes was maintained against an overall dilution of more than 10^406. The resulting ribozymes were capable of catalyzing the three-step reaction pathway, with nucleotide addition occurring in either a 5'-to-3' or a 3'-to-5' direction. This purely chemical system provides a functional model of a multi-step reaction pathway that is undergoing Darwinian evolution.

  5. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models.

    PubMed

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-05-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues on usage of simple heuristics and psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  6. Pore and grain boundary migration under a temperature gradient: A phase-field model study

    DOE PAGES

    Biner, S. B.

    2016-03-16

    In this study, the collective migration behavior of pores and grain boundaries under a temperature gradient is studied for simple single-crystal, bi-crystal and polycrystal configurations with a phase-field model formalism. For simulated microstructures of solids composed of pores and grain boundaries, the results indicate that not only the volume fraction of pores but also its spatial partitioning between the grain boundary junctions and the grain boundary segments appears to be important. In addition to various physical properties, the evolution kinetics under given temperature gradients will be strongly influenced by the initial morphology of the polycrystalline microstructure.

  7. Current collection from the space plasma through defects in solar array insulation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Stillwell, R. P.; Kaufman, H. R.

    1985-01-01

    Operating high-voltage solar arrays in the space environment can result in anomalously large currents being collected through small insulation defects. Tests simulating the electron collection have shown that there are two major collection modes. The first involves current enhancement by means of a surface phenomenon involving secondary electron emission from the surrounding insulator. In the second mode, the current collection is enhanced by vaporization and ionization of the insulator material, in addition to the surface enhancement of the first mode. The electron collection due to surface enhancement (first mode) has been modeled. Using this model, simple calculations yield realistic predictions.

  8. fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization

    NASA Astrophysics Data System (ADS)

    Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda

    2010-03-01

    Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.

  9. Structural and magnetic correlation of Finemet alloys with Ge addition

    NASA Astrophysics Data System (ADS)

    Muraca, D.; Cremaschi, V.; Moya, J.; Sirkin, H.

    The correlation between saturation magnetization and the magnetic moment per Fe atom in the nanocrystalline state is studied for Finemet-type alloys. These studies were performed on nanocrystalline ribbons whose compositions were Fe73.5Si13.5-xGexNb3B9Cu1 (x=8, 10 and 13.5 at%). We used a simple linear model, X-ray diffraction and Mössbauer spectroscopy data to calculate the magnetic contribution of the nanocrystals, and the results were contrasted with the measured saturation magnetization of the different alloys. The technique presented here provides a very simple and powerful tool to compute the magnetic contribution of the nanocrystalline phase to the alloy. This calculation could be used to determine the volume fractions of the nanocrystalline and amorphous phases in the nanocrystallized alloy, without using a very sophisticated microscopy method.
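
A simple linear (rule-of-mixtures) model of the kind described lets the crystalline volume fraction be solved from the measured saturation magnetizations of the alloy and of the two phases. The sketch below assumes that two-phase linear form; all numbers are hypothetical, not the paper's measurements:

```python
def crystalline_fraction(ms_alloy, ms_cryst, ms_amorph):
    """Solve Ms_alloy = x * Ms_cryst + (1 - x) * Ms_amorph
    for the nanocrystalline volume fraction x."""
    return (ms_alloy - ms_amorph) / (ms_cryst - ms_amorph)

# Hypothetical saturation magnetizations (emu/g)
x = crystalline_fraction(ms_alloy=140.0, ms_cryst=170.0, ms_amorph=120.0)
```

Inverting the mixing rule this way is what allows the phase fractions to be estimated without sophisticated microscopy.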

  10. Structure of velocity distributions in shock waves in granular gases with extension to molecular gases.

    PubMed

    Vilquin, A; Boudet, J F; Kellay, H

    2016-08-01

    Velocity distributions in normal shock waves obtained in dilute granular flows are studied. These distributions cannot be described by a simple functional shape and are believed to be bimodal. Our results show that these distributions are not strictly bimodal but a trimodal distribution is shown to be sufficient. The usual Mott-Smith bimodal description of these distributions, developed for molecular gases, and based on the coexistence of two subpopulations (a supersonic and a subsonic population) in the shock front, can be modified by adding a third subpopulation. Our experiments show that this additional population results from collisions between the supersonic and subsonic subpopulations. We propose a simple approach incorporating the role of this third intermediate population to model the measured probability distributions and apply it to granular shocks as well as shocks in molecular gases.
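
A trimodal velocity distribution of the kind described can be written as a weighted sum of three component populations. The sketch below uses Gaussian components and entirely hypothetical parameters (the paper's actual subpopulation shapes follow the Mott-Smith construction, not necessarily Gaussians):

```python
import math

def gaussian(v, mu, sigma):
    """Normalized Gaussian density."""
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def trimodal_pdf(v, weights, mus, sigmas):
    """Weighted sum of supersonic, intermediate, and subsonic populations."""
    return sum(w * gaussian(v, m, s) for w, m, s in zip(weights, mus, sigmas))

# Hypothetical parameters: supersonic, intermediate, subsonic modes (m/s)
weights = [0.5, 0.2, 0.3]
mus = [300.0, 180.0, 60.0]
sigmas = [30.0, 40.0, 20.0]

# Normalization check by trapezoidal integration over a wide velocity range
vs = [float(i) for i in range(-200, 601)]
pdf = [trimodal_pdf(v, weights, mus, sigmas) for v in vs]
area = sum((pdf[i] + pdf[i + 1]) / 2 for i in range(len(pdf) - 1))
```

The intermediate component is the paper's key addition: it sits between the supersonic and subsonic modes and carries the probability generated by collisions between the two original subpopulations.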

  11. Teaching room acoustics as a product sound quality issue

    NASA Astrophysics Data System (ADS)

    Kleiner, Mendel; Vastfjall, Daniel

    2003-04-01

    The department of Applied Acoustics teaches engineering and architect students at Chalmers University of Technology. The teaching of room acoustics to architectural students has been under constant development under several years and is now based on the study of room acoustics as a product sound quality issue. Various listening sessions using binaural sound recording and reproduction is used to focus students' learning on simple, easy to remember concepts. Computer modeling using ray tracing software and auralization is also used extensively as a tool to demonstrate concepts in addition to other software for simple sound generation and manipulation. Sound in general is the focus of an interdisciplinary course for students from Chalmers as well as from a school of art, a school of design, and a school of music which offers particular challenges and which is almost all listening based.

  12. Scaling of drizzle virga depth with cloud thickness for marine stratocumulus clouds

    DOE PAGES

    Yang, Fan; Luke, Edward P.; Kollias, Pavlos; ...

    2018-04-20

    Drizzle plays a crucial role in cloud lifetime and radiation properties of marine stratocumulus clouds. Understanding where drizzle exists in the sub-cloud layer, which depends on drizzle virga depth, can help us better understand where below-cloud scavenging and evaporative cooling and moisturizing occur. In this study, we examine the statistical properties of drizzle frequency and virga depth of marine stratocumulus based on unique ground-based remote sensing data. Results show that marine stratocumulus clouds are drizzling nearly all the time. In addition, we derive a simple scaling analysis between drizzle virga thickness and cloud thickness. Our analytical expression agrees with the observational data reasonably well, which suggests that our formula provides a simple parameterization for drizzle virga of stratocumulus clouds suitable for use in other models.

  13. λ-Repressor Oligomerization Kinetics at High Concentrations Using Fluorescence Correlation Spectroscopy in Zero-Mode Waveguides

    PubMed Central

    Samiee, K. T.; Foquet, M.; Guo, L.; Cox, E. C.; Craighead, H. G.

    2005-01-01

    Fluorescence correlation spectroscopy (FCS) has demonstrated its utility for measuring transport properties and kinetics at low fluorophore concentrations. In this article, we demonstrate that simple optical nanostructures, known as zero-mode waveguides, can be used to significantly reduce the FCS observation volume. This, in turn, allows FCS to be applied to solutions with significantly higher fluorophore concentrations. We derive an empirical FCS model accounting for one-dimensional diffusion in a finite tube with a simple exponential observation profile. This technique is used to measure the oligomerization of the bacteriophage λ repressor protein at micromolar concentrations. The results agree with previous studies utilizing conventional techniques. Additionally, we demonstrate that the zero-mode waveguides can be used to assay biological activity by measuring changes in diffusion constant as a result of ligand binding. PMID:15613638

  14. Scaling of drizzle virga depth with cloud thickness for marine stratocumulus clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Fan; Luke, Edward P.; Kollias, Pavlos

    Drizzle plays a crucial role in cloud lifetime and radiation properties of marine stratocumulus clouds. Understanding where drizzle exists in the sub-cloud layer, which depends on drizzle virga depth, can help us better understand where below-cloud scavenging and evaporative cooling and moisturizing occur. In this study, we examine the statistical properties of drizzle frequency and virga depth of marine stratocumulus based on unique ground-based remote sensing data. Results show that marine stratocumulus clouds are drizzling nearly all the time. In addition, we derive a simple scaling analysis between drizzle virga thickness and cloud thickness. Our analytical expression agrees with the observational data reasonably well, which suggests that our formula provides a simple parameterization for drizzle virga of stratocumulus clouds suitable for use in other models.

  15. A model for the flux-r.m.s. correlation in blazar variability or the minijets-in-a-jet statistical model

    NASA Astrophysics Data System (ADS)

    Biteau, J.; Giebels, B.

    2012-12-01

    Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability from multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
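
The heavy-tailed flux distribution from Doppler boosting of randomly oriented zones is easy to demonstrate numerically. The sketch below boosts a unit comoving flux by delta^4 (a common convention for a moving blob; the exponent choice is an assumption) for isotropically oriented zones with hypothetical bulk Lorentz factor:

```python
import math
import random

def boosted_fluxes(n_zones, gamma=10.0, seed=1):
    """Sample randomly oriented emitting zones and Doppler-boost a unit
    comoving flux: delta = 1 / (gamma * (1 - beta*cos(theta))),
    observed flux proportional to delta**4."""
    rng = random.Random(seed)
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    fluxes = []
    for _ in range(n_zones):
        cos_theta = rng.uniform(-1.0, 1.0)  # isotropic orientations
        delta = 1.0 / (gamma * (1.0 - beta * cos_theta))
        fluxes.append(delta ** 4)
    return fluxes

f = sorted(boosted_fluxes(10000))
mean_f = sum(f) / len(f)
median_f = f[len(f) // 2]
```

The mean flux vastly exceeds the median: a few zones beamed toward the observer dominate the total, which is the power-law (Pareto) tail the paper derives for a single zone.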

  16. Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.

    PubMed

    Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H

    2008-05-01

    The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M(1/4) scaling of the characteristic times of ontogenetic stages in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
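    The OGM growth equation dm/dt = a·m^(3/4)·(1 − (m/M)^(1/4)) can be sketched numerically; the parameter values below (a, M, m0) are illustrative, not the fitted values from the paper:

```python
import numpy as np

# West et al. ontogenetic growth model:
#   dm/dt = a * m**(3/4) * (1 - (m/M)**(1/4))
# a: growth coefficient, M: asymptotic adult mass. Substituting
# r = (m/M)**(1/4) gives dr/dt = (a / (4 * M**(1/4))) * (1 - r),
# so r relaxes exponentially to 1 and m(t) is sigmoidal.
a = 0.2      # g^(1/4) per day (assumed)
M = 1000.0   # asymptotic mass in grams (assumed)
m0 = 1.0     # mass at birth in grams (assumed)

def ogm_mass(t):
    """Closed-form solution via the r = (m/M)^(1/4) substitution."""
    r0 = (m0 / M) ** 0.25
    r = 1.0 - (1.0 - r0) * np.exp(-a * t / (4.0 * M ** 0.25))
    return M * r ** 4

# Cross-check the closed form against a simple Euler integration.
dt = 0.01
t_end = 400.0
m = m0
for _ in range(int(t_end / dt)):
    m += dt * a * m ** 0.75 * (1.0 - (m / M) ** 0.25)

print(ogm_mass(t_end), m)
```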

  17. Parametric Instability of Static Shafts-Disk System Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Wahab, A. M.; Rasid, Z. A.; Abu, A.

    2017-10-01

    Parametric instability is an important consideration in the design process as it can cause failure in machine elements. In this study, parametric instability behaviour was studied for a simple shaft-disk system subjected to axial load under pinned-pinned boundary conditions. The shaft was modelled based on Nelson’s beam model, which considers translational and rotary inertia, transverse shear deformation and torsional effects. Floquet’s method was used to estimate the solution of the Mathieu equation. Finite element codes were developed in MATLAB to establish the instability chart. The effect of an additional disk mass on the stability chart was investigated for pinned-pinned boundary conditions. Numerical results and illustrative examples are given. It is found that the additional disk mass decreases the instability region under static conditions. The location of the disk also has a significant effect on the instability region of the shaft.
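    The Floquet criterion used here can be illustrated on the scalar Mathieu equation (a minimal sketch, not the paper's shaft-disk finite element code; the delta and eps values are assumed):

```python
import numpy as np

# For x'' + (delta + eps*cos t) x = 0 with period T = 2*pi, integrating two
# independent initial conditions over one period gives the monodromy matrix;
# the trivial solution is stable when |trace| <= 2 (Floquet theory).
def monodromy_trace(delta, eps, n_steps=4000):
    T = 2.0 * np.pi
    dt = T / n_steps

    def rhs(t, y):
        x, v = y
        return np.array([v, -(delta + eps * np.cos(t)) * x])

    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):   # two independent initial conditions
        y = np.array(y0)
        t = 0.0
        for _ in range(n_steps):          # classic fixed-step RK4 integration
            k1 = rhs(t, y)
            k2 = rhs(t + 0.5 * dt, y + 0.5 * dt * k1)
            k3 = rhs(t + 0.5 * dt, y + 0.5 * dt * k2)
            k4 = rhs(t + dt, y + dt * k3)
            y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
            t += dt
        cols.append(y)
    return np.trace(np.column_stack(cols))

tr_resonant = monodromy_trace(0.25, 0.2)  # inside the principal resonance tongue
tr_stable = monodromy_trace(0.60, 0.2)    # away from resonance
print(tr_resonant, tr_stable)
```

    Sweeping delta and eps over a grid and shading points with |trace| > 2 reproduces the familiar instability-tongue chart that the paper constructs for the shaft-disk system.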

  18. Kinetic analysis of competition between aerosol particle removal and generation by ionization air purifiers.

    PubMed

    Alshawa, Ahmad; Russell, Ashley R; Nizkorodov, Sergey A

    2007-04-01

    Ionization air purifiers are increasingly used to remove aerosol particles from indoor air. However, certain ionization air purifiers also emit ozone. Reactions between the emitted ozone and unsaturated volatile organic compounds (VOC) commonly found in indoor air produce additional respirable aerosol particles in the ultrafine (<0.1 microm) and fine (<2.5 microm) size domains. A simple kinetic model is used to analyze the competition between the removal and generation of particulate matter by ionization air purifiers under conditions of a typical residential building. This model predicts that certain widely used ionization air purifiers may actually increase the mass concentration of fine and ultrafine particulates in the presence of common unsaturated VOC, such as limonene contained in many household cleaning products. This prediction is supported by an explicit observation of ultrafine particle nucleation events caused by the addition of D-limonene to a ventilated office room equipped with a common ionization air purifier.
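    The competition the abstract describes can be sketched with a well-mixed box model (an assumed form, not the paper's exact equations; all rate constants below are hypothetical):

```python
# Indoor particle mass concentration C in a well-mixed room:
#   dC/dt = S + G - (lam_v + lam_p) * C
# S: background particle source, G: particle generation from ozone + VOC
# reactions driven by the purifier, lam_v: ventilation loss rate,
# lam_p: purifier removal rate.
def steady_state(S, G, lam_v, lam_p):
    return (S + G) / (lam_v + lam_p)

S = 5.0       # ug m^-3 h^-1, background source (assumed)
lam_v = 0.5   # h^-1, air exchange rate (assumed)
lam_p = 1.0   # h^-1, purifier removal rate (assumed)

c_off = steady_state(S, 0.0, lam_v, 0.0)          # purifier off
c_low_voc = steady_state(S, 1.0, lam_v, lam_p)    # purifier on, little VOC
c_high_voc = steady_state(S, 20.0, lam_v, lam_p)  # purifier on, limonene-rich air

# With little VOC the purifier lowers the steady state; with enough VOC the
# ozone-driven generation term G dominates and the purifier raises it.
print(c_off, c_low_voc, c_high_voc)
```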

  19. Perspectives on scaling and multiscaling in passive scalar turbulence

    NASA Astrophysics Data System (ADS)

    Banerjee, Tirthankar; Basu, Abhik

    2018-05-01

    We revisit the well-known problem of multiscaling in substances passively advected by homogeneous and isotropic turbulent flows or passive scalar turbulence. To that end we propose a two-parameter continuum hydrodynamic model for an advected substance concentration θ , parametrized jointly by y and y ¯, that characterize the spatial scaling behavior of the variances of the advecting stochastic velocity and the stochastic additive driving force, respectively. We analyze it within a one-loop dynamic renormalization group method to calculate the multiscaling exponents of the equal-time structure functions of θ . We show how the interplay between the advective velocity and the additive force may lead to simple scaling or multiscaling. In one limit, our results reduce to the well-known results from the Kraichnan model for passive scalar. Our framework of analysis should be of help for analytical approaches for the still intractable problem of fluid turbulence itself.

  20. Advanced waste management technology evaluation

    NASA Technical Reports Server (NTRS)

    Couch, H.; Birbara, P.

    1996-01-01

    The purpose of this program is to evaluate the feasibility of steam reforming spacecraft wastes into simple recyclable inorganic salts, carbon dioxide and water. Model waste compounds included cellulose, urea, methionine, Igapon TC-42, and high-density polyethylenes. These are compounds found in urine, feces, hygiene water, etc. The gasification and steam reforming process used the addition of heat and low quantities of oxygen to oxidize and reduce the model compounds. The studied reactions were aimed at recovery of inorganic residues that can be recycled into a closed biological system. Results indicate that even at very low concentrations of oxygen (less than 3%) the formation of a carbonaceous residue was suppressed. The use of a nickel/cobalt reforming catalyst at a reaction temperature of 1600 degrees yielded an efficient destruction of the organic effluents, including methane and ammonia. Additionally, the reforming process with the nickel/cobalt catalyst diminished the noxious odors associated with butyric acid, methionine and plastics.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahore, Ritu; Peebles, Cameron; Abraham, Daniel P.

    Li1.03(Ni0.5Mn0.3Co0.2)0.97O2 (NMC)-based coin cells containing the electrolyte additives vinylene carbonate (VC) and tris(trimethylsilyl)phosphite (TMSPi) in the range of 0-2 wt% were cycled between 3.0 and 4.4 V. The changes in capacity at rates of C/10 and C/1 and in resistance at 60% state of charge were found to follow linear-with-time kinetic rate laws. Further, the C/10 capacity and resistance data were amenable to modeling by a statistics-of-mixtures approach. Applying physical meaning to the terms in the empirical models indicated that the interactions between the electrolyte and additives were not simple. For example, there were strong, synergistic interactions between VC and TMSPi affecting C/10 capacity loss, as expected, but there were other, more subtle interactions between the electrolyte components. In conclusion, the interactions between these components controlled the C/10 capacity decline and resistance increase.

  2. Correlation of recent fission product release data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, T.S.; Lorenz, R.A.; Nakamura, T.

    For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas-phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used of these correlations is CORSOR-M, which is the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data have resulted from the better time resolution of the more recent data compared to the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.
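    A CORSOR-M-style correlation follows an Arrhenius form for the fractional release rate; the sketch below uses that functional form with illustrative coefficients (the published species-specific values are not reproduced here):

```python
import math

# CORSOR-M-style fractional release: k = k0 * exp(-Q / (R*T)), applied as a
# first-order release law F = 1 - exp(-k * t). Coefficients are assumed for
# illustration, not the published CORSOR-M values.
R = 1.987e-3          # kcal mol^-1 K^-1

def release_fraction(k0, Q, T, minutes):
    """Fraction of the initial inventory released after `minutes` at
    constant fuel temperature T (kelvin)."""
    k = k0 * math.exp(-Q / (R * T))   # fractional release rate, 1/min
    return 1.0 - math.exp(-k * minutes)

k0 = 2.0e5            # 1/min, assumed pre-exponential factor
Q = 63.8              # kcal/mol, assumed activation energy

f_2000K = release_fraction(k0, Q, 2000.0, 30.0)
f_2500K = release_fraction(k0, Q, 2500.0, 30.0)
print(f_2000K, f_2500K)   # release rises steeply with fuel temperature
```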

  3. A powerful and flexible approach to the analysis of RNA sequence count data

    PubMed Central

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.

    2011-01-01

    Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900

  4. Numerical model of solar dynamic radiator for parametric analysis

    NASA Technical Reports Server (NTRS)

    Rhatigan, Jennifer L.

    1989-01-01

    Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates heat transfer and fluid flow performance of the radiator and calculates area, mass and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and results of the parametric studies performed are presented.

  5. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009).

    PubMed

    Nishiura, Hiroshi

    2011-02-16

    Real-time forecasting of epidemics, especially when based on a likelihood approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of its simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance.
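    The branching-process core of such a forecast can be sketched as follows (a hedged simplification: the paper additionally models reporting intervals and conditional measurement, which are omitted here, and all numbers are illustrative):

```python
import numpy as np

# Each case produces Poisson(R) secondary cases per generation, so the sum
# of offspring over N parents is Poisson(R * N).
rng = np.random.default_rng(1)

def project(cases0, R, n_gen, n_sims=2000):
    """Simulated case counts per generation, shape (n_sims, n_gen + 1)."""
    out = np.empty((n_sims, n_gen + 1), dtype=np.int64)
    out[:, 0] = cases0
    for g in range(1, n_gen + 1):
        out[:, g] = rng.poisson(R * out[:, g - 1])
    return out

sims = project(cases0=10, R=1.4, n_gen=6)

# Percentiles of the simulated trajectories give the uncertainty bounds;
# the analytical mean of the process is cases0 * R**n_gen.
mean_final = sims[:, -1].mean()
expected = 10 * 1.4 ** 6
print(mean_final, expected)
```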

  6. Simplified and quick electrical modeling for dye sensitized solar cells: An experimental and theoretical investigation

    NASA Astrophysics Data System (ADS)

    de Andrade, Rocelito Lopes; de Oliveira, Matheus Costa; Kohlrausch, Emerson Cristofer; Santos, Marcos José Leite

    2018-05-01

    This work presents a new and simple method for determining IPH (current source dependent on luminance), I0 (reverse saturation current), n (ideality factor), RP and RS, (parallel and series resistance) to build an electrical model for dye sensitized solar cells (DSSCs). The electrical circuit parameters used in the simulation and to generate theoretical curves for the single diode electrical model were extracted from I-V curves of assembled DSSCs. Model validation was performed by assembling five different types of DSSCs and evaluating the following parameters: effect of a TiO2 blocking/adhesive layer, thickness of the TiO2 layer and the presence of a light scattering layer. In addition, irradiance, temperature, series and parallel resistance, ideality factor and reverse saturation current were simulated.
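    The single-diode equation underlying such a model can be solved point by point; the sketch below is illustrative (the parameter values are assumed for a small DSSC-like cell, not the values extracted in the paper):

```python
import math

# Single-diode model: I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp
# Iph: photocurrent, I0: reverse saturation current, n: ideality factor,
# Rs/Rp: series and parallel resistance. Solved by damped fixed-point
# iteration since I appears on both sides.
q_over_kT = 1.0 / 0.02585          # 1/Vt at ~300 K

def diode_current(V, Iph, I0, n, Rs, Rp, iters=200):
    I = Iph
    for _ in range(iters):
        Vj = V + I * Rs                # junction voltage
        I_new = Iph - I0 * (math.exp(Vj * q_over_kT / n) - 1.0) - Vj / Rp
        I = 0.5 * I + 0.5 * I_new      # damping for stable convergence
    return I

# Assumed parameters (amps, volts, ohms), not the paper's fitted values.
Iph, I0, n, Rs, Rp = 0.020, 1e-9, 1.8, 5.0, 5000.0

i_sc = diode_current(0.0, Iph, I0, n, Rs, Rp)   # short-circuit current
i_mid = diode_current(0.4, Iph, I0, n, Rs, Rp)  # partway up the I-V curve
print(i_sc, i_mid)
```

    Sweeping V from 0 to the open-circuit voltage traces the theoretical I-V curve that is compared against the measured curves of the assembled cells.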

  7. Emergent organization in a model market

    NASA Astrophysics Data System (ADS)

    Yadav, Avinash Chand; Manchanda, Kaustubh; Ramaswamy, Ramakrishna

    2017-09-01

    We study the collective behaviour of interacting agents in a simple model of market economics that was originally introduced by Nørrelykke and Bak. A general theoretical framework for interacting traders on an arbitrary network is presented, with the interaction consisting of buying (namely consumption) and selling (namely production) of commodities. Extremal dynamics is introduced by having the agent with least profit in the market readjust prices, causing the market to self-organize. In addition to examining this model market on regular lattices in two dimensions, we also study the cases of random complex networks both with and without community structures. Fluctuations in an activity signal exhibit properties that are characteristic of avalanches observed in models of self-organized criticality, and these can be described by power-law distributions when the system is in the critical state.

  8. Detailed emission profiles for on-road vehicles derived from ambient measurements during a windless traffic episode in Baltimore using a multi-model approach

    NASA Astrophysics Data System (ADS)

    Ke, Haohao; Ondov, John M.; Rogge, Wolfgang F.

    2013-12-01

    Composite chemical profiles of motor vehicle emissions were extracted from ambient measurements at a near-road site in Baltimore during a windless traffic episode in November, 2002, using four independent approaches, i.e., simple peak analysis, windless model-based linear regression, PMF, and UNMIX. Although the profiles are in general agreement, the windless-model-based profile treatment more effectively removes interference from non-traffic sources and is deemed to be more accurate for many species. In addition to abundances of routine pollutants (e.g., NOx, CO, PM2.5, EC, OC, sulfate, and nitrate), 11 particle-bound metals and 51 individual traffic-related organic compounds (including n-alkanes, PAHs, oxy-PAHs, hopanes, alkylcyclohexanes, and others) were included in the modeling.

  9. Integrability and superintegrability of the generalized n-level many-mode Jaynes-Cummings and Dicke models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skrypnyk, T.

    2009-10-15

    We analyze symmetries of the integrable generalizations of the Jaynes-Cummings and Dicke models associated with simple Lie algebras g and their reductive subalgebras g_K [T. Skrypnyk, 'Generalized n-level Jaynes-Cummings and Dicke models, classical rational r-matrices and nested Bethe ansatz', J. Phys. A: Math. Theor. 41, 475202 (2008)]. We show that their symmetry algebras contain commutative subalgebras isomorphic to the Cartan subalgebras of g, which can be added to the commutative algebras of quantum integrals generated with the help of the quantum Lax operators. We diagonalize the additional commuting integrals and construct, with their help, the most general integrable quantum Hamiltonian of the generalized n-level many-mode Jaynes-Cummings and Dicke-type models using the nested algebraic Bethe ansatz.

  10. Chromosomes, conflict, and epigenetics: chromosomal speciation revisited.

    PubMed

    Brown, Judith D; O'Neill, Rachel J

    2010-01-01

    Since Darwin first noted that the process of speciation was indeed the "mystery of mysteries," scientists have tried to develop testable models for the development of reproductive incompatibilities, the first step in the formation of a new species. Early theorists proposed that chromosome rearrangements were implicated in the process of reproductive isolation; however, the chromosomal speciation model has recently been questioned. In addition, recent data from hybrid model systems indicate that the supposedly simple epistatic interactions, the Dobzhansky-Muller incompatibilities, are more complex. In fact, incompatibilities are quite broad, including interactions among heterochromatin, small RNAs, and distinct, epigenetically defined genomic regions such as the centromere. In this review, we will examine both classical and current models of chromosomal speciation and describe the "evolving" theory of genetic conflict, epigenetics, and chromosomal speciation.

  11. Symmetric Fold/Super-Hopf Bursting, Chaos and Mixed-Mode Oscillations in Pernarowski Model of Pancreatic Beta-Cells

    NASA Astrophysics Data System (ADS)

    Fallah, Haniyeh

    Pancreatic beta-cells produce insulin to regulate the blood glucose level. Bursting is important in beta cells due to its relation to the release of insulin. The Pernarowski model is a simple polynomial model of beta-cell activity exhibiting bursting oscillations in these cells. This paper presents bursting behaviors of symmetric type in this model. In addition, it is shown that the current system exhibits the phenomenon of period-doubling cascades of canards, which is a route to chaos. Canards are also observed symmetrically near folds of the slow manifold, which results in a chaotic transition between n- and (n + 1)-spike symmetric bursting. Furthermore, mixed-mode oscillations (MMOs) and combinations of symmetric bursting together with MMOs are illustrated during the transition between symmetric bursting and continuous spiking.

  12. Manual lateralization in macaques: handedness, target laterality and task complexity.

    PubMed

    Regaiolli, Barbara; Spiezio, Caterina; Vallortigara, Giorgio

    2016-01-01

    Non-human primates represent models to understand the evolution of handedness in humans. Although several studies have investigated handedness in non-human primates, few have examined the relationship between target position, hand preference and task complexity. This study aimed at investigating macaque handedness in relation to target laterality and tastiness, as well as task complexity. Seven pig-tailed macaques (Macaca nemestrina) were involved in three different "two alternative choice" tests: one low-level task and two high-level tasks (HLTs). During the first and the third tests macaques could select a preferred food or a non-preferred food, whereas in the second test, with a modified design, macaques were presented with two alternatives that did not differ per trial. Furthermore, a simple-reaching test was administered to assess hand preference in a social context. Macaques showed hand preference at the individual level both in simple and complex tasks, but not in the simple-reaching test. Moreover, target position seemed to affect hand preference in retrieving an object in the low-level task, but not in the HLTs. Additionally, individual hand preference seemed to be affected by the tastiness of the item to be retrieved. The results suggest that both target laterality and individual motivation might influence hand preference of macaques, especially in simple tasks.

  13. A Three-Component Model for Magnetization Transfer. Solution by Projection-Operator Technique, and Application to Cartilage

    NASA Astrophysics Data System (ADS)

    Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.

    1996-01-01

    A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung,J. Magn. Reson. A104,321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson,J. Magn. Reson. A106,37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.

  14. Simple liquid models with corrected dielectric constants

    PubMed Central

    Fennell, Christopher J.; Li, Libo; Dill, Ken A.

    2012-01-01

    Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577

  15. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    NASA Astrophysics Data System (ADS)

    Harwati

    2017-06-01

    Supplier selection is a decision involving many criteria. A supplier selection model usually involves more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes a supplier selection model difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria, easy and simple to assess, can be used to select suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real case simulation shows that the simple model provides the same decision as a more complex model.
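    With the AHP weights reported above, supplier scoring reduces to a weighted sum; the supplier ratings below are hypothetical, normalized to [0, 1]:

```python
# AHP weights from the abstract: price 0.4, shipment 0.3, quality 0.2,
# service 0.1. Supplier ratings are invented for illustration.
weights = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

suppliers = {
    "A": {"price": 0.9, "shipment": 0.6, "quality": 0.7, "service": 0.5},
    "B": {"price": 0.5, "shipment": 0.9, "quality": 0.9, "service": 0.9},
}

def score(ratings):
    """Weighted-sum (simple additive) score of one supplier."""
    return sum(weights[c] * ratings[c] for c in weights)

scores = {name: score(r) for name, r in suppliers.items()}
best = max(scores, key=scores.get)
print(scores, best)
```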

  16. Simple yet effective: Historical proximity variables improve the species distribution models for invasive giant hogweed (Heracleum mantegazzianum s.l.) in Poland.

    PubMed

    Mędrzycki, Piotr; Jarzyna, Ingeborga; Obidziński, Artur; Tokarska-Guzik, Barbara; Sotek, Zofia; Pabjanek, Piotr; Pytlarczyk, Adam; Sachajdakiewicz, Izabela

    2017-01-01

    Species distribution models are scarcely applicable to invasive species because such species break the models' assumptions. So far, few mechanistic, semi-mechanistic or statistical solutions, such as dispersal constraints or propagule limitation, have been applied. We evaluated a novel quasi-semi-mechanistic approach for regional-scale models, using historical proximity variables (HPV) representing a state of the population at a given moment in the past. Our aim was to test the effects of adding HPV sets of different minimal recentness, information capacity and total number of variables on the quality of the species distribution model for Heracleum mantegazzianum over 116,000 km2 in Poland. As environmental predictors, we used fragments of 103 1×1 km, worldwide, free-access rasters from WorldGrids.org. Single and ensemble models were computed using the BIOMOD2 package 3.1.47 working in the R environment 3.1.0. The addition of HPV improved the quality of single and ensemble models from poor to good and excellent. The quality was highest for the variants with HPVs based on the distance from the most recent past occurrences. It was mostly affected by the algorithm type, but all HPV traits (minimal recentness, information capacity, model type and the number of time periods) were significantly important determinants. The addition of HPVs improved the quality of current projections, raising the occurrence probability in regions where the species had occurred before. We conclude that HPV addition enables semi-realistic estimation of the rate of spread and can be applied to the short-term forecasting of invasive or declining species, which also break equal-dispersal-probability assumptions.

  17. Advanced electric propulsion research, 1991

    NASA Technical Reports Server (NTRS)

    Monheiser, Jeffery M.

    1992-01-01

    A simple model for the production of ions that impinge on and sputter erode the accelerator grid of an ion thruster is presented. Charge-exchange and electron-impact ion production processes are considered, but initial experimental results suggest the charge-exchange process dominates. Additional experimental results show the effects of changes in thruster operating conditions on the length of the region from which these ions are drawn upstream into the grid. Results which show erosion patterns and indicate molybdenum accelerator grids erode more rapidly than graphite ones are also presented.

  18. Kicked-Harper model versus on-resonance double-kicked rotor model: From spectral difference to topological equivalence

    NASA Astrophysics Data System (ADS)

    Wang, Hailong; Ho, Derek Y. H.; Lawton, Wayne; Wang, Jiao; Gong, Jiangbin

    2013-11-01

    Recent studies have established that, in addition to the well-known kicked-Harper model (KHM), an on-resonance double-kicked rotor (ORDKR) model also has Hofstadter's butterfly Floquet spectrum, with strong resemblance to the standard Hofstadter spectrum that is a paradigm in studies of the integer quantum Hall effect. Earlier it was shown that the quasienergy spectra of these two dynamical models (i) can exactly overlap with each other if an effective Planck constant takes irrational multiples of 2π and (ii) will be different if the same parameter takes rational multiples of 2π. This work makes detailed comparisons between these two models, with an effective Planck constant given by 2πM/N, where M and N are coprime and odd integers. It is found that the ORDKR spectrum (with two periodic kicking sequences having the same kick strength) has one flat band and N-1 nonflat bands with the largest bandwidth decaying in a power law as ~K^(N+2), where K is a kick strength parameter. The existence of a flat band is strictly proven and the power-law scaling, numerically checked for a number of cases, is also analytically proven for a three-band case. By contrast, the KHM does not have any flat band and its bandwidths scale linearly with K. This is shown to result in dramatic differences in dynamical behavior, such as transient (but extremely long) dynamical localization in ORDKR, which is absent in the KHM. Finally, we show that despite these differences, there exist simple extensions of the KHM and ORDKR model (upon introducing an additional periodic phase parameter) such that the resulting extended KHM and ORDKR model are actually topologically equivalent, i.e., they yield exactly the same Floquet-band Chern numbers and display topological phase transitions at the same kick strengths. A theoretical derivation of this topological equivalence is provided.
These results are also of interest to our current understanding of quantum-classical correspondence considering that the KHM and ORDKR model have exactly the same classical limit after a simple canonical transformation.

  19. Exploring global carbon turnover and radiocarbon cycling in terrestrial biosphere models

    NASA Astrophysics Data System (ADS)

    Graven, H. D.; Warren, H.

    2017-12-01

    The uptake of carbon into terrestrial ecosystems through net primary productivity (NPP) and the turnover of that carbon through various pathways are the fundamental drivers of changing carbon stocks on land, in addition to human-induced and natural disturbances. Terrestrial biosphere models use different formulations for carbon uptake and release, resulting in a range of values in NPP of 40-70 PgC/yr and biomass turnover times of about 25-40 years for the preindustrial period in current-generation models from CMIP5. Biases in carbon uptake and turnover impact simulated carbon uptake and storage in the historical period and later in the century under changing climate and CO2 concentration; however, evaluating global-scale NPP and carbon turnover is challenging. Scaling up of plot-scale measurements involves uncertainty due to the large heterogeneity across ecosystems and biomass types, some of which are not well observed. We are developing the modelling of radiocarbon in terrestrial biosphere models, with a particular focus on decadal 14C dynamics after the nuclear weapons testing in the 1950s-60s, including the impact of carbon flux trends and variability on 14C cycling. We use an estimate of the total inventory of excess 14C in the biosphere constructed by Naegler and Levin (2009) using a 14C budget approach incorporating estimates of total 14C produced by the weapons tests and atmospheric and oceanic 14C observations. By simulating radiocarbon in simple biosphere box models using carbon fluxes from the CMIP5 models, we find that carbon turnover is too rapid in many of the simple models: the models appear to take up too much 14C and release it too quickly. Therefore, many CMIP5 models may also simulate carbon turnover that is too rapid. A caveat is that the simple box models we use may not adequately represent carbon dynamics in the full-scale models.
Explicit simulation of radiocarbon in terrestrial biosphere models would allow more robust evaluation of biosphere models and the investigation of climate-carbon cycle feedbacks on various timescales. Explicit simulation of radiocarbon and carbon-13 in terrestrial biosphere models of Earth System Models, as well as in ocean models, is recommended by CMIP6 and supported by CMIP6 protocols and forcing datasets.

  20. Thermal performance modeling of NASA's scientific balloons

    NASA Astrophysics Data System (ADS)

    Franco, H.; Cathey, H.

    The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment: the balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses using either a bulk thermal model approach or simplified representations of the balloon. These approaches have to date provided reasonable, but not very accurate, results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" add-on to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental stages of analysis to assess the accuracy of the tool and the model resolution required to produce usable data. The first-stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine the required model resolution. These models were then modified to include additional details such as load tapes. The second-stage analyses looked at natural-shaped Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin-shaped balloons.
The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first-order estimates and detailed full models. The development of the radiative environment and program input files, the development of the modeling techniques for balloons, and the development of appropriate data output handling techniques for both the raw data and data plots will be discussed. A general guideline for matching predicted balloon performance with known flight data will also be presented. One long-term goal of this effort is to develop simplified approaches and techniques so that results can be included in performance codes being developed.

  1. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions was performed with the various prediction models and used to examine issues of multi-model ensemble seasonal prediction, such as the best ways of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite formed after statistically correcting the individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially in the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
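    The three combination schemes compared above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function names and the choice of a per-model linear correction for the corrected composite are our assumptions.

```python
import numpy as np

def simple_composite(preds):
    # Equal-weight average of the individual model predictions.
    return np.mean(preds, axis=0)

def corrected_composite(preds, obs_train, preds_train):
    # Statistically correct each model against training observations
    # (here a per-model linear regression), then average.
    corrected = []
    for p, p_train in zip(preds, preds_train):
        a, b = np.polyfit(p_train, obs_train, 1)
        corrected.append(a * np.asarray(p) + b)
    return np.mean(corrected, axis=0)

def superensemble(preds, obs_train, preds_train):
    # Multiple linear regression of observations on all models jointly;
    # prone to overfitting when the training sample is small.
    X = np.column_stack(list(preds_train) + [np.ones(len(obs_train))])
    coef, *_ = np.linalg.lstsq(X, obs_train, rcond=None)
    X_new = np.column_stack(list(preds) + [np.ones(len(preds[0]))])
    return X_new @ coef
```

    The superensemble's extra regression coefficients are exactly what makes it flexible on the training set and fragile out of sample, which is the overfitting behavior the abstract reports.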

  2. A simple geometrical model describing shapes of soap films suspended on two rings

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical frusta-based model, where a conical frustum is defined, as usual, as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
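    A minimal version of such a frusta-based computation can be written directly in code rather than a spreadsheet. The sketch below is our own illustration under stated assumptions: the ring radius, separation and node count are hypothetical, and the bisection solve for the catenoid parameter is ours. It checks that a catenoid profile, discretized as a stack of frusta, has less lateral area than a straight cylinder between the same rings.

```python
import math

def frusta_area(radii, height):
    # Total lateral area of a stack of conical frusta approximating an
    # axisymmetric film; radii are sampled at equal height steps.
    n = len(radii) - 1
    dz = height / n
    area = 0.0
    for r1, r2 in zip(radii[:-1], radii[1:]):
        slant = math.sqrt((r1 - r2) ** 2 + dz ** 2)
        area += math.pi * (r1 + r2) * slant
    return area

# Two coaxial rings of radius R separated by height H (hypothetical values).
R, H, N = 1.0, 1.0, 200
z = [H * (i / N - 0.5) for i in range(N + 1)]

# Solve R = c*cosh(H/(2c)) for the stable catenoid parameter c by bisection.
f = lambda c: c * math.cosh(H / (2 * c)) - R
lo, hi = 0.3, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = 0.5 * (lo + hi)

catenoid = [c * math.cosh(zi / c) for zi in z]
cylinder = [R] * (N + 1)
```

    With H/R = 1, below the critical ratio at which the catenoid ceases to exist, the discretized catenoid area comes out smaller than the cylinder's 2πRH, mirroring the minimal-surface result the frusta model reproduces.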

  3. A description of rotations for DEM models of particle systems

    NASA Astrophysics Data System (ADS)

    Campello, Eduardo M. B.

    2015-06-01

    In this work, we show how a vector parameterization of rotations can be adopted to describe the rotational motion of particles within the framework of the discrete element method (DEM). It is based on the use of a special rotation vector, called the Rodrigues rotation vector, and accounts for finite rotations in a fully exact manner. The use of fictitious entities such as quaternions or complicated structures such as Euler angles is thereby circumvented. As an additional advantage, stick-slip friction models with inter-particle rolling motion are made possible in a consistent and elegant way. A few examples are provided to illustrate the applicability of the scheme. We believe that simple vector descriptions of rotations are very useful for DEM models of particle systems.
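    As a minimal sketch of this parameterization (our own illustration, not the paper's implementation), the Rodrigues vector r = tan(θ/2)·n, with n the rotation axis, rotates a vector exactly without quaternions or Euler angles:

```python
import numpy as np

def rotate_rodrigues(v, r):
    # Rotate vector v by the rotation encoded in the Rodrigues vector
    # r = tan(theta/2) * axis. Exact for any finite rotation angle
    # short of theta = pi, where |r| diverges.
    v = np.asarray(v, dtype=float)
    r = np.asarray(r, dtype=float)
    t2 = r @ r  # tan^2(theta/2)
    # Rodrigues' rotation formula rewritten in terms of r:
    # v' = v + 2/(1 + |r|^2) * (r x (r x v) + r x v)
    return v + 2.0 / (1.0 + t2) * (np.cross(r, np.cross(r, v)) + np.cross(r, v))
```

    For example, r = (0, 0, 1) encodes a 90° rotation about z (tan 45° = 1), taking the x unit vector to the y unit vector.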

  4. Neuritogenesis: A model for space radiation effects on the central nervous system

    NASA Technical Reports Server (NTRS)

    Vazquez, M. E.; Broglio, T. M.; Worgul, B. V.; Benton, E. V.

    1994-01-01

    Pivotal to the astronauts' functional integrity and survival during long space flights are the strategies to deal with space radiations. The majority of the cellular studies in this area emphasize simple endpoints such as growth related events which, although useful to understand the nature of primary cell injury, have poor predictive value for extrapolation to more complex tissues such as the central nervous system (CNS). In order to assess the radiation damage on neural cell populations, we developed an in vitro model in which neuronal differentiation, neurite extension, and synaptogenesis occur under controlled conditions. The model exploits chick embryo neural explants to study the effects of radiations on neuritogenesis. In addition, neurobiological problems associated with long-term space flights are discussed.

  5. Dynamical boson stars.

    PubMed

    Liebling, Steven L; Palenzuela, Carlos

    2017-01-01

    The idea of stable, localized bundles of energy has strong appeal as a model for particles. In the 1950s, John Wheeler envisioned such bundles as smooth configurations of electromagnetic energy that he called geons, but none were found. Instead, particle-like solutions were found in the late 1960s with the addition of a scalar field, and these were given the name boson stars. Since then, boson stars find use in a wide variety of models as sources of dark matter, as black hole mimickers, in simple models of binary systems, and as a tool in finding black holes in higher dimensions with only a single Killing vector. We discuss important varieties of boson stars, their dynamic properties, and some of their uses, concentrating on recent efforts.

  6. Cluster dynamics and cluster size distributions in systems of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Peruani, F.; Schimansky-Geier, L.; Bär, M.

    2010-12-01

    Systems of self-propelled particles (SPP) interacting by a velocity alignment mechanism in the presence of noise exhibit rich clustering dynamics. Often, clusters are responsible for the distribution of (local) information in these systems. Here, we investigate the properties of individual clusters in SPP systems, in particular the asymmetric spreading behavior of clusters with respect to their direction of motion. In addition, we formulate a Smoluchowski-type kinetic model to describe the evolution of the cluster size distribution (CSD). This model predicts the emergence of steady-state CSDs in SPP systems. We test our theoretical predictions in simulations of SPP with nematic interactions and find that our simple kinetic model reproduces qualitatively the transition to aggregation observed in simulations.
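    A Smoluchowski-type evolution of a cluster size distribution can be illustrated with a toy constant-kernel coagulation step plus monomer-shedding fragmentation; the rates and kernels here are illustrative stand-ins (our assumptions), not the kernels of the paper's model.

```python
import numpy as np

def smoluchowski_step(n, dt, k_agg=1.0, k_frag=0.1):
    # One explicit Euler step of a toy coagulation-fragmentation model.
    # n[s] is the number density of clusters of size s + 1.
    N = len(n)
    dn = np.zeros(N)
    # Coagulation: (i+1) + (j+1) -> (i+j+2); events beyond the
    # truncation size are simply forbidden, so mass is conserved.
    for i in range(N):
        for j in range(N):
            if i + j + 1 < N:
                rate = k_agg * n[i] * n[j]
                dn[i] -= rate
                dn[j] -= rate
                dn[i + j + 1] += rate
    # Fragmentation: a cluster of size s + 1 sheds one monomer.
    for s in range(1, N):
        rate = k_frag * n[s]
        dn[s] -= rate
        dn[s - 1] += rate
        dn[0] += rate
    return n + dt * dn
```

    Iterating this step drives the distribution toward a steady state in which coagulation gains balance fragmentation losses, the qualitative behavior the kinetic model predicts for SPP clusters.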

  7. Probing the exchange statistics of one-dimensional anyon models

    NASA Astrophysics Data System (ADS)

    Greschner, Sebastian; Cardarelli, Lorenzo; Santos, Luis

    2018-05-01

    We propose feasible scenarios for revealing the modified exchange statistics in one-dimensional anyon models in optical lattices, based on an extension of the multicolor lattice-depth modulation scheme introduced in [Phys. Rev. A 94, 023615 (2016), 10.1103/PhysRevA.94.023615]. We show that the fast modulation of a two-component fermionic lattice gas in the presence of a magnetic field gradient, in combination with additional resonant microwave fields, allows for the quantum simulation of hardcore anyon models with periodic boundary conditions. Such a semisynthetic ring setup allows for realizing an interferometric arrangement sensitive to the anyonic statistics. Moreover, we show that simple expansion experiments may reveal the formation of anomalously bound pairs resulting from the anyonic exchange.

  8. The Role of Breccia Lenses in Regolith Generation From the Formation of Small, Simple Craters: Application to the Apollo 15 Landing Site

    NASA Astrophysics Data System (ADS)

    Hirabayashi, M.; Howl, B. A.; Fassett, C. I.; Soderblom, J. M.; Minton, D. A.; Melosh, H. J.

    2018-02-01

    Impact cratering is likely a primary agent of regolith generation on airless bodies. Regolith production via impact cratering has long been a key topic of study since the Apollo era. The evolution of regolith due to impact cratering, however, is not well understood. A better formulation is needed to help quantify the formation mechanism and timescale of regolith evolution. Here we propose an analytically derived stochastic model that describes the evolution of regolith generated by small, simple craters. We account for ejecta blanketing as well as regolith infilling of the transient crater cavity. Our results show that the regolith infilling plays a key role in producing regolith. Our model demonstrates that because of the stochastic nature of impact cratering, the regolith thickness varies laterally, which is consistent with earlier work. We apply this analytical model to the regolith evolution at the Apollo 15 site. The regolith thickness is computed considering the observed crater size-frequency distribution of small, simple lunar craters (< 381 m in radius for ejecta blanketing and <100 m in radius for the regolith infilling). Allowing for some amount of regolith coming from the outside of the area, our result is consistent with an empirical result from the Apollo 15 seismic experiment. Finally, we find that the timescale of regolith growth is longer than that of crater equilibrium, implying that even if crater equilibrium is observed on a cratered surface, it is likely that the regolith thickness is still evolving due to additional impact craters.

  9. Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences

    NASA Astrophysics Data System (ADS)

    Peng, Yingjie

    2012-01-01

    At first sight the galaxy population appears to be composed of endlessly varied types and properties; however, when large samples of galaxies are studied, it appears that the vast majority of galaxies follow simple scaling relations and similar evolutionary modes, while the outliers represent a minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable out to z ≈ 1, indicating that two distinct physical processes are operating, namely "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, then naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the interrelationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns of passive galaxies, and makes other testable predictions, such as the mass function of the population of transitory objects that are in the process of being quenched, the galaxy major- and minor-merger rates, the galaxy stellar mass assembly history, and the star formation history. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.

  10. Deep-down ionization of protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Glassgold, A. E.; Lizano, S.; Galli, D.

    2017-12-01

    The possible occurrence of dead zones in protoplanetary discs subject to the magneto-rotational instability highlights the importance of disc ionization. We present a closed-form theory for the deep-down ionization by X-rays at depths below the disc surface dominated by far-ultraviolet radiation. Simple analytic solutions are given for the major ion classes, electrons, atomic ions, molecular ions and negatively charged grains. In addition to the formation of molecular ions by X-ray ionization of H2 and their destruction by dissociative recombination, several key processes that operate in this region are included, e.g. charge exchange of molecular ions and neutral atoms and destruction of ions by grains. Over much of the inner disc, the vertical decrease in ionization with depth into the disc is described by simple power laws, which can easily be included in more detailed modelling of magnetized discs. The new ionization theory is used to illustrate the non-ideal magnetohydrodynamic effects of Ohmic, Hall and Ambipolar diffusion for a magnetic model of a T Tauri star disc using the appropriate Elsasser numbers.

  11. A simulation assessment of the thermodynamics of dense ion-dipole mixtures with polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastea, Sorin, E-mail: sbastea@llnl.gov

    Molecular dynamics (MD) simulations are employed to ascertain the relative importance of various electrostatic interaction contributions, including induction interactions, to the thermodynamics of dense, hot ion-dipole mixtures. In the absence of polarization, we find that an MD-constrained free energy term accounting for the ion-dipole interactions, combined with well tested ionic and dipolar contributions, yields a simple, fairly accurate free energy form that may be a better option for describing the thermodynamics of such mixtures than the mean spherical approximation (MSA). Polarization contributions induced by the presence of permanent dipoles and ions are found to be additive to a good approximation, simplifying the thermodynamic modeling. We suggest simple free energy corrections that account for these two effects, based in part on standard perturbative treatments and partly on comparisons with MD simulation. Even though the proposed approximations likely need further study, they provide a first quantitative assessment of polarization contributions at high densities and temperatures and may serve as a guide for future modeling efforts.

  12. Gravitational spreading of Danu, Freyja and Maxwell Montes, Venus

    NASA Astrophysics Data System (ADS)

    Smrekar, Suzanne E.; Solomon, Sean C.

    1991-06-01

    The potential energy of elevated terrain tends to drive the collapse of the topography. This process of gravitational spreading is likely to be more important on Venus than on Earth because the higher surface temperature weakens the crust. The highest topography on Venus is Ishtar Terra. The high plateau of Lakshmi Planum has an average elevation of 3 km above mean planetary radius, and is surrounded by mountain belts. Freyja, Danu, and Maxwell Montes rise, on average, an additional 3, 0.5, and 5 km above the plateau, respectively. Recent high resolution Magellan radar images of this area, east of approx. 330 deg E, reveal widespread evidence for gravity spreading. Some observational evidence is described for gravity spreading and the implications are discussed in terms of simple mechanical models. Several simple models predict that gravity spreading should be an important process on Venus. One difficulty in using remote observations to infer interior properties is that the observed features may not have formed in response to stresses which are still active. Several causes of surface topography are briefly examined.

  13. Rotation of a spheroid in a Couette flow at moderate Reynolds numbers.

    PubMed

    Yu, Zhaosheng; Phan-Thien, Nhan; Tanner, Roger I

    2007-08-01

    The rotation of a single spheroid in a planar Couette flow as a model for simple shear flow is numerically simulated with the distributed Lagrangian multiplier based fictitious domain method. The study is focused on the effects of inertia on the orbital behavior of prolate and oblate spheroids. The numerical orbits are found to be well described by a simple empirical model, which states that the rate of the spheroid rotation about the vorticity axis is a sinusoidal function of the corresponding projection angle in the flow-gradient plane, and that the exponential growth rate of the orbit function is a constant. The following transitions in the steady state with increasing Reynolds number are identified: Jeffery orbit, tumbling, quasi-Jeffery orbit, log rolling, and inclined rolling for a prolate spheroid; and Jeffery orbit, log rolling, inclined rolling, and motionless state for an oblate spheroid. In addition, it is shown that the orbit behavior is sensitive to the initial orientation in the case of strong inertia and there exist different steady states for certain shear Reynolds number regimes.
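    In the zero-inertia limit, the sinusoidal dependence of the rotation rate on the projection angle is Jeffery's classical result. The sketch below (our own illustration; the aspect ratio, shear rate and step size are hypothetical) integrates Jeffery's equation for the projection angle and recovers the analytic orbit period T = 2π(r + 1/r)/γ̇:

```python
import math

def jeffery_phi_dot(phi, r, gamma=1.0):
    # Jeffery's rate of rotation of the spheroid axis projection angle
    # phi in the flow-gradient plane; r is the aspect ratio and gamma
    # the shear rate (zero-inertia limit; inertia modifies this).
    return gamma / (r**2 + 1) * (r**2 * math.cos(phi)**2 + math.sin(phi)**2)

# Integrate one full orbit with explicit Euler and measure the period.
r, gamma, dt = 3.0, 1.0, 1e-4
phi, t = 0.0, 0.0
while phi < 2 * math.pi:
    phi += dt * jeffery_phi_dot(phi, r, gamma)
    t += dt

T_analytic = 2 * math.pi * (r + 1 / r) / gamma
```

    Inertia breaks this picture, which is what produces the tumbling, log-rolling and inclined-rolling states catalogued in the abstract.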

  14. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  15. Transparent Helium in Stripped Envelope Supernovae

    NASA Astrophysics Data System (ADS)

    Piro, Anthony L.; Morozova, Viktoriya S.

    2014-09-01

    Using simple arguments based on photometric light curves and velocity evolution, we propose that some stripped envelope supernovae (SNe) show signs that a significant fraction of their helium is effectively transparent. The main pieces of evidence are the relatively low velocities with little velocity evolution, as are expected deep inside an exploding star, along with temperatures that are too low to ionize helium. This means that the helium should not contribute to the shaping of the main SN light curve, and thus the total helium mass may be difficult to measure from simple light curve modeling. Conversely, such modeling may be more useful for constraining the mass of the carbon/oxygen core of the SN progenitor. Other stripped envelope SNe show higher velocities and larger velocity gradients, which require an additional opacity source (perhaps the mixing of heavier elements or radioactive nickel) to prevent the helium from being transparent. We discuss ways in which similar analysis can provide insights into the differences and similarities between SNe Ib and Ic, which will lead to a better understanding of their respective formation mechanisms.

  16. Machine learning methods for locating re-entrant drivers from electrograms in a model of atrial fibrillation

    NASA Astrophysics Data System (ADS)

    McGillivray, Max Falkenberg; Cheng, William; Peters, Nicholas S.; Christensen, Kim

    2018-04-01

    Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation (AF). Using a simple cellular automata model of AF, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The proposed method employs simple, out-of-the-box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 95.1% (92.6%) for the first (second) driver in tissues containing two drivers of AF. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. In their current form, the techniques presented are not refined enough for a clinical setting. However, the methods proposed offer a promising path for future investigations aimed at improving targeted ablation for AF.

  17. Drop formation, pinch-off dynamics and liquid transfer of simple and complex fluids

    NASA Astrophysics Data System (ADS)

    Dinic, Jelena; Sharma, Vivek

    Liquid transfer and drop formation processes underlying jetting, spraying, coating, and printing - inkjet, screen, roller-coating, gravure, nanoimprint hot embossing, 3D - often involve the formation of unstable columnar necks. Capillary-driven thinning of such necks and their pinch-off dynamics are determined by a complex interplay of inertial, viscous and capillary stresses for simple, Newtonian fluids. Micro-structural changes in response to the extensional flow field that arises within the thinning neck give rise to additional viscoelastic stresses in complex, non-Newtonian fluids. Using FLOW-3D, we simulate flows realized in prototypical geometries (dripping, and a liquid bridge stretched between two parallel plates) used for studying pinch-off dynamics and the influence of microstructure and viscoelasticity. In contrast with often-used 1D or 2D models, FLOW-3D allows a robust evaluation of the magnitude of the underlying stresses and of the extensional flow field (both uniformity and magnitude). We find that the simulated radius evolution profiles match the pinch-off dynamics that are experimentally observed and theoretically predicted for model Newtonian fluids and complex fluids.

  18. Design of flat pneumatic artificial muscles

    NASA Astrophysics Data System (ADS)

    Wirekoh, Jackson; Park, Yong-Lae

    2017-03-01

    Pneumatic artificial muscles (PAMs) have gained wide use in the field of robotics due to their ability to generate linear forces and motions with a simple mechanism, while remaining lightweight and compact. However, PAMs are limited by their traditional cylindrical form factors, which must increase radially to improve contraction force generation. Additionally, this form factor results in overly complicated fabrication processes when embedded fibers and sensor elements are required to provide efficient actuation and control of the PAMs while minimizing the bulkiness of the overall robotic system. In order to overcome these limitations, a flat two-dimensional PAM capable of being fabricated using a simple layered manufacturing process was created. Furthermore, a theoretical model was developed using von Kármán's formulation for large deformations and energy methods. Experimental characterizations of two different types of PAMs, a single-cell unit and a multi-cell unit, were performed to measure the maximum contraction lengths and forces at input pressures ranging from 0 to 150 kPa. Experimental data were then used to verify the fidelity of the theoretical model.

  19. Inference of mantle viscosity for depth resolutions of GIA observations

    NASA Astrophysics Data System (ADS)

    Nakada, Masao; Okuno, Jun'ichi

    2016-11-01

    Inference of the mantle viscosity from observations of the glacial isostatic adjustment (GIA) process has usually been conducted through analyses based on the simple three-layer viscosity model characterized by lithospheric thickness and upper- and lower-mantle viscosities. Here, we examine the viscosity structures for the simple three-layer viscosity model and also for the two-layer lower-mantle viscosity model defined by viscosities of η670,D (670-D km depth) and ηD,2891 (D-2891 km depth) with D-values of 1191, 1691 and 2191 km. The upper-mantle rheological parameters for the two-layer lower-mantle viscosity model are the same as those for the simple three-layer one. For the simple three-layer viscosity model, a rate of change of the degree-two zonal harmonic of the geopotential due to the GIA process (GIA-induced J̇2) of -(6.0-6.5) × 10⁻¹¹ yr⁻¹ provides two permissible viscosity solutions for the lower mantle, (7-20) × 10²¹ and (5-9) × 10²² Pa s, and the analyses with observational constraints of the J̇2 and Last Glacial Maximum (LGM) sea levels at Barbados and Bonaparte Gulf indicate (5-9) × 10²² Pa s for the lower mantle. However, the analyses for the J̇2 based on the two-layer lower-mantle viscosity model only require a viscosity layer higher than (5-10) × 10²¹ Pa s for a depth above the core-mantle boundary (CMB), in which the value of (5-10) × 10²¹ Pa s corresponds to the solution of (7-20) × 10²¹ Pa s for the simple three-layer one. Moreover, the analyses with the J̇2 and LGM sea level constraints for the two-layer lower-mantle viscosity model indicate two viscosity solutions: η670,1191 > 3 × 10²¹ and η1191,2891 ≈ (5-10) × 10²² Pa s, and η670,1691 > 10²² and η1691,2891 ≈ (5-10) × 10²² Pa s. The inferred upper-mantle viscosity for such solutions is (1-4) × 10²⁰ Pa s, similar to the estimate for the simple three-layer viscosity model. 
That is, these analyses require a high-viscosity layer of (5-10) × 10²² Pa s at least in the deep mantle, and suggest that the GIA-based lower-mantle viscosity structure should be treated carefully in discussing the mantle dynamics related to the viscosity jump at ≈670 km depth. We also preliminarily put additional constraints on these viscosity solutions by examining typical relative sea level (RSL) changes used to infer the lower-mantle viscosity. The viscosity solution inferred from the far-field RSL changes in the Australian region is consistent with those for the J̇2 and LGM sea levels, and the analyses for RSL changes at Southport and Bermuda in the intermediate region for the North American ice sheets suggest the solution of η670,D > 10²², ηD,2891 ≈ (5-10) × 10²² Pa s (D = 1191 or 1691 km) and an upper-mantle viscosity higher than 6 × 10²⁰ Pa s.

  20. Phase space effects on fast ion distribution function modeling in tokamaks

    NASA Astrophysics Data System (ADS)

    Podestà, M.; Gorelenkova, M.; Fredrickson, E. D.; Gorelenkov, N. N.; White, R. B.

    2016-05-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining the correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as the internal consistency of the simulations and mode stability, through the analysis of the power exchanged between energetic particles and the instabilities.

  1. Phase space effects on fast ion distribution function modeling in tokamaks

    DOE Data Explorer

    White, R. B. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Podesta, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkova, M. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Fredrickson, E. D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Gorelenkov, N. N. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)

    2016-06-01

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somewhat arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining the correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of the predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as the internal consistency of the simulations and mode stability, through the analysis of the power exchanged between energetic particles and the instabilities.

  2. A simple algorithm for sequentially incorporating gravity observations in seismic traveltime tomography

    USGS Publications Warehouse

    Parsons, T.; Blakely, R.J.; Brocher, T.M.

    2001-01-01

    The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
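The velocity-to-density conversion step can be illustrated with Gardner's rule. The coefficients below are the commonly quoted textbook values (0.31 with velocity in m/s, yielding density in g/cm³); the study may have used different constants, so treat this as a hedged sketch of the conversion only:

```python
def gardner_density(v_mps, a=0.31, b=0.25):
    """Convert P-wave velocity (m/s) to bulk density (g/cm^3) via Gardner's rule."""
    return a * v_mps ** b

# Example: a typical upper-crustal velocity of 3000 m/s
print(round(gardner_density(3000.0), 2))  # 2.29
```

Because the rule is monotonic, adjustments to the velocity model translate directly into gravity-anomaly changes of the same sign, which is what lets the residual-scaled updates converge.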

  3. A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)

    NASA Astrophysics Data System (ADS)

    Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.

    2014-05-01

    Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interactions across the data passed between models. Capabilities have been developed to allow programs written in common languages such as matlab, python and scilab to share their data with other programs and accept other program's data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through use of SSW implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/

  4. The role of strength defects in shaping impact crater planforms

    NASA Astrophysics Data System (ADS)

    Watters, W. A.; Geiger, L. M.; Fendrock, M.; Gibson, R.; Hundal, C. B.

    2017-04-01

    High-resolution imagery and digital elevation models (DEMs) were used to measure the planimetric shapes of well-preserved impact craters. These measurements were used to characterize the size-dependent scaling of the departure from circular symmetry, which provides useful insights into the processes of crater growth and modification. For example, we characterized the dependence of the standard deviation of radius (σR) on crater diameter (D) as σR ∼ Dm. For complex craters on the Moon and Mars, m ranges from 0.9 to 1.2 among strong and weak target materials. For the martian simple craters in our data set, m varies from 0.5 to 0.8. The value of m tends toward larger values in weak materials and modified craters, and toward smaller values in relatively unmodified craters as well as craters in high-strength targets, such as young lava plains. We hypothesize that m ≈ 1 for planforms shaped by modification processes (slumping and collapse), whereas m tends toward ∼ 1/2 for planforms shaped by an excavation flow that was influenced by strength anisotropies. Additional morphometric parameters were computed to characterize the following planform properties: the planform aspect ratio or ellipticity, the deviation from a fitted ellipse, and the deviation from a convex shape. We also measured the distribution of crater shapes using Fourier decomposition of the planform, finding a similar distribution for simple and complex craters. By comparing the strength of small and large circular harmonics, we confirmed that lunar and martian complex craters are more polygonal at small sizes. Finally, we have used physical and geometrical principles to motivate scaling arguments and simple Monte Carlo models for generating synthetic planforms, which depend on a characteristic length scale of target strength defects. One of these models can be used to generate populations of synthetic planforms which are very similar to the measured population of well-preserved simple craters on Mars.
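The scaling exponent m in the relation σR ∼ Dᵐ can be estimated by ordinary least squares in log-log space. This is a generic sketch of such a fit on synthetic data, not the authors' actual fitting procedure:

```python
import math

def powerlaw_exponent(D, sigma_R):
    """Fit sigma_R = c * D**m by least squares on log-transformed data; return m."""
    x = [math.log(d) for d in D]
    y = [math.log(s) for s in sigma_R]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

# Synthetic planform data with a known exponent m = 1.0
D = [2.0, 5.0, 10.0, 20.0, 50.0]
sigma = [0.05 * d for d in D]
print(round(powerlaw_exponent(D, sigma), 3))  # 1.0
```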

  5. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  6. Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting

    PubMed Central

    Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M

    2014-01-01

Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind “noise,” which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical “downscaling” of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key Points: Solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; the magnetosphere responds nonlinearly to small-scale solar wind fluctuations. PMID:26213518
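The downscaling idea above (smooth the observed series to mimic coarse model output, then restore small-scale variability as random noise drawn per ensemble member) can be sketched as follows. An 8-point moving average stands in for the 8 h filter, and simple Gaussian noise stands in for the paper's PDF-based noise parameterization; both are placeholders, not the authors' scheme:

```python
import random

def smooth(series, window=8):
    """Moving average, standing in for the 8 h filter applied to observations."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def downscale(smoothed, residual_std, seed=0):
    """Add Gaussian noise as a stand-in for observed small-scale structure."""
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, residual_std) for s in smoothed]

rng = random.Random(1)
obs = [400.0 + 20.0 * rng.random() for _ in range(100)]   # toy solar wind speed series
model_like = smooth(obs)                                   # coarse "model output"
ensemble = [downscale(model_like, residual_std=5.0, seed=k) for k in range(10)]
print(len(ensemble), len(ensemble[0]))  # 10 100
```

Each ensemble member then drives the magnetospheric model separately, so the spread of outputs quantifies the forecast uncertainty.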

  7. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on cancer molecular prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  8. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to recordings of real cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
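The two equations of the adaptive exponential integrate-and-fire model can be integrated with a minimal forward-Euler loop. The parameter values below are illustrative textbook-style constants, not the fitted cortical values from the paper:

```python
import math

def adex(I, T=0.5, dt=1e-4):
    """Adaptive exponential integrate-and-fire neuron, forward Euler.

    C dV/dt = -gL(V-EL) + gL*dT*exp((V-VT)/dT) - w + I
    tau_w dw/dt = a(V-EL) - w;  on spike: V -> Vr, w -> w + b.
    Returns spike times (s) for a constant input current I (A) over T seconds.
    """
    # Illustrative parameters (SI units), not the paper's fitted values
    C, gL, EL = 281e-12, 30e-9, -70.6e-3
    VT, dT = -50.4e-3, 2e-3
    a, tau_w, b, Vr = 4e-9, 144e-3, 80.5e-12, -70.6e-3
    Vpeak = 0.0
    V, w, t, spikes = EL, 0.0, 0.0, []
    while t < T:
        dV = (-gL * (V - EL) + gL * dT * math.exp((V - VT) / dT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:          # spike: record, then reset V and increment adaptation
            spikes.append(t)
            V, w = Vr, w + b
        t += dt
    return spikes

# A suprathreshold step current produces (adapting) tonic spiking
print(len(adex(I=0.8e-9)) > 0)  # True
```

Changing a, b, tau_w and Vr moves the model between the firing regimes (tonic, adapting, bursting) described in the abstract.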

  9. pyhector: A Python interface for the simple climate model Hector

    DOE PAGES

    Willner, Sven N.; Hartin, Corinne; Gieseke, Robert

    2017-04-01

Here, pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015), which is developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system. The model input is time series of greenhouse gas emissions; as example scenarios for these, the pyhector package contains the Representative Concentration Pathways (RCPs).

  10. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties in a fiber optic ring network. This method differs from conventional methods in that end to end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support realtime communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
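The modulo-two (XOR) addition described above can be sketched in a few lines. Deriving the keystream from the initialization vector with a seeded PRNG is our simplification for illustration, not the FDDI proposal's exact keystream mechanism, and a toy PRNG keystream is of course not cryptographically strong:

```python
import random

def xor_stream(data: bytes, iv: int) -> bytes:
    """Modulo-two addition of the message against a keystream derived from the
    initialization vector. XOR is its own inverse, so the same call both
    encrypts and decrypts."""
    rng = random.Random(iv)  # keystream generator seeded by the IV (our simplification)
    return bytes(b ^ rng.randrange(256) for b in data)

message = b"confidential frame payload"
iv = 0xC0FFEE  # hypothetical per-transmission initialization vector
ciphertext = xor_stream(message, iv)
print(xor_stream(ciphertext, iv) == message)  # True
```

The security of the scheme rests entirely on keeping the IV off the ring, which is the point of the per-transmission IV generation described in the abstract.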

  11. ViSimpl: Multi-View Visual Analysis of Brain Simulation Data

    PubMed Central

    Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis

    2016-01-01

    After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062

  12. ViSimpl: Multi-View Visual Analysis of Brain Simulation Data.

    PubMed

    Galindo, Sergio E; Toharia, Pablo; Robles, Oscar D; Pastor, Luis

    2016-01-01

    After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures.

  13. Identification and simulation evaluation of an AH-64 helicopter hover math model

    NASA Technical Reports Server (NTRS)

    Schroeder, J. A.; Watson, D. C.; Tischler, M. B.; Eshow, M. M.

    1991-01-01

Frequency-domain parameter-identification techniques were used to develop a hover mathematical model of the AH-64 Apache helicopter from flight data. The unstable AH-64 bare-airframe characteristics without a stability-augmentation system were parameterized in the conventional stability-derivative form. To improve the model's vertical response, a simple transfer-function model approximating the effects of dynamic inflow was developed. Additional subcomponents of the vehicle were also modeled and simulated, such as a basic engine response for hover and the vehicle stick dynamic characteristics. The model, with and without stability augmentation, was then evaluated by AH-64 pilots in a moving-base simulation. It was the opinion of the pilots that the simulation was a satisfactory representation of the aircraft for the tasks of interest. The principal negative comment was that height control was more difficult in the simulation than in the aircraft.

  14. The design, analysis, and testing of a low-budget wind-tunnel flutter model with active aerodynamic controls

    NASA Technical Reports Server (NTRS)

    Bolding, R. M.; Stearman, R. O.

    1976-01-01

A low-budget flutter model incorporating active aerodynamic controls for flutter suppression studies was designed as both an educational and research tool to study the interfering lifting surface flutter phenomenon in the form of a swept wing-tail configuration. A flutter suppression mechanism was demonstrated on a simple semirigid three-degree-of-freedom flutter model of this configuration employing an active stabilator control, and was then verified analytically using a doublet lattice lifting surface code and the model's measured mass, mode shapes, and frequencies in a flutter analysis. Preliminary results were sufficiently encouraging to extend the analysis to the larger-degree-of-freedom AFFDL wing-tail flutter model, where additional analytical flutter suppression studies indicated significant gains in flutter margins could be achieved. The analytical and experimental design of a flutter suppression system for the AFFDL model is presented along with the results of a preliminary passive flutter test.

  15. Improving Secondary Organic Aerosol (SOA) Models using Global Sensitivity Analysis and by Comparison to Chamber Data.

    NASA Astrophysics Data System (ADS)

    Miller, D. O.; Brune, W. H.

    2017-12-01

Accurate estimation of secondary organic aerosol (SOA) in atmospheric models is a major research challenge due to the complexity of the chemical and physical processes involved in SOA formation and continuous aging. The primary uncertainties of SOA models include those associated with the formation of gas-phase products, the conversion between the gas phase and the particle phase, the aging mechanisms of SOA, and other processes related to heterogeneous and particle-phase reactions. To address this challenge, we use a modular modeling framework that combines both simple and near-explicit gas-phase reactions and a two-dimensional volatility basis set (2D-VBS) to simulate the formation and evolution of SOA. Global sensitivity analysis is used to assess the relative importance of the model input parameters. In addition, the model is compared to measurements from the Focused Isoprene eXperiment at the California Institute of Technology (FIXCIT).

  16. Modeling Age-Related Differences in Immediate Memory Using SIMPLE

    ERIC Educational Resources Information Center

    Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.

    2006-01-01

    In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…

  17. Predicting Fish Densities in Lotic Systems: a Simple Modeling Approach

    EPA Science Inventory

    Fish density models are essential tools for fish ecologists and fisheries managers. However, applying these models can be difficult because of high levels of model complexity and the large number of parameters that must be estimated. We designed a simple fish density model and te...

  18. Simple shear of deformable square objects

    NASA Astrophysics Data System (ADS)

    Treagus, Susan H.; Lan, Labao

    2003-12-01

Finite element models of square objects in a contrasting matrix in simple shear show that the objects deform to a variety of shapes. For a range of viscosity contrasts, we catalogue the changing shapes and orientations of objects in progressive simple shear. At moderate simple shear (γ = 1.5), the shapes are virtually indistinguishable from those in equivalent pure shear models with the same bulk strain (R_S = 4), examined in a previous study. In theory, differences would be expected, especially for very stiff objects or at very large strain. In all our simple shear models, relatively competent square objects become asymmetric barrel shapes with concave shortened edges, similar to some types of boudin. Incompetent objects develop shapes surprisingly similar to mica fish described in mylonites.

  19. Simulation of upwind maneuvering of a sailing yacht

    NASA Astrophysics Data System (ADS)

    Harris, Daniel Hartrick

    A time domain maneuvering simulation of an IACC class yacht suitable for the analysis of unsteady upwind sailing including tacking is presented. The simulation considers motions in six degrees of freedom. The hydrodynamic and aerodynamic loads are calculated primarily with unsteady potential theory supplemented by empirical viscous models. The hydrodynamic model includes the effects of incident waves. Control of the rudder is provided by a simple rate feedback autopilot which is augmented with open loop additions to mimic human steering. The hydrodynamic models are based on the superposition of force components. These components fall into two groups, those which the yacht will experience in calm water, and those due to incident waves. The calm water loads are further divided into zero Froude number, or "double body" maneuvering loads, hydrostatic loads, gravitational loads, free surface radiation loads, and viscous/residual loads. The maneuvering loads are calculated with an unsteady panel code which treats the instantaneous geometry of the yacht below the undisturbed free surface. The free surface radiation loads are calculated via convolution of impulse response functions derived from seakeeping strip theory. The viscous/residual loads are based upon empirical estimates. The aerodynamic model consists primarily of a database of steady state sail coefficients. These coefficients treat the individual contributions to the total sail force of a number of chordwise strips on both the main and jib. Dynamic effects are modeled by using the instantaneous incident wind velocity and direction as the independent variables for the sail load contribution of each strip. The sail coefficient database was calculated numerically with potential methods and simple empirical viscous corrections. Additional aerodynamic load calculations are made to determine the parasitic contributions of the rig and hull. 
Validation studies compare the steady sailing hydro and aerodynamic loads, seaway induced motions, added resistance in waves, and tacking performance with trials data and other sources. Reasonable agreement is found in all cases.

  20. Simple yet effective: Historical proximity variables improve the species distribution models for invasive giant hogweed (Heracleum mantegazzianum s.l.) in Poland

    PubMed Central

    Jarzyna, Ingeborga; Obidziński, Artur; Tokarska-Guzik, Barbara; Sotek, Zofia; Pabjanek, Piotr; Pytlarczyk, Adam; Sachajdakiewicz, Izabela

    2017-01-01

Species distribution models are scarcely applicable to invasive species because of their breaking of the models’ assumptions. So far, few mechanistic, semi-mechanistic or statistical solutions like dispersal constraints or propagule limitation have been applied. We evaluated a novel quasi-semi-mechanistic approach for regional scale models, using historical proximity variables (HPV) representing a state of the population in a given moment in the past. Our aim was to test the effects of addition of HPV sets of different minimal recentness, information capacity and the total number of variables on the quality of the species distribution model for Heracleum mantegazzianum on 116,000 km2 in Poland. As environmental predictors, we used fragments of 103 1×1 km, worldwide, free-access rasters from WorldGrids.org. Single and ensemble models were computed using the BIOMOD2 package 3.1.47 running in the R environment 3.1.0. The addition of HPV improved the quality of single and ensemble models from poor to good and excellent. The quality was the highest for the variants with HPVs based on the distance from the most recent past occurrences. It was mostly affected by the algorithm type, but all HPV traits (minimal recentness, information capacity, model type or the number of the time periods) were significantly important determinants. The addition of HPVs improved the quality of current projections, raising the occurrence probability in regions where the species had occurred before. We conclude that HPV addition enables semi-realistic estimation of the rate of spread and can be applied to the short-term forecasting of invasive or declining species, which also break equal-dispersal probability assumptions. PMID:28926580
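A historical proximity variable of the simplest kind, the distance from each grid cell to the nearest past occurrence, can be computed as below. This is a brute-force toy sketch on a small grid; the study's actual rasters and BIOMOD2 workflow are not reproduced here:

```python
import math

def proximity_raster(nrows, ncols, past_occurrences):
    """Distance (in cell units) from every cell to the nearest past occurrence."""
    out = [[0.0] * ncols for _ in range(nrows)]
    for r in range(nrows):
        for c in range(ncols):
            out[r][c] = min(math.hypot(r - pr, c - pc)
                            for pr, pc in past_occurrences)
    return out

# Two hypothetical past occurrences at opposite corners of a 5x5 grid
hpv = proximity_raster(5, 5, past_occurrences=[(0, 0), (4, 4)])
print(hpv[0][0], round(hpv[2][2], 3))  # 0.0 2.828
```

Such a layer is then stacked alongside the environmental predictors, which is how the "quasi-semi-mechanistic" dispersal signal enters an otherwise correlative model.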

  1. Autonomous frequency domain identification: Theory and experiment

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.

    1989-01-01

The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
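The nonparametric step ĥ = P_uy/P_uu can be illustrated with a DFT-based ratio. Circular convolution is used in this toy version so that Y(k) = H(k)·U(k) holds exactly and the ratio recovers the true frequency response; a real implementation would use windowed cross-spectral averaging instead:

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (fine for a toy example)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def freq_response_estimate(u, y):
    """Empirical transfer function estimate H(k) = Y(k)/U(k), the DFT analogue
    of the cross-spectral ratio P_uy / P_uu."""
    return [yk / uk for yk, uk in zip(dft(y), dft(u))]

N = 32
rng = random.Random(0)
u = [rng.gauss(0.0, 1.0) for _ in range(N)]   # stochastic input
h = [1.0, 0.5, 0.25]                          # true impulse response (toy plant)
y = [sum(h[m] * u[(n - m) % N] for m in range(len(h)))  # circular convolution,
     for n in range(N)]                                  # so Y(k) = H(k) U(k) exactly
H_hat = freq_response_estimate(u, y)
print(round(abs(H_hat[0]), 2))  # 1.75 (DC gain = sum of the impulse response)
```

The additive-uncertainty estimate then repeats the same ratio with the output error e = y − ŷ in place of y.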

  2. Salt-induced aggregation and fusion of dioctadecyldimethylammonium chloride and sodium dihexadecylphosphate vesicles.

    PubMed Central

    Carmona-Ribeiro, A M; Chaimovich, H

    1986-01-01

Small dioctadecyldimethylammonium chloride (DODAC) vesicles prepared by sonication fuse upon addition of NaCl, as detected by several methods (electron microscopy, trapped volume determinations, temperature-dependent phase transition curves, and osmometer behavior). In contrast, small sodium dihexadecyl phosphate (DHP) vesicles mainly aggregate upon NaCl addition, as shown by electron microscopy and the lack of osmometer behavior. Scatter-derived absorbance changes of small and large DODAC or DHP vesicles as a function of time after salt addition were obtained for a range of NaCl or amphiphile concentrations. These changes were interpreted in accordance with a phenomenological model based upon fundamental light-scattering laws and simple geometrical considerations. Short-range hydration repulsion between DODAC (or DHP) vesicles is possibly the main energy barrier for the fusion process. PMID:3779002

  3. Health belief model and reasoned action theory in predicting water saving behaviors in yazd, iran.

    PubMed

    Morowatisharifabad, Mohammad Ali; Momayyezi, Mahdieh; Ghaneian, Mohammad Taghi

    2012-01-01

People's behaviors and intentions about healthy behaviors depend on their beliefs, values, and knowledge about the issue. Various models of health education are used in determining predictors of different healthy behaviors, but their efficacy for cultural behaviors, such as water saving behaviors, has not been studied. The study was conducted to explain water saving behaviors in Yazd, Iran on the basis of the Health Belief Model and the Reasoned Action Theory. The cross-sectional study used random cluster sampling to recruit 200 heads of households to collect the data. The survey questionnaire was tested for its content validity and reliability. Analysis of data included descriptive statistics, simple correlation, and hierarchical multiple regression. Simple correlations between water saving behaviors and Reasoned Action Theory and Health Belief Model constructs were statistically significant. Health Belief Model and Reasoned Action Theory constructs explained 20.80% and 8.40% of the variance in water saving behaviors, respectively. Perceived barriers were the strongest predictor. Additionally, there was a statistically positive correlation between water saving behaviors and intention. In designing interventions aimed at water waste prevention, barriers to water saving behaviors should be addressed first, followed by people's attitude towards water saving. The Health Belief Model constructs, with the exception of perceived severity and benefits, are more powerful than the Reasoned Action Theory in predicting water saving behavior and may be used as a framework for educational interventions aimed at improving water saving behaviors.

  4. Rethinking Use of the OML Model in Electric Sail Development

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

    In 1924, Irvin Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire) the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion Limited (OML) model that is widely used today is one of these adaptions that is a convenient means of calculating sheath effects. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir-Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside of the limits of applicability.
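
For orientation, the attracted-species OML current to a cylindrical probe is commonly written I = I_th * (2/sqrt(pi)) * sqrt(1 + eV/kT). The sketch below evaluates it for made-up solar-wind-like numbers and is illustrative only; as the abstract argues, Electric Sail conditions may lie outside the OML limits of applicability.

```python
import math

def oml_cylinder_current(n, T_eV, V, radius, length):
    """Attracted-species OML electron current to a cylindrical probe (SI units).

    I = I_th * (2/sqrt(pi)) * sqrt(1 + eV/kT), valid only when the sheath is
    much larger than the probe radius and orbital motion is unimpeded.
    """
    e = 1.602e-19        # elementary charge, C
    m_e = 9.109e-31      # electron mass, kg
    area = 2 * math.pi * radius * length  # lateral surface area, m^2
    # random thermal flux current; temperature T_eV given in electron-volts
    i_th = n * e * area * math.sqrt(e * T_eV / (2 * math.pi * m_e))
    return i_th * (2 / math.sqrt(math.pi)) * math.sqrt(1 + V / T_eV)

# Illustrative (assumed) numbers: n = 5e6 m^-3, T = 10 eV, +6 kV wire bias
i_biased = oml_cylinder_current(n=5e6, T_eV=10.0, V=6000.0, radius=1e-4, length=1000.0)
i_floating = oml_cylinder_current(n=5e6, T_eV=10.0, V=0.0, radius=1e-4, length=1000.0)
```

The strong bias dependence through sqrt(1 + eV/kT) is what makes the OML form tempting for Electric Sail estimates, despite the assumptions behind it.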

  5. Health Belief Model and Reasoned Action Theory in Predicting Water Saving Behaviors in Yazd, Iran

    PubMed Central

    Morowatisharifabad, Mohammad Ali; Momayyezi, Mahdieh; Ghaneian, Mohammad Taghi

    2012-01-01

    Background: People's behaviors and intentions about healthy behaviors depend on their beliefs, values, and knowledge about the issue. Various models of health education are used in determining predictors of different healthy behaviors, but their efficacy for cultural behaviors, such as water saving behaviors, has not been studied. The study was conducted to explain water saving behaviors in Yazd, Iran on the basis of the Health Belief Model and the Reasoned Action Theory. Methods: The cross-sectional study used random cluster sampling to recruit 200 heads of households. The survey questionnaire was tested for content validity and reliability. Analysis of the data included descriptive statistics, simple correlation, and hierarchical multiple regression. Results: Simple correlations between water saving behaviors and the Reasoned Action Theory and Health Belief Model constructs were statistically significant. Health Belief Model and Reasoned Action Theory constructs explained 20.80% and 8.40% of the variance in water saving behaviors, respectively. Perceived barriers were the strongest predictor. Additionally, there was a statistically significant positive correlation between water saving behaviors and intention. Conclusion: In designing interventions aimed at water waste prevention, barriers to water saving behaviors should be addressed first, followed by people's attitudes toward water saving. The Health Belief Model, with the exception of the perceived severity and benefits constructs, is more powerful than the Reasoned Action Theory in predicting water saving behavior and may be used as a framework for educational interventions aimed at improving water saving behaviors. PMID:24688927

  6. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed

    Chen, D G; Pounds, J G

    1998-12-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin, as well as new experimental data from our laboratory for mixtures of mercury and cadmium.
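
The transform-both-sides idea can be sketched independently of the specific isobologram model (which the abstract elides as [formula: see text]): the same Box-Cox power transform is applied to both the observed and the model-predicted responses before residuals are formed. The helper below is a generic sketch, not the authors' code.

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform of a positive response y.

    (y**lam - 1)/lam for lam != 0; the lam -> 0 limit is the natural log.
    """
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def tbs_residual(y_obs, y_model, lam):
    """Transform-both-sides residual: the same transform is applied to the
    observed and the predicted response, stabilizing the variance without
    changing the underlying dose-response model."""
    return box_cox(y_obs, lam) - box_cox(y_model, lam)
```

In practice lam would be estimated jointly with the dose-response parameters by minimizing the sum of squared transformed residuals (with the appropriate Jacobian correction in a likelihood treatment).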

  7. A nonlinear isobologram model with Box-Cox transformation to both sides for chemical mixtures.

    PubMed Central

    Chen, D G; Pounds, J G

    1998-01-01

    The linear logistical isobologram is a commonly used and powerful graphical and statistical tool for analyzing the combined effects of simple chemical mixtures. In this paper a nonlinear isobologram model is proposed to analyze the joint action of chemical mixtures for quantitative dose-response relationships. This nonlinear isobologram model incorporates two additional parameters, Ymin and Ymax, to facilitate analysis of response data that are not constrained between 0 and 1, where Ymin and Ymax represent the minimal and the maximal observed toxic response. This nonlinear isobologram model for binary mixtures can be expressed as [formula: see text] In addition, a Box-Cox transformation to both sides is introduced to improve the goodness of fit and to provide a more robust model for achieving homogeneity and normality of the residuals. Finally, a confidence band is proposed for selected isobols, e.g., the median effective dose, to facilitate graphical and statistical analysis of the isobologram. The versatility of this approach is demonstrated using published data describing the toxicity of binary mixtures of citrinin and ochratoxin, as well as new experimental data from our laboratory for mixtures of mercury and cadmium. PMID:9860894

  8. Han's model parameters for microalgae grown under intermittent illumination: Determined using particle swarm optimization.

    PubMed

    Pozzobon, Victor; Perre, Patrick

    2018-01-21

    This work provides a model, and the associated set of parameters, for computing microalgae population growth under intermittent illumination. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used in those experiments are quite close to those found in a photobioreactor, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search-space; both yielded the same results. In addition, analysis of the swarm distribution reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be trustfully used to link light intensity to population growth rate. Furthermore, the set is capable of describing the effects of photodamage on population growth, hence accounting for the effect of light overexposure on algal growth.
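
As a sketch of the fitting machinery (not the authors' implementation), a minimal particle swarm optimizer looks like the following; the toy quadratic objective merely stands in for the Han-model-vs-data misfit, and all hyperparameter values are common defaults, not values from the paper.

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, and the swarm shares a global best that pulls all velocities."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[i][:]
    return gbest, gbest_val

random.seed(0)  # reproducibility of the illustration
# Toy objective with its minimum at (1, -2), standing in for the misfit
best, best_val = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                              bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Running the optimizer from differently initialized swarms and checking that they converge to the same minimum is exactly the robustness test the abstract describes.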

  9. Deleterious Mutations, Apparent Stabilizing Selection and the Maintenance of Quantitative Variation

    PubMed Central

    Kondrashov, A. S.; Turelli, M.

    1992-01-01

    Apparent stabilizing selection on a quantitative trait that is not causally connected to fitness can result from the pleiotropic effects of unconditionally deleterious mutations, because, as N. Barton noted, "... individuals with extreme values of the trait will tend to carry more deleterious alleles ...". We use a simple model to investigate the dependence of this apparent selection on the genomic deleterious mutation rate, U; the equilibrium distribution of K, the number of deleterious mutations per genome; and the parameters describing directional selection against deleterious mutations. Unlike previous analyses, we allow for epistatic selection against deleterious alleles. For various selection functions and realistic parameter values, the distribution of K, the distribution of breeding values for a pleiotropically affected trait, and the apparent stabilizing selection function are all nearly Gaussian. The additive genetic variance for the quantitative trait is kQa², where k is the average number of deleterious mutations per genome, Q is the proportion of deleterious mutations that affect the trait, and a² is the variance of pleiotropic effects for individual mutations that do affect the trait. In contrast, when the trait is measured in units of its additive standard deviation, the apparent fitness function is essentially independent of Q and a²; and β, the intensity of selection, measured as the ratio of additive genetic variance to the "variance" of the fitness curve, is very close to s = U/k, the selection coefficient against individual deleterious mutations at equilibrium. Therefore, this model predicts appreciable apparent stabilizing selection if s exceeds about 0.03, which is consistent with various data. However, the model also predicts that β must equal Vm/VG, the ratio of new additive variance for the trait introduced each generation by mutation to the standing additive variance. Most, although not all, estimates of this ratio imply apparent stabilizing selection weaker than generally observed. A qualitative argument suggests that even when direct selection is responsible for most of the selection observed on a character, it may be essentially irrelevant to the maintenance of variation for the character by mutation-selection balance. Simple experiments can indicate the fraction of observed stabilizing selection attributable to the pleiotropic effects of deleterious mutations. PMID:1427047
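
The key quantities in the abstract can be checked with simple arithmetic; the parameter values below are invented for illustration, not estimates from the paper.

```python
# Assumed illustrative numbers (not from the paper): genomic deleterious
# mutation rate U, mean mutations per genome k, fraction Q of mutations
# affecting the trait, and per-mutation effect variance a2.
U, k, Q, a2 = 1.0, 25.0, 0.05, 0.1

V_A = k * Q * a2   # additive genetic variance: V_A = k * Q * a^2
s = U / k          # equilibrium selection coefficient per mutation: s = U/k
beta = s           # intensity of apparent stabilizing selection is ~ s
# With these numbers s = 0.04, above the ~0.03 threshold the model cites
# for appreciable apparent stabilizing selection.
```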

  10. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891

  11. Fun with maths: exploring implications of mathematical models for malaria eradication.

    PubMed

    Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A

    2014-12-11

    Mathematical analyses and modelling have an important role in informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and the benefits of using a diversity of modelling approaches. These examples include the time to eradication; the impact of vaccine efficacy and coverage; drug programs and the effects of infection duration and treatment delays; and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still yield key insights for eradication strategy and define areas for investigation by more complex models.
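
As an example of how far a deliberately simple structure can go, a one-line eradication-time estimate assumes control drives prevalence into exponential decline; the decline rate and thresholds below are invented for illustration and are not from the paper.

```python
import math

def time_to_elimination(i0, i_end, r):
    """Time for prevalence to fall from i0 to i_end under exponential
    decline at rate r per year: t = ln(i0 / i_end) / r.

    This is the crudest possible model; it ignores seasonality, migration,
    and treatment delays, exactly the assumptions the note says must be
    tested against richer models."""
    return math.log(i0 / i_end) / r

# Assumed numbers: 40% prevalence, elimination threshold 1e-6, r = 0.5/yr
t_years = time_to_elimination(i0=0.40, i_end=1e-6, r=0.5)
```

Halving the assumed decline rate doubles the predicted time, which illustrates how sensitive even the simplest eradication-time answer is to a single structural assumption.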

  12. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

    Simple cells in the primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model incorporating non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates a simple cell's physiological structure by taking into account the facilitation and suppression effects of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  13. Corresponding-states behavior of an ionic model fluid with variable dispersion interactions

    NASA Astrophysics Data System (ADS)

    Weiss, Volker C.

    2016-06-01

    Guggenheim's corresponding-states approach for simple fluids leads to a remarkably universal representation of their thermophysical properties. For more complex fluids, such as polar or ionic ones, deviations from this type of behavior are to be expected, thereby supplying us with valuable information about the thermodynamic consequences of the interaction details in fluids. Here, the gradual transition of a simple fluid to an ionic one is studied by varying the relative strength of the dispersion interactions compared to the electrostatic interactions among the charged particles. In addition to the effects on the reduced surface tension that were reported earlier [F. Leroy and V. C. Weiss, J. Chem. Phys. 134, 094703 (2011)], we address the shape of the coexistence curve and focus on properties that are related to and derived from the vapor pressure. These quantities include the enthalpy and entropy of vaporization, the boiling point, and the critical compressibility factor Zc. For all of these properties, the crossover from simple to characteristically ionic fluid is seen once the dispersive attraction drops below 20%-40% of the electrostatic attraction (as measured for two particles at contact). Below this threshold, ionic fluids display characteristically low values of Zc as well as large Guggenheim and Guldberg ratios for the reduced enthalpy of vaporization and the reduced boiling point, respectively. The coexistence curves are wider and more skewed than those for simple fluids. The results for the ionic model fluid with variable dispersion interactions improve our understanding of the behavior of real ionic fluids, such as inorganic molten salts and room temperature ionic liquids, by gauging the importance of different types of interactions for thermodynamic properties.

  14. Symmetrical treatment of "Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition", for major depressive disorders.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2016-01-01

    We previously presented a group-theoretical model that describes psychiatric patient states or clinical data in a graded vector-like format based on modulo groups. Meanwhile, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5, the current version) is frequently used for diagnosis in daily psychiatric practice and in biological research. The diagnostic criteria of DSM-5 contain simple binary items relating to the presence or absence of specific symptoms. In spite of this simple form, the practical structure of the DSM-5 system is not sufficiently systematized for data to be treated in a rationally sophisticated way. Viewing disease states in terms of symmetry, in the manner of abstract algebra, is considered important for the future systematization of clinical medicine. We provide a simple idea for the practical treatment of the psychiatric diagnoses/scores of DSM-5, using depressive symptoms, in line with our previously proposed method. An expression is given employing modulo-2 and modulo-7 arithmetic (in particular, additive group theory) for Criterion A of a 'major depressive episode', which must be met for the diagnosis of 'major depressive disorder' in DSM-5. For this purpose, the novel concept of an imaginary value 0, which can be recognized as an explicit 0 or an implicit 0, was introduced into the model. The zeros allow the incorporation or deletion of an item between any other symptoms if they are ordered appropriately. Optionally, a vector-like expression can be used to rate/select only specific items when modifying the criterion/scale. Simple examples are illustrated concretely. Further development of the proposed method for the criteria/scales of diseases is expected to raise the level of formalism of clinical medicine to that of other fields of natural science.
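
As a sketch of the binary (modulo-2) encoding idea, Criterion A's nine present/absent symptom items can be held in a 0/1 vector. The helper below is illustrative only, not clinical software; the at-least-5-of-9 rule, including depressed mood or loss of interest, follows the DSM-5 criterion the abstract references, while the vector layout is an assumption for this example.

```python
def criterion_a_met(symptoms):
    """Criterion A check over a modulo-2 (0/1) symptom vector of length 9.

    Index 0 is depressed mood and index 1 is loss of interest/pleasure;
    at least five symptoms must be present, including one of those two.
    Sketch only; real diagnosis involves the remaining DSM-5 criteria."""
    assert len(symptoms) == 9 and all(s in (0, 1) for s in symptoms)
    return sum(symptoms) >= 5 and (symptoms[0] == 1 or symptoms[1] == 1)

# Five symptoms present, including depressed mood -> criterion met
met = criterion_a_met([1, 0, 1, 1, 0, 1, 1, 0, 0])
```

The vector form also makes the paper's "implicit zero" idea concrete: inserting a 0 at a new index leaves all modulo-2 sums, and hence the diagnosis, unchanged.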

  15. Corresponding-states behavior of an ionic model fluid with variable dispersion interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Volker C., E-mail: volker.weiss@bccms.uni-bremen.de

    2016-06-21

    Guggenheim's corresponding-states approach for simple fluids leads to a remarkably universal representation of their thermophysical properties. For more complex fluids, such as polar or ionic ones, deviations from this type of behavior are to be expected, thereby supplying us with valuable information about the thermodynamic consequences of the interaction details in fluids. Here, the gradual transition of a simple fluid to an ionic one is studied by varying the relative strength of the dispersion interactions compared to the electrostatic interactions among the charged particles. In addition to the effects on the reduced surface tension that were reported earlier [F. Leroy and V. C. Weiss, J. Chem. Phys. 134, 094703 (2011)], we address the shape of the coexistence curve and focus on properties that are related to and derived from the vapor pressure. These quantities include the enthalpy and entropy of vaporization, the boiling point, and the critical compressibility factor Zc. For all of these properties, the crossover from simple to characteristically ionic fluid is seen once the dispersive attraction drops below 20%-40% of the electrostatic attraction (as measured for two particles at contact). Below this threshold, ionic fluids display characteristically low values of Zc as well as large Guggenheim and Guldberg ratios for the reduced enthalpy of vaporization and the reduced boiling point, respectively. The coexistence curves are wider and more skewed than those for simple fluids. The results for the ionic model fluid with variable dispersion interactions improve our understanding of the behavior of real ionic fluids, such as inorganic molten salts and room temperature ionic liquids, by gauging the importance of different types of interactions for thermodynamic properties.

  16. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956
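
The first rung of the hierarchy described above, a simple sum over atoms, can be sketched as an ordinary least-squares fit of per-element energies. The molecules, counts, and energies below are made-up toy data (chosen to be exactly additive), not the paper's dataset.

```python
def fit_atom_energies(counts, energies):
    """Fit E(molecule) ~= sum_i n_i * e_i by least squares.

    counts: list of atom-count vectors (one per molecule);
    energies: list of total energies. Solves the normal equations
    (A^T A) e = A^T y by Gauss-Jordan elimination; no pivoting, which is
    fine for this small, well-conditioned toy system."""
    n = len(counts[0])
    ata = [[sum(c[i] * c[j] for c in counts) for j in range(n)] for i in range(n)]
    aty = [sum(c[i] * y for c, y in zip(counts, energies)) for i in range(n)]
    for col in range(n):
        piv = ata[col][col]
        for j in range(col, n):
            ata[col][j] /= piv
        aty[col] /= piv
        for row in range(n):
            if row != col and ata[row][col]:
                f = ata[row][col]
                for j in range(col, n):
                    ata[row][j] -= f * ata[col][j]
                aty[row] -= f * aty[col]
    return aty

# Toy data: columns are (C, H) atom counts for CH4, C2H6, C3H8, with
# energies built from assumed per-atom values 10.0 (C) and 2.0 (H).
counts = [[1, 4], [2, 6], [3, 8]]
energies = [1 * 10.0 + 4 * 2.0, 2 * 10.0 + 6 * 2.0, 3 * 10.0 + 8 * 2.0]
e_per_atom = fit_atom_energies(counts, energies)
```

Real atomization energies are, of course, not exactly additive; the residual of this baseline is what the bond-energy, force-field, and machine-learning rungs of the hierarchy successively capture.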

  17. Device reflectivity as a simple rule for predicting the suitability of scattering foils for improved OLED light extraction

    NASA Astrophysics Data System (ADS)

    Levell, Jack W.; Harkema, Stephan; Pendyala, Raghu K.; Rensing, Peter A.; Senes, Alessia; Bollen, Dirk; MacKerron, Duncan; Wilson, Joanne S.

    2013-09-01

    A general challenge in Organic Light Emitting Diodes (OLEDs) is to extract light efficiently from waveguided modes within the device structure. This can be accomplished by applying an additional scattering layer to the substrate, which yields outcoupling gains in external quantum efficiency ranging from 0% to nearly 100%. In this work, we aim to address this large variation and show that the reflectivity of the OLED is a simple and useful predictor of the efficiency of substrate scattering techniques, without the need for detailed modeling. We show that by optimizing the cathode and anode structure of glass-based OLEDs, using silver and an ITO-free, highly conductive Agfa Orgacon™ PEDOT:PSS, we are able to increase the external quantum efficiency of OLEDs with the same outcoupling substrates from 2.4% to 5.6%, an increase of 130%. In addition, Holst Centre and partners are developing flexible substrates with integrated light extraction features, together with roll-to-roll compatible processing techniques, to enable this next step in OLED development for both lighting and display applications. These devices show promise because they are shatterproof and facilitate low-cost manufacture.

  18. A detailed comparison of optimality and simplicity in perceptual decision-making

    PubMed Central

    Shen, Shan; Ma, Wei Ji

    2017-01-01

    Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259
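
The trial-to-trial comparison idea, measuring how far one model's predictions sit from another's via Kullback-Leibler divergence, can be sketched for binary choices; the probabilities below are invented for illustration and are not from the study.

```python
import math

def mean_bernoulli_kl(p_list, q_list):
    """Mean KL divergence, over trials, from Bernoulli predictions p to q.

    For each trial, KL(p || q) = p*ln(p/q) + (1-p)*ln((1-p)/(1-q)).
    A value near zero means the two models make nearly indistinguishable
    trial-to-trial predictions, as reported for the optimal vs. the
    complex suboptimal rules."""
    kl = 0.0
    for p, q in zip(p_list, q_list):
        kl += p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
    return kl / len(p_list)

# Identical predictions give exactly zero divergence
d_same = mean_bernoulli_kl([0.7, 0.6, 0.9], [0.7, 0.6, 0.9])
# A disagreeing model accumulates positive divergence
d_diff = mean_bernoulli_kl([0.7, 0.6, 0.9], [0.5, 0.5, 0.5])
```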

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podestà, M., E-mail: mpodesta@pppl.gov; Gorelenkova, M.; Fredrickson, E. D.

    Integrated simulations of tokamak discharges typically rely on classical physics to model energetic particle (EP) dynamics. However, there are numerous cases in which energetic particles can suffer additional transport that is not classical in nature. Examples include transport by applied 3D magnetic perturbations and, more notably, by plasma instabilities. Focusing on the effects of instabilities, ad-hoc models can empirically reproduce increased transport, but the choice of transport coefficients is usually somehow arbitrary. New approaches based on physics-based reduced models are being developed to address those issues in a simplified way, while retaining a more correct treatment of resonant wave-particle interactions. The kick model implemented in the tokamak transport code TRANSP is an example of such reduced models. It includes modifications of the EP distribution by instabilities in real and velocity space, retaining correlations between transport in energy and space typical of resonant EP transport. The relevance of EP phase space modifications by instabilities is first discussed in terms of predicted fast ion distribution. Results are compared with those from a simple, ad-hoc diffusive model. It is then shown that the phase-space resolved model can also provide additional insight into important issues such as internal consistency of the simulations and mode stability through the analysis of the power exchanged between energetic particles and the instabilities.

  20. Two Simple Models for Fracking

    NASA Astrophysics Data System (ADS)

    Norris, Jaren Quinn

    Recent developments in fracking have enabled the recovery of oil and gas from tight shale reservoirs. These developments have also made fracking one of the most controversial environmental issues in the United States. Despite the growing controversy surrounding fracking, there is relatively little publicly available research. This dissertation introduces two simple models for fracking that were developed using techniques from nonlinear and statistical physics. The first model assumes that the volume of induced fractures must equal the volume of injected fluid. For simplicity, these fractures are assumed to form a spherically symmetric damage region around the borehole. The predicted volumes of water necessary to create a damage region of a given radius are in good agreement with reported values. The second model is a modification of invasion percolation, which was previously introduced to model water flooding. The reservoir rock is represented by a regular lattice of local traps that contain oil and/or gas, separated by rock barriers. The barriers are assumed to be highly heterogeneous and are assigned random strengths. Fluid is injected from a central site, and the weakest rock barrier breaks, allowing fluid to flow into the adjacent site. The process repeats, with the weakest barrier breaking and fluid flowing to an adjacent site at each time step. Extensive numerical simulations were carried out to obtain statistical properties of the growing fracture network. The network was found to be fractal, with fractal dimensions differing slightly from the accepted values for traditional percolation. Additionally, the network follows Horton-Strahler and Tokunaga branching statistics, which have been used to characterize river networks. As with other percolation models, the growth of the network occurs in bursts. These bursts follow a power-law size distribution similar to observed microseismic events.
Reservoir stress anisotropy is incorporated into the model by assigning horizontal bonds weaker strengths on average than vertical bonds. Numerical simulations show that increasing bond strength anisotropy tends to reduce the fractal dimension of the growing fracture network, and decrease the power-law slope of the burst size distribution. Although simple, these two models are useful for making informed decisions about fracking.
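
The invasion-percolation growth rule described above can be sketched in a few lines: every barrier gets a random strength, and each step breaks the weakest barrier on the boundary of the invaded cluster (anisotropy would simply bias horizontal versus vertical strengths). Lattice size, step count, and the uniform strength distribution below are arbitrary illustrative choices, not the dissertation's parameters.

```python
import heapq
import random

def invade(n_sites, half_size=50, seed=1):
    """Grow an invasion-percolation cluster of n_sites from a central site.

    frontier is a min-heap of (barrier strength, site); popping it always
    breaks the weakest boundary barrier, which is the model's growth rule."""
    random.seed(seed)
    invaded = {(0, 0)}
    frontier = []

    def push_neighbors(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if abs(nx) <= half_size and abs(ny) <= half_size and (nx, ny) not in invaded:
                heapq.heappush(frontier, (random.random(), (nx, ny)))

    push_neighbors(0, 0)
    while len(invaded) < n_sites and frontier:
        _, site = heapq.heappop(frontier)
        if site in invaded:  # stale heap entry from an earlier push
            continue
        invaded.add(site)
        push_neighbors(*site)
    return invaded

cluster = invade(200)
```

Recording the strength of each popped barrier over time would reveal the bursty growth the dissertation analyzes: runs of easy (weak-barrier) invasions separated by jumps to stronger barriers.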
