Sample records for simple two-level model

  1. A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS

    EPA Science Inventory

    We have produced a simple two-dimensional (ground-plan) cellular automaton model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...

  2. Pulsed Rabi oscillations in quantum two-level systems: beyond the area theorem

    NASA Astrophysics Data System (ADS)

    Fischer, Kevin A.; Hanschke, Lukas; Kremser, Malte; Finley, Jonathan J.; Müller, Kai; Vučković, Jelena

    2018-01-01

    The area theorem states that when a short optical pulse drives a quantum two-level system, it undergoes Rabi oscillations in the probability of scattering a single photon. In this work, we investigate the breakdown of the area theorem both as the pulse length becomes non-negligible and for certain pulse areas. Using simple quantum trajectories, we provide an analytic approximation to the photon emission dynamics of a two-level system. Our model provides an intuitive way to understand re-excitation, which elucidates the mechanism behind the two-photon emission events that can spoil single-photon emission. We experimentally measure the emission statistics from a semiconductor quantum dot, acting as a two-level system, and show good agreement with our simple model for short pulses. Additionally, the model clearly explains our recent results (Fischer, Hanschke et al 2017 Nat. Phys.) showing dominant two-photon emission from a two-level system for pulses with interaction areas equal to an even multiple of π.
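
    The area-theorem limit that this abstract starts from can be checked numerically. The sketch below (not the authors' quantum-trajectory model) integrates the Schrödinger equation for a resonantly driven two-level system under a square pulse of area Θ and recovers P_excited = sin²(Θ/2); all parameter values are invented for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-level system driven by a resonant square pulse (rotating frame, RWA).
    # Area-theorem limit: P_excited after the pulse = sin^2(area / 2).
    def excited_population(area, t_pulse=1.0):
        omega = area / t_pulse                   # constant Rabi frequency
        def rhs(t, c):
            cg, ce = c[0] + 1j * c[1], c[2] + 1j * c[3]
            dcg = -1j * (omega / 2) * ce         # i dc/dt = H c with H = (Omega/2) sigma_x
            dce = -1j * (omega / 2) * cg
            return [dcg.real, dcg.imag, dce.real, dce.imag]
        sol = solve_ivp(rhs, (0, t_pulse), [1.0, 0.0, 0.0, 0.0], rtol=1e-8)
        return sol.y[2, -1] ** 2 + sol.y[3, -1] ** 2

    for area in (np.pi, 2 * np.pi, 4 * np.pi):
        print(f"area = {area / np.pi:.0f} pi -> P_excited = {excited_population(area):.3f}")
    ```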

  3. Landau-Zener transitions and Dykhne formula in a simple continuum model

    NASA Astrophysics Data System (ADS)

    Dunham, Yujin; Garmon, Savannah

    The Landau-Zener model describing the interaction between two linearly driven discrete levels is useful in describing many simple dynamical systems; however, no system is completely isolated from the surrounding environment. Here we examine generalizations of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state evolves independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. Finally, we observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.
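
    For reference, the Landau-Zener survival probability invoked here has the closed form P = exp(-2πΔ²/(ħv)) for coupling Δ and sweep rate v of the level separation; a minimal sketch with made-up values:

    ```python
    import numpy as np

    def lz_survival(delta, v, hbar=1.0):
        """Landau-Zener probability of remaining in the initial diabatic state,
        for coupling delta and sweep rate v of the diabatic level separation."""
        return np.exp(-2.0 * np.pi * delta**2 / (hbar * v))

    # Slow sweep -> near-adiabatic (small survival); fast sweep -> large survival.
    for v in (0.1, 1.0, 10.0):
        print(f"v = {v:5.1f} -> P_survival = {lz_survival(delta=0.5, v=v):.4f}")
    ```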

  4. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
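
    The "simple averaging weighted by the aggregate score" step is easy to picture in code. The sketch below uses invented scores and sea-level values, not the paper's 625-member ensemble output:

    ```python
    import numpy as np

    # Hypothetical ensemble: each run has an equivalent sea-level-rise value and a
    # misfit-based aggregate score (higher score = better model-data agreement).
    rng = np.random.default_rng(0)
    esl = rng.normal(3.0, 1.0, size=625)       # metres, invented
    score = rng.uniform(0.0, 1.0, size=625)    # invented aggregate scores

    weights = score / score.sum()
    mean = np.sum(weights * esl)               # score-weighted ensemble mean
    var = np.sum(weights * (esl - mean) ** 2)  # score-weighted spread
    print(f"score-weighted ESL = {mean:.2f} +/- {np.sqrt(var):.2f} m")
    ```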

  5. Modelling a Simple Mechanical System.

    ERIC Educational Resources Information Center

    Morland, Tim

    1999-01-01

    Provides an example of the modeling power of mathematics, demonstrated in a piece of A-Level student coursework undertaken as part of the MEI Structured Mathematics scheme. A system of two masses and two springs oscillating in one dimension is found to be accurately modeled by a system of linear differential equations. (Author/ASK)
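
    The coursework system is governed by the linear ODEs m1·x1'' = -k1·x1 + k2·(x2 - x1) and m2·x2'' = -k2·(x2 - x1); a minimal sketch with arbitrary masses and spring constants:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two masses and two springs oscillating in one dimension (arbitrary constants):
    # m1*x1'' = -k1*x1 + k2*(x2 - x1),  m2*x2'' = -k2*(x2 - x1)
    m1, m2, k1, k2 = 1.0, 1.0, 4.0, 1.0

    def rhs(t, y):
        x1, v1, x2, v2 = y
        return [v1, (-k1 * x1 + k2 * (x2 - x1)) / m1,
                v2, (-k2 * (x2 - x1)) / m2]

    sol = solve_ivp(rhs, (0, 20), [1.0, 0.0, 0.0, 0.0])
    print(sol.y[0, -1], sol.y[2, -1])   # positions of the two masses at t = 20
    ```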

  6. Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    2010-01-01

    A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…

  7. Inference of mantle viscosity for depth resolutions of GIA observations

    NASA Astrophysics Data System (ADS)

    Nakada, Masao; Okuno, Jun'ichi

    2016-11-01

    Inference of the mantle viscosity from observations of the glacial isostatic adjustment (GIA) process has usually been conducted through analyses based on a simple three-layer viscosity model characterized by lithospheric thickness and upper- and lower-mantle viscosities. Here, we examine the viscosity structures for the simple three-layer viscosity model and also for a two-layer lower-mantle viscosity model defined by viscosities η670,D (670-D km depth) and ηD,2891 (D-2891 km depth) with D-values of 1191, 1691 and 2191 km. The upper-mantle rheological parameters for the two-layer lower-mantle viscosity model are the same as those for the simple three-layer one. For the simple three-layer viscosity model, a rate of change of the degree-two zonal harmonic of the geopotential due to the GIA process (GIA-induced J̇2) of -(6.0-6.5) × 10^-11 yr^-1 provides two permissible viscosity solutions for the lower mantle, (7-20) × 10^21 and (5-9) × 10^22 Pa s, and analyses with observational constraints from the J̇2 and Last Glacial Maximum (LGM) sea levels at Barbados and Bonaparte Gulf indicate (5-9) × 10^22 Pa s for the lower mantle. However, the analyses of the J̇2 based on the two-layer lower-mantle viscosity model only require a viscosity higher than (5-10) × 10^21 Pa s for a layer above the core-mantle boundary (CMB), in which the value of (5-10) × 10^21 Pa s corresponds to the solution of (7-20) × 10^21 Pa s for the simple three-layer one. Moreover, the analyses with the J̇2 and LGM sea-level constraints for the two-layer lower-mantle viscosity model indicate two viscosity solutions: η670,1191 > 3 × 10^21 and η1191,2891 ~ (5-10) × 10^22 Pa s, and η670,1691 > 10^22 and η1691,2891 ~ (5-10) × 10^22 Pa s. The inferred upper-mantle viscosity for such solutions is (1-4) × 10^20 Pa s, similar to the estimate for the simple three-layer viscosity model. That is, these analyses require a high-viscosity layer of (5-10) × 10^22 Pa s at least in the deep mantle, and suggest that the GIA-based lower-mantle viscosity structure should be treated carefully in discussing mantle dynamics related to the viscosity jump at ~670 km depth. We also preliminarily put additional constraints on these viscosity solutions by examining typical relative sea-level (RSL) changes used to infer the lower-mantle viscosity. The viscosity solution inferred from far-field RSL changes in the Australian region is consistent with those for the J̇2 and LGM sea levels, and analyses of RSL changes at Southport and Bermuda, in the intermediate region for the North American ice sheets, suggest the solution η670,D > 10^22, ηD,2891 ~ (5-10) × 10^22 Pa s (D = 1191 or 1691 km) and an upper-mantle viscosity higher than 6 × 10^20 Pa s.

  8. Mathematical model for steady state, simple ampholyte isoelectric focusing: Development, computer simulation and implementation

    NASA Technical Reports Server (NTRS)

    Palusinski, O. A.; Allgyer, T. T.

    1979-01-01

    The elimination of Ampholine from the system by establishing the pH gradient with simple ampholytes is proposed. A mathematical model was exercised at the level of the two-component system by using values for mobilities, diffusion coefficients, and dissociation constants representative of glutamic acid and histidine. The constants assumed in the calculations are reported. The predictions of the model and the computer simulation of isoelectric focusing experiments are of direct importance for obtaining Ampholine-free, stable pH gradients.

  9. Mixing coarse-grained and fine-grained water in molecular dynamics simulations of a single system.

    PubMed

    Riniker, Sereina; van Gunsteren, Wilfred F

    2012-07-28

    The use of a supra-molecular coarse-grained (CG) model for liquid water as solvent in molecular dynamics simulations of biomolecules represented at the fine-grained (FG) atomic level of modelling may reduce the computational effort by one or two orders of magnitude. However, even if the pure FG model and the pure CG model represent the properties of the particular substance of interest rather well, their application in a hybrid FG/CG system containing varying ratios of FG versus CG particles is highly non-trivial, because it requires an appropriate balance between FG-FG, FG-CG, and CG-CG energies, and FG and CG entropies. Here, the properties of liquid water are used to calibrate the FG-CG interactions for the simple-point-charge water model at the FG level and a recently proposed supra-molecular water model at the CG level that represents five water molecules by one CG bead containing two interaction sites. Only two parameters are needed to reproduce different thermodynamic and dielectric properties of liquid water at physiological temperature and pressure for various mole fractions of CG water in FG water. The parametrisation strategy for the FG-CG interactions is simple and can be easily transferred to interactions between atomistic biomolecules and CG water.

  10. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  11. Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.

    PubMed

    Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J

    2015-06-01

    There are ongoing discussions about the appropriate level of complexity and sources of uncertainty in rainfall-runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover-dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity is also dependent on data availability. Here, we use PERSiST, a simple, semi-distributed dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for parsimonious model structures. The results show that structural uncertainty is more important than parameter uncertainty. The ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. The simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and water balance components needed for nutrient flux modelling in large, data-poor basins.
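
    The model comparison rests on the Bayesian Information Criterion, BIC = k·ln(n) - 2·ln(L̂). A hedged sketch under an iid Gaussian error assumption, with invented flows rather than PERSiST output:

    ```python
    import numpy as np

    def bic(observed, simulated, n_params):
        """BIC under an iid Gaussian error model: k*ln(n) - 2*ln(Lhat)."""
        resid = observed - simulated
        n = len(resid)
        sigma2 = np.mean(resid ** 2)                  # MLE of the error variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        return n_params * np.log(n) - 2 * loglik

    obs = np.array([10.0, 12.0, 9.5, 14.0, 11.0])     # invented daily flows
    sim_a = obs + np.random.default_rng(1).normal(0, 1.0, 5)   # "simple" model A
    sim_b = obs + np.random.default_rng(2).normal(0, 0.5, 5)   # "complex" model B
    print(bic(obs, sim_a, n_params=4), bic(obs, sim_b, n_params=12))
    ```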

  12. Simple Spectral Lines Data Model Version 1.0

    NASA Astrophysics Data System (ADS)

    Osuna, Pedro; Salgado, Jesus; Guainazzi, Matteo; Dubernet, Marie-Lise; Roueff, Evelyne

    2010-12-01

    This document presents a Data Model to describe Spectral Line Transitions in the context of the Simple Line Access Protocol defined by the IVOA (cf. Ref [13], IVOA Simple Line Access Protocol). The main objective of the model is to integrate with and support the Simple Line Access Protocol, with which it forms a compact unit. This integration allows seamless access to Spectral Line Transitions available worldwide in the VO context. This model does not provide a complete description of Atomic and Molecular Physics, whose scope is outside that of this document. In the astrophysical sense, a line is considered as the result of a transition between two energy levels. On the basis of this assumption, a whole set of objects and attributes has been derived to properly define the necessary information to describe lines appearing in astrophysical contexts. The document has been written taking into account available information from many different Line data providers (see the acknowledgments section).

  13. One- and two-channel Kondo model with logarithmic Van Hove singularity: A numerical renormalization group solution

    NASA Astrophysics Data System (ADS)

    Zhuravlev, A. K.; Anokhin, A. O.; Irkhin, V. Yu.

    2018-02-01

    A simple scaling consideration and an NRG solution of the one- and two-channel Kondo model in the presence of a logarithmic Van Hove singularity at the Fermi level are given. The temperature dependences of the local and impurity magnetic susceptibility and of the impurity entropy are calculated. The low-temperature behavior of the impurity susceptibility and impurity entropy turns out to be non-universal in the Kondo sense and independent of the s-d coupling J. The resonant level model solution in the strong coupling regime confirms the NRG results. In the two-channel case the local susceptibility demonstrates a non-Fermi-liquid power-law behavior.

  14. [Comparison of simple pooling and bivariate model used in meta-analyses of diagnostic test accuracy published in Chinese journals].

    PubMed

    Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan

    2015-06-18

    To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and the fourfold-table data were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I², was explored. In total, 55 systematic reviews containing 58 DTA meta-analyses were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences were found as the I² of sensitivity or specificity became larger, especially when I²>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. The Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors believe it can implement a random-effect model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance is, the more likely simple pooling is to show a larger deviation. It is necessary to increase the level of knowledge of statistical methods and software for meta-analyses of DTA data.
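
    To make the contrast concrete: simple pooling collapses the per-study fourfold tables into one and computes a single sensitivity and specificity, ignoring the between-study correlation that the bivariate model captures. A sketch with invented fourfold tables:

    ```python
    import numpy as np

    # Invented fourfold tables: rows are studies, columns are (TP, FP, FN, TN).
    tables = np.array([[90, 10, 10, 90],
                       [40, 20,  5, 60],
                       [70,  5, 30, 80]])

    tp, fp, fn, tn = tables.T
    # Simple pooling: collapse all studies into one table, then compute Se and Sp.
    se_pooled = tp.sum() / (tp.sum() + fn.sum())
    sp_pooled = tn.sum() / (tn.sum() + fp.sum())
    print(f"simply pooled Se = {se_pooled:.3f}, Sp = {sp_pooled:.3f}")
    # A bivariate model would instead fit a random-effects logistic model to the
    # per-study (logit Se, logit Sp) pairs, preserving their correlation.
    ```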

  15. Liquid-liquid critical point in a simple analytical model of water.

    PubMed

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, on a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is presented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, for the liquid-gas transition and for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: in one part we have a gas region, in the second a high-density liquid, and in the third a low-density liquid.

  16. Liquid-liquid critical point in a simple analytical model of water

    NASA Astrophysics Data System (ADS)

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, on a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is presented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, for the liquid-gas transition and for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: in one part we have a gas region, in the second a high-density liquid, and in the third a low-density liquid.

  17. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Brasunas, J.; Mamoutkine, A.; Gorius, N.

    2016-01-01

    Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.
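
    The generic two-reference (hot/cold) vector calibration underlying such work can be sketched as below; the linear gain/offset identity is standard radiometric practice, while the paper's actual contribution, amplitude and phase adjustments for targets recorded at differing temperatures, is not reproduced here. All arrays are invented.

    ```python
    import numpy as np

    def calibrate(c_sci, c_hot, c_cold, b_hot, b_cold):
        """Generic two-reference vector calibration of a complex raw spectrum.
        c_* are complex uncalibrated spectra; b_* are the known radiances of the
        hot and cold calibration targets at each wavenumber."""
        gain = (c_hot - c_cold) / (b_hot - b_cold)    # complex instrument gain
        offset = c_cold - gain * b_cold               # instrument self-emission term
        return ((c_sci - offset) / gain).real         # calibrated science radiance

    b_hot, b_cold = np.full(5, 10.0), np.full(5, 2.0)  # invented target radiances
    c_hot = (3 + 1j) * b_hot + (1 + 0.5j)              # synthetic raw spectra
    c_cold = (3 + 1j) * b_cold + (1 + 0.5j)
    c_sci = (3 + 1j) * 6.0 + (1 + 0.5j)
    print(calibrate(c_sci, c_hot, c_cold, b_hot, b_cold))  # recovers ~6.0
    ```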

  19. Factor Analysis for Clustered Observations.

    ERIC Educational Resources Information Center

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)

  20. A Positive Stigma for Child Labor?

    ERIC Educational Resources Information Center

    Patrinos, Harry Anthony; Shafiq, M. Najeeb

    2008-01-01

    We introduce a simple empirical model that assumes a positive stigma (or norm) towards child labor that is common in some developing countries. We then illustrate our positive stigma model using data from Guatemala. Controlling for several child- and household-level characteristics, we use two instruments for measuring stigma: a child's indigenous…

  1. Effects of host social hierarchy on disease persistence.

    PubMed

    Davidson, Ross S; Marion, Glenn; Hutchings, Michael R

    2008-08-07

    The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates and movement rates among different sets of individuals, the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous-mixing differential equation model of a disease with SI dynamics in a population subject to simple birth and death processes is presented, and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions, correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model in order to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities which could be used to determine the importance of hierarchical corrections. Overall, hierarchical effects decrease the levels of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low-disease-prevalence behaviour of a model is critical.
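
    The homogeneous-mixing baseline the authors compare against is a standard SI model with simple birth and death processes; a minimal sketch with invented rates:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Homogeneous-mixing SI model with simple birth and death processes
    # (invented rates; the hierarchical model adds level-dependent corrections).
    b, d, beta = 0.5, 0.4, 0.8   # birth, death, transmission rates

    def rhs(t, y):
        S, I = y
        N = S + I
        return [b * N - beta * S * I / N - d * S,
                beta * S * I / N - d * I]

    sol = solve_ivp(rhs, (0, 100), [99.0, 1.0])
    frac = sol.y[1, -1] / (sol.y[0, -1] + sol.y[1, -1])
    print(f"endemic infected fraction ~ {frac:.2f}")   # -> 1 - b/beta here
    ```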

  2. Generalized Tavis-Cummings models and quantum networks

    NASA Astrophysics Data System (ADS)

    Gorokhov, A. V.

    2018-04-01

    The properties of quantum networks based on generalized Tavis-Cummings models are theoretically investigated. We have calculated the information transfer success rate from one node to another in a simple model of a quantum network realized with two-level atoms placed in cavities and interacting with an external laser field and cavity photons. The method of the dynamical group of the Hamiltonian and the technique of the corresponding coherent states were used to investigate the temporal dynamics of the two-node model.

  3. Manual lateralization in macaques: handedness, target laterality and task complexity.

    PubMed

    Regaiolli, Barbara; Spiezio, Caterina; Vallortigara, Giorgio

    2016-01-01

    Non-human primates serve as models for understanding the evolution of handedness in humans. Although a number of studies have investigated handedness in non-human primates, few have examined the relationship between target position, hand preference and task complexity. This study aimed at investigating macaque handedness in relation to target laterality and tastiness, as well as task complexity. Seven pig-tailed macaques (Macaca nemestrina) were involved in three different "two alternative choice" tests: one low-level task and two high-level tasks (HLTs). During the first and the third tests macaques could select a preferred food and a non-preferred food, whereas in the second test the design was modified so that the two alternatives presented in each trial did not differ. Furthermore, a simple-reaching test was administered to assess hand preference in a social context. Macaques showed hand preference at the individual level both in simple and complex tasks, but not in the simple-reaching test. Moreover, target position seemed to affect hand preference in retrieving an object in the low-level task, but not in the HLTs. Additionally, individual hand preference seemed to be affected by the tastiness of the item to be retrieved. The results suggest that both target laterality and individual motivation might influence the hand preference of macaques, especially in simple tasks.

  4. A simple generative model of collective online behavior.

    PubMed

    Gleeson, James P; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A; Reed-Tsochas, Felix

    2014-07-22

    Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates--even when using purely observational data without experimental design--that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior.
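
    A toy version of the two mechanisms, weighting recent adoptions against cumulative popularity when users choose an application, can be simulated directly; the weight, window and sizes below are invented, not the paper's fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_apps, steps, window = 5, 20000, 200
    recent = np.ones(n_apps)      # adoptions inside a sliding "recent" window
    total = np.ones(n_apps)       # cumulative adoptions
    history = []

    theta = 0.9                   # weight on recent popularity (invented)
    for t in range(steps):
        p = theta * recent / recent.sum() + (1 - theta) * total / total.sum()
        choice = rng.choice(n_apps, p=p)
        total[choice] += 1
        history.append(choice)
        recent = np.bincount(history[-window:], minlength=n_apps) + 1.0

    print("final shares:", np.round(total / total.sum(), 3))
    ```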

  5. A simple generative model of collective online behavior

    PubMed Central

    Gleeson, James P.; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A.; Reed-Tsochas, Felix

    2014-01-01

    Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates—even when using purely observational data without experimental design—that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior. PMID:25002470

  6. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.

  7. Maintenance of algal endosymbionts in Paramecium bursaria: a simple model based on population dynamics.

    PubMed

    Iwai, Sosuke; Fujiwara, Kenji; Tamura, Takuro

    2016-09-01

    Algal endosymbiosis is widely distributed in eukaryotes including many protists and metazoans, and plays important roles in aquatic ecosystems, combining phagotrophy and phototrophy. To maintain a stable symbiotic relationship, endosymbiont population size in the host must be properly regulated and maintained at a constant level; however, the mechanisms underlying the maintenance of algal endosymbionts are still largely unknown. Here we investigate the population dynamics of the unicellular ciliate Paramecium bursaria and its Chlorella-like algal endosymbiont under various experimental conditions in a simple culture system. Our results suggest that endosymbiont population size in P. bursaria was not regulated by active processes such as cell division coupling between the two organisms, or partitioning of the endosymbionts at host cell division. Regardless, endosymbiont population size was eventually adjusted to a nearly constant level once cells were grown with light and nutrients. To explain this apparent regulation of population size, we propose a simple mechanism based on the different growth properties (specifically the nutrient requirements) of the two organisms, and from this develop a mathematical model to describe the population dynamics of host and endosymbiont. The proposed mechanism and model may provide a basis for understanding the maintenance of algal endosymbionts. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.

  8. A comparison of simple global kinetic models for coal devolatilization with the CPD model

    DOE PAGES

    Richards, Andrew P.; Fletcher, Thomas H.

    2016-08-01

    Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10^3 to 10^6 K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages of each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).

  9. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
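
    The batch least-squares step can be sketched generically: simulate noisy observations from a model, then estimate parameters by minimizing squared residuals. Below, a made-up one-dimensional Gaussian profile stands in for the shear-diffusion transport model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Stand-in transport model: 1-D Gaussian concentration profile with
    # amplitude A and width sigma (invented, not the shear-diffusion model).
    def model(params, x):
        A, sigma = params
        return A * np.exp(-x**2 / (2 * sigma**2))

    rng = np.random.default_rng(7)
    x = np.linspace(-5, 5, 50)                    # "sensor" locations
    truth = (2.0, 1.5)
    data = model(truth, x) + rng.normal(0, 0.05, x.size)  # simulated remote sensing

    fit = least_squares(lambda p: model(p, x) - data, x0=[1.0, 1.0])
    print("estimated parameters:", fit.x)         # ~ (2.0, 1.5)
    ```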

  10. A simple model of bipartite cooperation for ecological and organizational networks.

    PubMed

    Saavedra, Serguei; Reed-Tsochas, Felix; Uzzi, Brian

    2009-01-22

    In theoretical ecology, simple stochastic models that satisfy two basic conditions about the distribution of niche values and feeding ranges have proved successful in reproducing the overall structural properties of real food webs, using species richness and connectance as the only input parameters. Recently, more detailed models have incorporated higher levels of constraint in order to reproduce the actual links observed in real food webs. Here, building on previous stochastic models of consumer-resource interactions between species, we propose a highly parsimonious model that can reproduce the overall bipartite structure of cooperative partner-partner interactions, as exemplified by plant-animal mutualistic networks. Our stochastic model of bipartite cooperation uses simple specialization and interaction rules, and only requires three empirical input parameters. We test the bipartite cooperation model on ten large pollination data sets that have been compiled in the literature, and find that it successfully replicates the degree distribution, nestedness and modularity of the empirical networks. These properties are regarded as key to understanding cooperation in mutualistic networks. We also apply our model to an extensive data set of two classes of company engaged in joint production in the garment industry. Using the same metrics, we find that the network of manufacturer-contractor interactions exhibits similar structural patterns to plant-animal pollination networks. This surprising correspondence between ecological and organizational networks suggests that the simple rules of cooperation that generate bipartite networks may be generic, and could prove relevant in many different domains, ranging from biological systems to human society.

  11. A non-LTE model for the Jovian methane infrared emissions at high spectral resolution

    NASA Technical Reports Server (NTRS)

    Halthore, Rangasayi N.; Allen, J. E., Jr.; Decola, Philip L.

    1994-01-01

    High-resolution spectra of Jupiter in the 3.3 micrometer region have so far failed to reveal either the continuum or the line emissions that can be unambiguously attributed to the ν3 band of methane (Drossart et al. 1993; Kim et al. 1991). The ν3 line intensities predicted with the help of two simple non-Local Thermodynamic Equilibrium (LTE) models -- a two-level model and a three-level model, using experimentally determined relaxation coefficients -- are shown to be one to three orders of magnitude, respectively, below the 3-sigma noise level of these observations. Predicted ν4 emission intensities are consistent with observed values. If the methane mixing ratio below the homopause is assumed to be 2 × 10^-3, a value of about 300 K is derived as an upper limit to the temperature of the high stratosphere at microbar levels.

  12. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  13. Electro-Optic Quantum Memory for Light Using Two-Level Atoms

    NASA Astrophysics Data System (ADS)

    Hétet, G.; Longdell, J. J.; Alexander, A. L.; Lam, P. K.; Sellars, M. J.

    2008-01-01

    We present a simple quantum memory scheme that allows for the storage of a light field in an ensemble of two-level atoms. The technique is analogous to the NMR gradient echo for which the imprinting and recalling of the input field are performed by controlling a linearly varying broadening. Our protocol is perfectly efficient in the limit of high optical depths and the output pulse is emitted in the forward direction. We provide a numerical analysis of the protocol together with an experiment performed in a solid state system. In close agreement with our model, the experiment shows a total efficiency of up to 15%, and a recall efficiency of 26%. We suggest simple realizable improvements for the experiment to surpass the no-cloning limit.

  14. Electro-optic quantum memory for light using two-level atoms.

    PubMed

    Hétet, G; Longdell, J J; Alexander, A L; Lam, P K; Sellars, M J

    2008-01-18

    We present a simple quantum memory scheme that allows for the storage of a light field in an ensemble of two-level atoms. The technique is analogous to the NMR gradient echo for which the imprinting and recalling of the input field are performed by controlling a linearly varying broadening. Our protocol is perfectly efficient in the limit of high optical depths and the output pulse is emitted in the forward direction. We provide a numerical analysis of the protocol together with an experiment performed in a solid state system. In close agreement with our model, the experiment shows a total efficiency of up to 15%, and a recall efficiency of 26%. We suggest simple realizable improvements for the experiment to surpass the no-cloning limit.

  15. The Simple View of Reading as a Framework for National Literacy Initiatives: A Hierarchical Model of Pupil-Level and Classroom-Level Factors

    ERIC Educational Resources Information Center

    Savage, Robert; Burgos, Giovani; Wood, Eileen; Piquette, Noella

    2015-01-01

    The Simple View of Reading (SVR) describes Reading Comprehension as the product of distinct child-level variance in decoding (D) and linguistic comprehension (LC) component abilities. When used as a model for educational policy, distinct classroom-level influences of each of the components of the SVR model have been assumed, but have not yet been…

  16. A simple geometrical model describing shapes of soap films suspended on two rings

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

    We measured and analysed the stability of two types of soap films suspended on two rings using a simple model based on conical frusta, where a conical frustum is, as commonly defined, the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model produces results that match the experimental data and known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
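
    The frusta construction is straightforward to reproduce outside a spreadsheet as well: approximate the film by N stacked conical frusta between the rings and minimize the total lateral area over the intermediate radii, which drives the profile toward the catenoid. Ring dimensions below are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Soap film on two coaxial rings, approximated by N stacked conical frusta.
    R, H, N = 1.0, 0.6, 40              # ring radius, ring separation, frusta count
    h = H / N                           # height of each frustum

    def area(inner_radii):
        r = np.concatenate(([R], inner_radii, [R]))       # end radii fixed by rings
        slant = np.sqrt(np.diff(r) ** 2 + h ** 2)
        return np.sum(np.pi * (r[:-1] + r[1:]) * slant)   # frustum lateral areas

    res = minimize(area, np.full(N - 1, R), method="BFGS")
    print(f"minimal frusta area = {res.fun:.4f}")
    print(f"neck radius         = {res.x[(N - 1) // 2]:.4f}")  # < R, catenoid-like
    ```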

  17. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  18. BRICK v0.2, a simple, accessible, and transparent model framework for climate and regional sea-level projections

    NASA Astrophysics Data System (ADS)

    Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus

    2017-07-01

    Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.

  19. Sea-level rise and shoreline retreat: time to abandon the Bruun Rule

    NASA Astrophysics Data System (ADS)

    Cooper, J. Andrew G.; Pilkey, Orrin H.

    2004-11-01

    In the face of a global rise in sea level, understanding the response of the shoreline to changes in sea level is a critical scientific goal to inform policy makers and managers. A body of scientific information exists that illustrates both the complexity of the linkages between sea-level rise and shoreline response, and the comparative lack of understanding of these linkages. In spite of the lack of understanding, many appraisals have been undertaken that employ a concept known as the "Bruun Rule". This is a simple two-dimensional model of shoreline response to rising sea level. The model has seen near global application since its original formulation in 1954. The concept provided an advance in understanding of the coastal system at the time of its first publication. It has, however, been superseded by numerous subsequent findings and is now invalid. Several assumptions behind the Bruun Rule are known to be false and nowhere has the Bruun Rule been adequately proven; on the contrary, several studies disprove it in the field. No universally applicable model of shoreline retreat under sea-level rise has yet been developed. Despite this, the Bruun Rule is in widespread contemporary use at a global scale both as a management tool and as a scientific concept. The persistence of this concept beyond its original assumption base is attributed to the following factors: the appeal of a simple, easy-to-use analytical model that is in widespread use; the difficulty of determining the relative validity of 'proofs' and 'disproofs'; ease of application; positive advocacy by some scientists; application by other scientists without critical appraisal; the simple numerical expression of the model; and the lack of easy alternatives. The Bruun Rule has no power for predicting shoreline behaviour under rising sea level and should be abandoned. It is a concept whose time has passed. The belief by policy makers that it offers a prediction of future shoreline position may well have stifled much-needed research into the coastal response to sea-level rise.
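
    For context, the rule being criticized is the simple proportionality R = S·L/(B + h): predicted retreat R equals sea-level rise S scaled by the ratio of the active-profile width L to its vertical extent (berm height B plus closure depth h). A worked example with invented beach parameters:

    ```python
    # Bruun Rule: retreat R = S * L / (B + h), with sea-level rise S, cross-shore
    # width of the active profile L, berm height B and closure depth h.
    # All values below are invented for illustration.
    S = 0.5          # m of sea-level rise
    L = 1000.0       # m, width of the active profile out to closure depth
    B, h = 2.0, 8.0  # m, berm height and closure depth

    R = S * L / (B + h)
    print(f"predicted retreat = {R:.0f} m")   # 50 m, i.e. ~100x the rise
    ```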

  20. Analysis of a dynamic model of guard cell signaling reveals the stability of signal propagation

    NASA Astrophysics Data System (ADS)

    Gan, Xiao; Albert, Réka

    Analyzing the long-term behaviors (attractors) of dynamic models of biological systems can provide valuable insight into biological phenotypes and their stability. We identified the long-term behaviors of a multi-level, 70-node discrete dynamic model of the stomatal opening process in plants. We reduce the model's huge state space by reducing unregulated nodes and simple mediator nodes, and by simplifying the regulatory functions of selected nodes while keeping the model consistent with experimental observations. We perform attractor analysis on the resulting 32-node reduced model by two methods: (1) converting it into a Boolean model and applying two attractor-finding algorithms; (2) theoretical analysis of the regulatory functions. We conclude that all nodes except two in the reduced model have a single attractor, and only two nodes can admit oscillations. The multistability or oscillations do not affect the stomatal opening level in any situation. This conclusion applies to the original model as well in all the biologically meaningful cases. We further demonstrate the robustness of signal propagation by showing that a large percentage of single-node knockouts does not affect the stomatal opening level. Thus, we conclude that the complex structure of this signal transduction network provides multiple information propagation pathways while not allowing extensive multistability or oscillations, resulting in robust signal propagation. Our innovative combination of methods offers a promising way to analyze multi-level models.
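
    Method 1 in the abstract, finding the attractors of a Boolean model, can be illustrated on a toy network. The three-node rules below are invented, not the guard cell model; the code iterates the synchronous update map from every state and collects the cycles it settles into.

    ```python
    from itertools import product

    # Toy 3-node Boolean network (invented rules, not the stomatal model).
    def update(state):
        a, b, c = state
        return (b, a and not c, a or b)

    def attractor(state):
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = update(state)
        cycle_start = seen[state]
        trajectory = [s for s, i in sorted(seen.items(), key=lambda kv: kv[1])]
        cycle = trajectory[cycle_start:]
        # Canonical rotation so the same cycle hashes identically from any start.
        return tuple(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))

    attractors = {attractor(s) for s in product([False, True], repeat=3)}
    for a in attractors:
        print(a)
    ```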

  1. Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions

    PubMed Central

    Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi

    2015-01-01

    In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Thus, extending the existing research and improving upon the previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). First, we propose a new definition for simple fuzzy line segments and simple fuzzy regions based on the computational fuzzy topology. Then, based on the new definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions; this study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss several examples to demonstrate the validity of the new method; comparisons with existing fuzzy models show that the proposed method can compute more relations than the existing models, as it is more expressive. PMID:25775452

  2. Size Effect of Ground Patterns on FM-Band Cross-Talks between Two Parallel Signal Traces of Printed Circuit Boards for Vehicles

    NASA Astrophysics Data System (ADS)

    Iida, Michihira; Maeno, Tsuyoshi; Wang, Jianqing; Fujiwara, Osamu

    Electromagnetic disturbances in vehicle-mounted radios are mainly caused by conducted noise currents flowing through wiring harnesses from vehicle-mounted printed circuit boards (PCBs) with common slitting ground patterns. To suppress these kinds of noise currents, we previously measured them for simple two-layer PCBs with two parallel signal traces and slitting or non-slitting ground patterns, and then used FDTD simulation to investigate the reduction characteristics of the FM-band cross-talk noise levels between two parallel signal traces on six simple PCB models having different slitting ground or different divided ground patterns parallel to the traces. As a result, we found that the contributory factor in the FM-band cross-talk reduction is the reduction of mutual inductance between the two parallel traces, and also that the noise currents from PCBs can be suppressed even if the size of the return ground becomes small. In this study, to investigate this finding, we further simulated the frequency characteristics of cross-talk reduction for six additional simple PCB models with ground patterns of different dividing dimensions parallel to the traces, which revealed the interesting phenomenon that the cross-talk reduction does not always decrease as the width between the divided ground patterns increases.

  3. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, B net, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the B net model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  4. Spatiotemporal modelling of viral infection dynamics

    NASA Astrophysics Data System (ADS)

    Beauchemin, Catherine

    Viral kinetics have been studied extensively in the past through the use of ordinary differential equations describing the time evolution of the diseased state in a spatially well-mixed medium. However, emerging spatial structures such as localized populations of dead cells might affect the spread of infection, similar to the manner in which a counter-fire can stop a forest fire from spreading. In the first phase of the project, a simple two-dimensional cellular automaton model of viral infections was developed. It was validated against clinical immunological data for uncomplicated influenza A infections and shown to be accurate enough to model them adequately. In the second phase of the project, the cellular automaton model was used to investigate the effects of relaxing the well-mixed assumption on viral infection dynamics. Grouping the initially infected cells into patches, rather than distributing them uniformly on the grid, reduced the infection rate, as only cells on the perimeter of a patch have healthy neighbours to infect. A local epithelial cell regeneration rule, where dead cells are replaced by healthy cells when an immediate neighbour divides, was found to result in more extensive damage to the epithelium and yielded a better fit to experimental influenza A infection data than a global regeneration rule based on the division rate of healthy cells. Finally, adding immune cells at the site of infection was found to be the better strategy at low infection levels, while adding them at random locations on the grid was the better strategy at high infection levels. In the last project, the movement of T cells within lymph nodes in the absence of antigen was investigated. Based on individual T cell track data captured by two-photon microscopy experiments in vivo, a simple model was proposed for the motion of T cells. This is the first step towards the implementation of a more realistic spatiotemporal model of HIV than those proposed thus far.
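
    A minimal sketch of the kind of two-dimensional cellular automaton described above, assuming a simple healthy/infected/dead state machine with synchronous updates; the grid size, infection probability and infected-cell lifespan are illustrative assumptions, not the fitted parameters of the study.

    ```python
    # Toy 2-D cellular automaton for viral spread: 0 = healthy, 1 = infected, 2 = dead.
    import numpy as np

    rng = np.random.default_rng(1)
    N, P_INFECT, INFECT_LIFESPAN, STEPS = 100, 0.05, 6, 50

    state = np.zeros((N, N), dtype=int)
    age = np.zeros((N, N), dtype=int)          # time spent infected
    state[N // 2, N // 2] = 1                  # seed one infected cell

    for _ in range(STEPS):
        # count infected neighbours (4-neighbourhood, toroidal wrap)
        inf = (state == 1).astype(int)
        neighbours = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0) +
                      np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
        # healthy cells become infected with a probability per infected neighbour
        p = 1.0 - (1.0 - P_INFECT) ** neighbours
        newly_infected = (state == 0) & (rng.random((N, N)) < p)
        # infected cells die after a fixed lifespan
        age[state == 1] += 1
        dies = (state == 1) & (age >= INFECT_LIFESPAN)
        state[newly_infected] = 1
        state[dies] = 2

    print("healthy/infected/dead:", [(state == s).sum() for s in (0, 1, 2)])
    ```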

  5. Defining Simple nD Operations Based on Prismatic nD Objects

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2016-10-01

    An alternative to the traditional approaches of modelling 2D/3D space, time, scale and other parametrisable characteristics separately in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate because we lack an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.

  6. The complex links between governance and biodiversity.

    PubMed

    Barrett, Christopher B; Gibson, Clark C; Hoffman, Barak; McCubbins, Mathew D

    2006-10-01

    We argue that two problems weaken the claims of those who link corruption and the exploitation of natural resources. The first is conceptual and the second is methodological. Studies that use national-level indicators of corruption fail to note that corruption comes in many forms, at multiple levels, that may affect resource use quite differently: negatively, positively, or not at all. Without a clear causal model of the mechanism by which corruption affects resources, one should treat with caution any estimated relationship between corruption and the state of natural resources. Simple, atheoretical models linking corruption measures and natural resource use typically do not account for other important control variables pivotal to the relationship between humans and natural resources. By way of illustration of these two general concerns, we used statistical methods to demonstrate that the findings of a recent, well-known study that posits a link between corruption and decreases in forests and elephants are not robust to simple conceptual and methodological refinements. In particular, once we controlled for a few plausible anthropogenic and biophysical conditioning factors, estimated the effects in changes rather than levels so as not to confound cross-sectional and longitudinal variation, and incorporated additional observations from the same data sources, corruption levels no longer had any explanatory power.

  7. A univariate model of river water nitrate time series

    NASA Astrophysics Data System (ADS)

    Worrall, F.; Burt, T. P.

    1999-01-01

    Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
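
    A minimal sketch of the workflow described above — fitting a seasonal ARMA-type model to a monthly nitrate series and checking predictions against held-back data — using statsmodels' SARIMAX. The synthetic series and the (1,0,1)x(1,0,0,12) model order are illustrative assumptions, not the paper's fitted models.

    ```python
    # Illustrative seasonal time-series fit for monthly nitrate concentrations.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(2)
    months = np.arange(120)
    # synthetic series: seasonal cycle (winter maxima) + trend + noise
    nitrate = (5 + 0.01 * months + 2 * np.cos(2 * np.pi * months / 12)
               + rng.normal(0, 0.4, 120))

    train, test = nitrate[:108], nitrate[108:]            # hold back the last year
    model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12), trend="ct")
    fit = model.fit(disp=False)

    pred = fit.forecast(steps=12)
    pct_err = 100 * np.abs(pred - test) / test
    print(f"mean percentage error over the hold-back year: {pct_err.mean():.1f}%")
    ```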

  8. Why do things fall? How to explain why gravity is not a force

    NASA Astrophysics Data System (ADS)

    Stannard, Warren B.

    2018-03-01

    In most high school physics classes, gravity is described as an attractive force between two masses as formulated by Newton over 300 years ago. Einstein’s general theory of relativity implies that gravitational effects are instead the result of a ‘curvature’ of space-time. However, explaining why things fall without resorting to Newton’s gravitational force can be difficult. This paper introduces some simple graphical and visual analogies and models which are suitable for the introduction of Einstein’s theory of general relativity at a high school level. These models provide an alternative to Newton’s gravitational force and help answer the simple question: why do things fall?

  9. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model for studying the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating integrated processing of modality-specific information. This finding contrasts with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple, short stimuli do not depend on the availability of visual and/or cross-modal input from birth.

  10. On the successful use of a simplified model to simulate the succession of toxic cyanobacteria in a hypereutrophic reservoir with a highly fluctuating water level.

    PubMed

    Fadel, Ali; Lemaire, Bruno J; Vinçon-Leite, Brigitte; Atoui, Ali; Slim, Kamal; Tassin, Bruno

    2017-09-01

    The management of many freshwater bodies worldwide that suffer from harmful algal blooms would benefit from a simple ecological model that requires little field data, e.g. for early warning systems. Beyond a certain degree, adding processes to ecological models can reduce their predictive capabilities. In this work, we assess whether a simple ecological model without nutrients is able to describe the succession of cyanobacterial blooms of different species in a hypereutrophic reservoir and help understand the factors that determine these blooms. At our study site, Karaoun Reservoir, Lebanon, the cyanobacteria Aphanizomenon ovalisporum and Microcystis aeruginosa bloom alternately. A simple configuration of the model DYRESM-CAEDYM was used: both cyanobacteria were simulated, with a constant vertical migration velocity for A. ovalisporum, a light-dependent vertical migration velocity for M. aeruginosa, and growth limited by light and temperature, but not by nutrients, for both species. The model was calibrated on two successive years with contrasting bloom patterns and high variations in water level. It was able to reproduce the measurements, showing good performance for the water level (root-mean-square error (RMSE) lower than 1 m, annual variation of 25 m), water temperature profiles (RMSE of 0.22-1.41 °C, range 13-28 °C) and cyanobacteria biomass (RMSE of 1-57 μg Chl a L⁻¹, range 0-206 μg Chl a L⁻¹). The model also helped explain the succession of blooms in both years: the model results suggest that the higher growth rate of M. aeruginosa under favourable temperature and light conditions allowed it to outgrow A. ovalisporum. Our results show that simple model configurations can be sufficient not only for theoretical work, where a few major processes can be identified, but also for operational applications. This approach could be transposed to other hypereutrophic lakes and reservoirs to describe the competition between dominant phytoplankton species, contribute to early warning systems, or be used for management scenarios.

  11. Modeling the plant-soil interaction in presence of heavy metal pollution and acidity variations.

    PubMed

    Guala, Sebastián; Vega, Flora A; Covelo, Emma F

    2013-01-01

    We introduce a modification to a mathematical interaction model, developed to describe metal uptake by plants and its effects on their growth, so that it also accounts for variations in soil acidity. The model relates the dynamics of metal uptake from soil to plants, including the dependence of uptake on the acidity level. Two types of relationships are considered: total and available metal content. We make deliberately simple mathematical assumptions in order to obtain expressions that are as simple as possible, with the aim of easy testing in experimental settings. This work introduces modifications to two versions of the model: on the one hand, the relationship between the metal in soil and the concentration of the metal in plants; on the other, the relationship between the metal in soil and the total amount of the metal in plants. The subtle difference between the two versions becomes fundamental when considering the tolerance and accumulation capacity of pollutants in biomass from the soil.

  12. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  13. Predicting Fish Densities in Lotic Systems: a Simple Modeling Approach

    EPA Science Inventory

    Fish density models are essential tools for fish ecologists and fisheries managers. However, applying these models can be difficult because of high levels of model complexity and the large number of parameters that must be estimated. We designed a simple fish density model and te...

  14. Two Simple Macroeconomic Simulations and the Great Depression. Instructor's Notes [and] A Student Guide [and] Basic Program.

    ERIC Educational Resources Information Center

    Schenk, Robert E.

    Intended for use with college students in introductory macroeconomics or American economic history courses, these two computer simulations of two basic macroeconomic models--a simple Keynesian-type model and a quantity-theory-of-money model--present largely incompatible explanations of the Great Depression. Written in Basic, the simulations are…

  15. Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng Jing; Zhou Jianying

    2003-04-01

    The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even when the transition frequency between the ground state and the third level is far away from the spectrum of the pulse, this additional transition can make the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When the pulse width is decreased or the pulse area increased, the TLA gives rise to non-negligible errors compared with the precise results.

  16. Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms

    NASA Astrophysics Data System (ADS)

    Cheng, Jing; Zhou, Jianying

    2003-04-01

    The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even when the transition frequency between the ground state and the third level is far away from the spectrum of the pulse, this additional transition can make the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When the pulse width is decreased or the pulse area increased, the TLA gives rise to non-negligible errors compared with the precise results.
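
    A minimal numerical sketch of the comparison described above, assuming a Gaussian pulse envelope in the rotating frame and equal couplings to both excited levels: the Schrödinger equation is integrated for the full V-type three-level atom and for its two-level truncation, and the final excited-state populations are compared. All parameter values are illustrative, not those of the paper.

    ```python
    # Illustrative comparison of a V-type three-level atom vs. its two-level
    # truncation under a short Gaussian pulse (rotating frame, hbar = 1).
    import numpy as np
    from scipy.integrate import solve_ivp

    def rabi(t, area, width):
        """Gaussian pulse envelope with the given pulse area."""
        return area / (width * np.sqrt(2 * np.pi)) * np.exp(-t**2 / (2 * width**2))

    AREA, WIDTH = np.pi, 0.5       # pulse area and width (arbitrary units)
    DELTA3 = 20.0                  # detuning of the far-off-resonant third level

    def schrodinger(t, c, nlev):
        om = rabi(t, AREA, WIDTH)
        H = np.zeros((nlev, nlev), dtype=complex)
        H[0, 1] = H[1, 0] = om / 2          # resonant transition
        if nlev == 3:
            H[0, 2] = H[2, 0] = om / 2      # coupling to the third level
            H[2, 2] = DELTA3
        return -1j * (H @ c)

    for nlev, label in [(2, "two-level (TLA)"), (3, "three-level")]:
        c0 = np.zeros(nlev, dtype=complex); c0[0] = 1.0
        sol = solve_ivp(schrodinger, (-6 * WIDTH, 6 * WIDTH), c0,
                        args=(nlev,), rtol=1e-9, atol=1e-11)
        print(f"{label}: final excited-state population = {abs(sol.y[1, -1])**2:.4f}")
    ```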

  17. Principles of protein folding--a perspective from simple exact models.

    PubMed Central

    Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.

    1995-01-01

    General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459

  18. Simple View of Reading in Down's syndrome: the role of listening comprehension and reading skills.

    PubMed

    Roch, Maja; Levorato, M Chiara

    2009-01-01

    According to the 'Simple View of Reading' (Hoover and Gough 1990), individual differences in reading comprehension are accounted for by decoding skills and listening comprehension, each of which makes a unique and specific contribution. The current research was aimed at testing the Simple View of Reading in individuals with Down's syndrome and comparing their profiles with typically developing first graders. Listening comprehension and the ability to read both words and non-words was compared in two groups with the same level of reading comprehension: 23 individuals with Down's syndrome aged between 11 years 3 months and 18 years 2 months and 23 first-grade typically developing children aged between 6 years 2 months and 7 years 4 months. The results indicate that at the same level of reading comprehension, individuals with Down's syndrome have less developed listening comprehension and more advanced word recognition than typically developing first graders. A comparison of the profiles of the two groups revealed that reading comprehension level was predicted by listening comprehension in both groups of participants and by word-reading skills only in typically developing children. The Simple View of Reading model is confirmed for individuals with Down's syndrome, although they do not show the reading profile of typically developing first graders; rather, they show an atypical profile similar to that of 'poor comprehenders' (Cain and Oakhill 2006). The crucial role of listening comprehension in Down's syndrome is also discussed with reference to the educational implications.

  19. A study of two cases of comma-cloud cyclogenesis using a semigeostrophic model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, G.C.; Cho, Hanru

    1992-12-01

    The linear stability of two atmospheric flows is studied, with basic-state data taken from environments where comma clouds are observed to form. Each basic state features a baroclinic zone associated with an upper-level jet, with conditional instability on the north side. The semigeostrophic approximation is utilized, along with a simple parameterization for cumulus heating, and the eigenvalue problem is solved employing a Chebyshev spectral technique. 47 refs.

  20. Calibration of a simple and a complex model of global marine biogeochemistry

    NASA Astrophysics Data System (ADS)

    Kriest, Iris

    2017-11-01

    The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.

  1. The impact of potential political security level on international tourism

    Treesearch

    Young-Rae Kim; Chang Huh; Seung Hyun Kim

    2002-01-01

    The purpose of this study was to investigate the impact of potential political security in an effort to fill in two foregoing research gaps in international tourism. To investigate the relationship between political security and international tourism, a simple regression model was employed. Secondary data were collected from a variety of sources, such as international...

  2. Wake Vortex Prediction Models for Decay and Transport Within Stratified Environments

    NASA Astrophysics Data System (ADS)

    Switzer, George F.; Proctor, Fred H.

    2002-01-01

    This paper proposes two simple models to predict vortex transport and decay. The models are determined empirically from results of three-dimensional large eddy simulations, and are applicable to wake vortices out of ground effect and not subjected to environmental winds. The large eddy simulations assume a range of ambient turbulence and stratification levels. The models and the results from the large eddy simulations support the hypothesis that the decay of the vortex hazard is decoupled from its change in descent rate.

  3. Local density approximation in site-occupation embedding theory

    NASA Astrophysics Data System (ADS)

    Senjean, Bruno; Tsuchiizu, Masahisa; Robert, Vincent; Fromager, Emmanuel

    2017-01-01

    Site-occupation embedding theory (SOET) is a density functional theory (DFT)-based method which aims at modelling strongly correlated electrons. It is in principle exact and applicable to model and quantum chemical Hamiltonians. The theory is presented here for the Hubbard Hamiltonian. In contrast to conventional DFT approaches, the site (or orbital) occupations are deduced in SOET from a partially interacting system consisting of one (or more) impurity site(s) and non-interacting bath sites. The correlation energy of the bath is then treated implicitly by means of a site-occupation functional. In this work, we propose a simple impurity-occupation functional approximation based on the two-level (2L) Hubbard model which is referred to as two-level impurity local density approximation (2L-ILDA). Results obtained on a prototypical uniform eight-site Hubbard ring are promising. The extension of the method to larger systems and more sophisticated model Hamiltonians is currently in progress.

  4. Generalized estimators of avian abundance from count survey data

    USGS Publications Warehouse

    Royle, J. Andrew

    2004-01-01

    I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture–recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, or unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
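
    A minimal sketch of the hierarchical idea described above, assuming the simplest case: local abundance N_i ~ Poisson(λ) at each site and counts y_ij ~ Binomial(N_i, p) across repeat visits. The likelihood marginalizes the latent N_i by summing up to a truncation bound K; the values below are illustrative, not the paper's examples.

    ```python
    # Toy N-mixture-style likelihood: Poisson local abundance, binomial detection.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson, binom

    rng = np.random.default_rng(3)
    LAM_TRUE, P_TRUE, SITES, VISITS, K = 4.0, 0.4, 150, 3, 60

    N = rng.poisson(LAM_TRUE, SITES)                      # latent abundances
    y = rng.binomial(N[:, None], P_TRUE, (SITES, VISITS)) # repeated counts

    def neg_log_lik(theta):
        lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
        ns = np.arange(K + 1)
        prior = poisson.pmf(ns, lam)
        ll = 0.0
        for i in range(SITES):
            # P(y_i) = sum_N P(N) * prod_j Binom(y_ij | N, p)
            lik_n = prior * np.prod(binom.pmf(y[i][:, None], ns[None, :], p), axis=0)
            ll += np.log(lik_n.sum())
        return -ll

    fit = minimize(neg_log_lik, x0=[np.log(2.0), 0.0], method="Nelder-Mead")
    print("lambda-hat, p-hat:", np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))
    ```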

  5. Simple models for the simulation of submarine melt for a Greenland glacial system model

    NASA Astrophysics Data System (ADS)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

    Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and grounding-line position of these outlet glaciers. As the ocean warms, submarine melt is expected to increase, potentially driving outlet-glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the need for models with extremely high resolution, of the order of a few hundred meters. That requirement applies not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundred meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models quantitatively, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that a line plume can produce submarine melt compatible with observational data. Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.

  6. Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition

    NASA Astrophysics Data System (ADS)

    McGilvray, M.; Dann, A. G.; Jacobs, P. A.

    2013-07-01

    Only a limited number of free-stream flow properties can be measured at the nozzle exit of hypersonic impulse facilities. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically, in a reflected shock tunnel, a simple analysis that requires few computational resources is used to calculate quasi-steady gas properties. This simple analysis applies initial fill conditions and experimental measurements in analytical calculations of each major flow process, using forward coupling with minor corrections for processes that are not directly modeled. This simplistic approach, however, leads to an unknown level of discrepancy from the true flow properties. To explore the accuracy of the simple modelling technique, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties from a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison with experimental data obtained from the facility. For the condition and facility investigated, the test conditions at the nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results of the complete facility calculations to within the accuracy of the experimental measurements.

  7. Modeling the effect of subgrain rotation recrystallization on the evolution of olivine crystal preferred orientations in simple shear

    NASA Astrophysics Data System (ADS)

    Signorelli, Javier; Tommasi, Andréa

    2015-11-01

    Homogenization models are widely used to predict the evolution of texture (crystal preferred orientations, CPO) and the resulting anisotropy of physical properties in metals, rocks, and ice. They fail, however, to predict two main features of texture evolution in simple shear (the dominant deformation regime on Earth) for highly anisotropic crystals like olivine: (1) the fast rotation of the CPO towards a stable position, characterized by parallelism of the dominant slip system with the macroscopic shear, and (2) the asymptotic evolution towards a constant intensity. To better predict CPO-induced anisotropy in the mantle while limiting computational costs and the use of poorly constrained physical parameters, we modified a viscoplastic self-consistent code to simulate the effects of subgrain rotation recrystallization. Each crystal is associated with a finite number of fragments (possible subgrains). Formation of a subgrain corresponds to the introduction of a disorientation (relative to the parent) and a resetting of the fragment strain and internal energy. The probability of formation of a subgrain is controlled by comparing the local internal energy with the average value in the polycrystal. A two-level mechanical interaction scheme is applied to simulate the intracrystalline strain heterogeneity allowed by the formation of low-angle grain boundaries. Within a crystal, interactions between subgrains follow a constant-stress scheme. The interactions between grains are simulated by a tangent viscoplastic self-consistent approach. This two-level approach better reproduces the evolution of olivine CPO in simple shear in experiments and nature. It also predicts a marked weakening at low shear strains, consistent with experimental data.

  8. CrowdWater - Can people observe what models need?

    NASA Astrophysics Data System (ADS)

    van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.

    2017-12-01

    CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated on observed streamflow data, but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. To test this assumption, we held surveys near seven differently sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow, but the spread in the estimates was large and there were very large outliers, suggesting that crowd-based streamflow data are highly uncertain. To determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain a simple runoff model and to generate simulated streamflow time series from the level observations.
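
    A minimal sketch of the conversion step described above, assuming class boundaries are taken from upper quantiles of the observed streamflow series (the study found boundaries towards high stream levels most informative); a simple objective then scores a simulation by how often its level classes match the observed ones. The data and thresholds are illustrative, not the project's.

    ```python
    # Convert a streamflow series to stream level classes and score a simulation.
    import numpy as np

    rng = np.random.default_rng(4)
    obs_q = rng.gamma(2.0, 5.0, 365)                  # stand-in daily streamflow
    sim_q = obs_q * rng.lognormal(0.0, 0.25, 365)     # imperfect model simulation

    # class boundaries from upper quantiles of the observed series
    bounds = np.quantile(obs_q, [0.5, 0.7, 0.85, 0.95])

    obs_cls = np.digitize(obs_q, bounds)              # classes 0..4
    sim_cls = np.digitize(sim_q, bounds)

    # simple objective: fraction of days in the correct level class
    score = (obs_cls == sim_cls).mean()
    print(f"level-class agreement: {score:.2f}")
    ```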

  9. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
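
    A minimal sketch of the recommended mixed-effects approach, assuming a long-format data set with a continuous outcome, a time variable, and subject identifiers; statsmodels' mixedlm fits a random intercept and slope per subject. The variable names and synthetic data are illustrative assumptions.

    ```python
    # Mixed-effects (random intercept + slope) model for longitudinal data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    subjects, visits = 30, 5
    rows = []
    for s in range(subjects):
        b0, b1 = rng.normal(50, 5), rng.normal(-1.0, 0.5)   # per-subject effects
        for t in range(visits):
            rows.append({"subject": s, "time": t,
                         "score": b0 + b1 * t + rng.normal(0, 1)})
    df = pd.DataFrame(rows)

    # fixed effect of time, plus random intercept and slope per subject
    model = smf.mixedlm("score ~ time", df, groups=df["subject"], re_formula="~time")
    result = model.fit()
    print(result.summary())
    ```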

  10. Analysis of a novel double-barreled anion channel from rat liver rough endoplasmic reticulum.

    PubMed Central

    Morier, N; Sauvé, R

    1994-01-01

    The presence of anionic channels in stripped rough endoplasmic reticulum membranes isolated from rat hepatocytes was investigated by fusing microsomes from these membranes to a planar lipid bilayer. Several types of anion-selective channels were observed, including a voltage-gated Cl- channel whose activity appeared in bursts characterized by transitions among three distinct conductance levels of 0 pS (level 0), 160 pS (level O1) and 320 pS (level O2), respectively, in 450 mM (cis)/50 mM (trans) KCl conditions. A χ² analysis on current records from which interburst silent periods were omitted showed that the relative probabilities of current levels 0 (baseline), O1 and O2 followed a binomial statistic. However, measurements of the conditional probabilities W(level 0 at τ | level O2 at 0) and W(level O2 at τ | level 0 at 0) provided clear evidence of direct transitions between levels 0 and O2 without any detectable transitions through the intermediate level O1. It was concluded from these results that the observed channel is controlled by at least two distinct gating processes, namely (1) a voltage-dependent activation mechanism in which the entire system behaves as two independent monomeric channels of 160 pS, each characterized by simple open-closed kinetics, and (2) a slow voltage-dependent process that accounts both for the appearance of silent periods between bursts of channel activity and for the transitions between levels 0 and O2. Finally, an analysis of the relative probability of the system being in levels 0, O1 and O2 showed that our results are more compatible with a model in which all the states resulting from the superposition of the two independent monomeric channels have access, at different rates, to a common inactivated state than with a model in which a simple open-closed main gate either occludes or exposes simultaneously two independent 160-pS monomers. PMID:7524709
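
    A minimal sketch of the binomial check described above: if the channel behaves as two independent 160-pS monomers, each open with probability p, the occupancies of levels 0, O1 and O2 within a burst should follow Binomial(2, p); a χ² test compares observed dwell fractions against that prediction. The numbers below are illustrative, not the study's measurements.

    ```python
    # Test whether three conductance-level occupancies follow Binomial(2, p),
    # as expected for two independent identical monomeric channels.
    import numpy as np
    from scipy.stats import chisquare

    # illustrative dwell-time fractions in levels 0, O1 (160 pS), O2 (320 pS)
    observed = np.array([0.36, 0.48, 0.16])

    # estimate the single-monomer open probability from the mean occupancy:
    # E[open monomers] = P1 + 2*P2, so p = (P1 + 2*P2) / 2
    p_open = 0.5 * observed[1] + observed[2]
    expected = np.array([(1 - p_open) ** 2,
                         2 * p_open * (1 - p_open),
                         p_open ** 2])

    # chi-square with counts (assume, e.g., 1000 sampled dwell intervals)
    n = 1000
    stat, pval = chisquare(observed * n, expected * n, ddof=1)
    print(f"p_open = {p_open:.2f}, chi2 = {stat:.2f}, p-value = {pval:.3f}")
    ```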

  11. The time course of corticospinal excitability during a simple reaction time task.

    PubMed

    Kennefick, Michael; Maslovat, Dana; Carlsen, Anthony N

    2014-01-01

    The production of movement in a simple reaction time task can be separated into two time periods: the foreperiod, which is thought to include preparatory processes, and the reaction time interval, which includes initiation processes. To better understand these processes, transcranial magnetic stimulation has been used to probe corticospinal excitability at various time points during response preparation and initiation. Previous research has shown that excitability decreases prior to the "go" stimulus and increases following the "go"; however, these two time frames have been examined independently. The purpose of this study was to measure changes in corticospinal excitability during both the foreperiod and the reaction time interval in a single experiment, relative to a resting baseline level. Participants performed a button-press movement in a simple reaction time task and excitability was measured during rest, the foreperiod, and the reaction time interval. Results indicated that during the foreperiod, excitability levels quickly increased from baseline with the presentation of the warning signal, followed by a period of stable excitability leading up to the "go" signal, and finally a rapid increase in excitability during the reaction time interval. This excitability time course is consistent with neural activation models that describe movement preparation and response initiation.

  12. How much of reinforcement learning is working memory, not reinforcement learning? A behavioral, computational, and neurogenetic analysis

    PubMed Central

    Collins, Anne G. E.; Frank, Michael J.

    2012-01-01

    Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033

  13. Convective Propagation Characteristics Using a Simple Representation of Convective Organization

    NASA Astrophysics Data System (ADS)

    Neale, R. B.; Mapes, B. E.

    2016-12-01

    Observed equatorial wave propagation is intimately linked to convective organization and its coupling to features of the larger-scale flow. In this talk we use a simple 4-level model to accommodate the vertical modes of a mass-flux convection scheme (shallow, mid-level and deep). Two paradigms of convection are used to represent convective processes: one with only random (unorganized) diagnosed fluctuations of convective properties, and one with organized fluctuations of convective properties that are amplified by previously existing convection and have an explicit moistening impact on the local convecting environment. We show a series of model simulations in single-column, 2D and 3D configurations, in which the role of convective organization in wave propagation is shown to be fundamental. For the optimal choice of parameters linking organization to the local atmospheric state, a broad array of convective wave propagation emerges. Interestingly, the key characteristic of the propagating modes is low-level moistening, followed by deep convection, followed by mature 'large-scale' heating. This organization structure appears to hold across timescales from 5-day wave disturbances to MJO-like wave propagation.

  14. Quantum Theories of Self-Localization

    NASA Astrophysics Data System (ADS)

    Bernstein, Lisa Joan

    In the classical dynamics of coupled oscillator systems, nonlinearity leads to the existence of stable solutions in which energy remains localized for all time. Here the quantum-mechanical counterpart of classical self-localization is investigated in the context of two model systems. For these quantum models, the terms corresponding to classical nonlinearities modify a subset of the stationary quantum states to be particularly suited to the creation of nonstationary wavepackets that localize energy for long times. The first model considered here is the Quantized Discrete Self-Trapping model (QDST), a system of anharmonic oscillators with linear dispersive coupling used to model local modes of vibration in polyatomic molecules. A simple formula is derived for a particular symmetry class of QDST systems which gives an analytic connection between quantum self-localization and classical local modes. This formula is also shown to be useful in the interpretation of the vibrational spectra of some molecules. The second model studied is the Frohlich/Einstein Dimer (FED), a two-site system of anharmonically coupled oscillators based on the Frohlich Hamiltonian and motivated by the theory of Davydov solitons in biological protein. The Born-Oppenheimer perturbation method is used to obtain approximate stationary state wavefunctions with error estimates for the FED at the first excited level. A second approach is used to reduce the first excited level FED eigenvalue problem to a system of ordinary differential equations. A simple theory of low-energy self-localization in the FED is discussed. The quantum theories of self-localization in the intrinsic QDST model and the extrinsic FED model are compared.

  15. Simple and multiple linear regression: sample size considerations.

    PubMed

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres.
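
    A minimal sketch of the closed-form variance formulae alluded to above, assuming simple linear regression: the slope's standard error is driven by the spread of X (the etiological use), while the standard error of the fitted mean at a profile x0 grows with the distance of x0 from the mean of X (the clinical-prediction use). The numbers are illustrative.

    ```python
    # Closed-form standard errors in simple linear regression.
    import numpy as np

    rng = np.random.default_rng(6)
    n, sigma = 50, 2.0
    x = rng.uniform(0, 10, n)
    y = 1.5 + 0.8 * x + rng.normal(0, sigma, n)

    beta1, beta0 = np.polyfit(x, y, 1)
    resid = y - (beta0 + beta1 * x)
    s2 = resid @ resid / (n - 2)            # residual variance estimate

    sxx = ((x - x.mean()) ** 2).sum()
    se_slope = np.sqrt(s2 / sxx)            # SE of the slope (etiological focus)

    x0 = 9.0                                # a covariate "profile"
    se_mean_x0 = np.sqrt(s2 * (1 / n + (x0 - x.mean()) ** 2 / sxx))  # SE of fitted mean

    print(f"SE(slope) = {se_slope:.3f}, SE(mean at x0={x0}) = {se_mean_x0:.3f}")
    ```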

  16. A Simple Model to Describe the Relationship among Rainfall, Groundwater and Land Subsidence under a Heterogeneous Aquifer

    NASA Astrophysics Data System (ADS)

    Zheng, Y. Y.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.

    2017-12-01

    Land subsidence is a very serious problem in the Zhuoshui River alluvial fan, Taiwan. The main cause of land subsidence is compression of the soil, and the compression measured across the area is very extensive (Maryam et al., 2013; Linlin et al., 2014). Chen et al. [2010] studied the linear relationship between groundwater level and subsurface altitude variations using Global Positioning System (GPS) stations in the Zhuoshui River alluvial fan, but the subsurface altitude data came from only two GPS stations, whose coverage is too sparse and small to capture the altitude variations of the whole fan. Hung et al. [2011] used Interferometric Synthetic Aperture Radar (InSAR) to measure surface subsidence in the Zhuoshui River alluvial fan, but did not compare it with groundwater levels. This study examines the correlation between rainfall events and groundwater level, and the correlation between groundwater level and subsurface altitude, both of which are affected by heterogeneous soil. From these relationships, a numerical model is built to simulate land subsidence variations and estimate the coefficient of compressibility of the aquifer soil. Finally, the model can estimate long-term land subsidence. Keywords: Land Subsidence, InSAR, Groundwater Level, Numerical Model, Correlation Analyses

  17. MEG evidence that the central auditory system simultaneously encodes multiple temporal cues.

    PubMed

    Simpson, Michael I G; Barnes, Gareth R; Johnson, Sam R; Hillebrand, Arjan; Singh, Krish D; Green, Gary G R

    2009-09-01

    Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of simple sinusoidal amplitude modulations. In this study we used magnetoencephalography (MEG) to generate source space current estimates of the steady-state responses to simple one-component amplitude modulations and to a two-component amplitude modulation. A two-component modulation introduces the simplest form of modulation complexity into the waveform; the summation of the two-modulation rates introduces a beat-like modulation at the difference frequency between the two modulation rates. We compared the cortical representations of responses to the one-component and two-component modulations. In particular, we show that the temporal complexity in the two-component amplitude modulation stimuli was preserved at the cortical level. The method of stimulus normalization that we used also allows us to interpret these results as evidence that the important feature in sound modulations is the relative depth of one modulation rate with respect to another, rather than the absolute carrier-to-sideband modulation depth. More generally, this may be interpreted as evidence that modulation detection accurately preserves a representation of the modulation envelope. This is an important observation with respect to models of modulation processing, as it suggests that models may need a dynamic processing step to effectively model non-stationary stimuli. We suggest that the classic modulation filterbank model needs to be modified to take these findings into account.

  18. Simple Estimators for the Simple Latent Class Mastery Testing Model. Twente Educational Memorandum No. 19.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    Latent class models for mastery testing differ from continuum models in that they do not postulate a latent mastery continuum but conceive mastery and non-mastery as two latent classes, each characterized by different probabilities of success. Several researchers use a simple latent class model that is basically a simultaneous application of the…

  19. Phase transitions in the q-voter model with noise on a duplex clique

    NASA Astrophysics Data System (ADS)

    Chmiel, Anna; Sznajd-Weron, Katarzyna

    2015-11-01

    We study a nonlinear q-voter model with stochastic noise, interpreted in the social context as independence, on a duplex network. To study the role of the multilevelness in this model we propose three methods of transferring the model from a mono- to a multiplex network. They take into account two criteria: one related to the status of independence (LOCAL vs GLOBAL) and one related to peer pressure (AND vs OR). In order to examine the influence of the presence of more than one level in the social network, we perform simulations on a particularly simple multiplex: a duplex clique, which consists of two fully overlapped complete graphs (cliques). Solving numerically the rate equation and simultaneously conducting Monte Carlo simulations, we provide evidence that even a simple rearrangement into a duplex topology may lead to significant changes in the observed behavior. However, qualitative changes in the phase transitions can be observed for only one of the considered rules: LOCAL&AND. For this rule the phase transition becomes discontinuous for q = 5, whereas for a monoplex such behavior is observed for q = 6. Interestingly, only this rule admits construction of realistic variants of the model, in line with recent social experiments.
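
    A minimal Monte Carlo sketch of one plausible reading of the LOCAL&AND rule described above: each level of the duplex clique draws its own q-panel (LOCAL), and an agent conforms only if both panels are unanimous and agree (AND); with probability p the agent instead acts independently. The implementation details and parameter values are assumptions for illustration, not the authors' code.

    ```python
    # Monte Carlo for a q-voter model with independence (noise) on a duplex
    # clique, LOCAL&AND rule: one q-panel per level, conform only if both
    # panels are unanimous and point the same way.
    import numpy as np

    rng = np.random.default_rng(7)
    N, Q, P_INDEP, STEPS = 500, 5, 0.2, 200_000

    spins = np.ones(N, dtype=int)            # start from consensus

    for _ in range(STEPS):
        i = rng.integers(N)
        if rng.random() < P_INDEP:
            spins[i] = rng.choice([-1, 1])   # independence: random opinion
        else:
            # LOCAL: draw an independent q-panel on each level of the duplex
            panel_a = spins[rng.choice(N, Q, replace=False)]
            panel_b = spins[rng.choice(N, Q, replace=False)]
            # AND: adopt only if both panels are unanimous and agree
            if abs(panel_a.sum()) == Q and panel_a.sum() == panel_b.sum():
                spins[i] = panel_a[0]

    print(f"final magnetization: {spins.mean():.3f}")
    ```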

  20. Problems of low-parameter equations of state

    NASA Astrophysics Data System (ADS)

    Petrik, G. G.

    2017-11-01

    The paper focuses on a system approach to the problems of low-parameter equations of state (EOS). It continues investigations into the substantiated prognosis of properties on two levels, molecular and thermodynamic. Two sets of low-parameter EOS are considered, based on two very simple molecular-level models. The first consists of EOS of van der Waals type (modifications of the van der Waals EOS proposed for spheres). The main problem with these EOS is their weak connection with the micro-level, which raises many uncertainties. The second group of EOS was derived by the author, independently of the ideas of van der Waals, from the model of interacting point centers (IPC). All the parameters of this EOS have a meaning and are associated with the manifestation of attractive and repulsive forces. The relationship between them is found to be the control parameter of the thermodynamic level. In this case, the IPC EOS becomes a one-parameter family. It is shown that many EOS of vdW type can be included in the framework of the IPC model. Simultaneously, all their parameters acquire a physical meaning.

  1. A simple model for strong ground motions and response spectra

    USGS Publications Warehouse

    Safak, Erdal; Mueller, Charles; Boatwright, John

    1988-01-01

    A simple model for the description of strong ground motions is introduced. The model shows that response spectra can be estimated by using only four parameters of the ground motion, the RMS acceleration, effective duration and two corner frequencies that characterize the effective frequency band of the motion. The model is windowed band-limited white noise, and is developed by studying the properties of two functions, cumulative squared acceleration in the time domain, and cumulative squared amplitude spectrum in the frequency domain. Applying the methods of random vibration theory, the model leads to a simple analytical expression for the response spectra. The accuracy of the model is checked by using the ground motion recordings from the aftershock sequences of two different earthquakes and simulated accelerograms. The results show that the model gives a satisfactory estimate of the response spectra.
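
    A minimal sketch of the four-parameter ground-motion model described above, assuming the windowed band-limited white noise is realized by band-pass filtering Gaussian noise between the two corner frequencies, windowing it to the effective duration, and rescaling to the target RMS acceleration. The parameter values are illustrative, not taken from the recordings used in the paper.

    ```python
    # Windowed band-limited white-noise ground-motion model:
    # four parameters = RMS acceleration, effective duration, two corner frequencies.
    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(8)
    FS = 200.0                       # sampling rate (Hz)
    A_RMS, T_EFF = 0.5, 10.0         # target RMS accel (m/s^2), effective duration (s)
    F1, F2 = 1.0, 10.0               # corner frequencies bounding the effective band

    t = np.arange(0, 1.5 * T_EFF, 1 / FS)
    noise = rng.normal(size=t.size)

    # band-limit the white noise to the effective frequency band
    b, a = butter(4, [F1 / (FS / 2), F2 / (FS / 2)], btype="band")
    acc = filtfilt(b, a, noise)

    # boxcar window over the effective duration, zero elsewhere
    acc[t > T_EFF] = 0.0

    # rescale so the RMS over the effective duration matches the target
    rms = np.sqrt(np.mean(acc[t <= T_EFF] ** 2))
    acc *= A_RMS / rms

    print(f"RMS over effective duration: "
          f"{np.sqrt(np.mean(acc[t <= T_EFF] ** 2)):.3f} m/s^2")
    ```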

  2. Learning in Structured Connectionist Networks

    DTIC Science & Technology

    1988-04-01

    The structure is too rigid and learning too difficult for cognitive modeling. Two algorithms for learning simple, feature-based concept descriptions were also implemented. Recent progress in connectionist research has been encouraging; networks have successfully modeled human performance for various cognitive…

  3. A Simple Mechanical Model for the Isotropic Harmonic Oscillator

    ERIC Educational Resources Information Center

    Nita, Gelu M.

    2010-01-01

    A constrained elastic pendulum is proposed as a simple mechanical model for the isotropic harmonic oscillator. The conceptual and mathematical simplicity of this model recommends it as an effective pedagogical tool in teaching basic physics concepts at advanced high school and introductory undergraduate course levels. (Contains 2 figures.)

  4. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. We also report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments on cortical neurons under step-current stimulation. The results support the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
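
    A minimal sketch of the two-equation model named above. The adaptive exponential integrate-and-fire equations are standard (Brette and Gerstner 2005): C dV/dt = -gL(V - EL) + gL ΔT exp((V - VT)/ΔT) - w + I and τw dw/dt = a(V - EL) - w, with reset V → Vr and w → w + b at each spike. The parameter values below are textbook illustrative values for a tonic-spiking regime, not the fits reported in the paper.

    ```python
    # Forward-Euler integration of the adaptive exponential integrate-and-fire neuron.
    import numpy as np

    # parameters (illustrative tonic-spiking regime; pF, nS, mV, ms, pA units)
    C, gL, EL = 281.0, 30.0, -70.6
    VT, DT, Vr, Vpeak = -50.4, 2.0, -70.6, 20.0
    a, b, tau_w = 4.0, 80.5, 144.0
    I = 800.0                               # step current (pA)

    dt, T = 0.1, 500.0                      # time step and duration (ms)
    V, w, spikes = EL, 0.0, []

    for step in range(int(T / dt)):
        # membrane equation with exponential spike-initiation term
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        # adaptation current dynamics
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:                      # spike: reset V, increment adaptation
            spikes.append(step * dt)
            V, w = Vr, w + b

    print(f"{len(spikes)} spikes; first few times (ms): {spikes[:5]}")
    ```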

  5. Modeling and experimental characterization of electromigration in interconnect trees

    NASA Astrophysics Data System (ADS)

    Thompson, C. V.; Hau-Riege, S. P.; Andleigh, V. K.

    1999-11-01

    Most modeling and experimental characterization of interconnect reliability is focussed on simple straight lines terminating at pads or vias. However, laid-out integrated circuits often have interconnects with junctions and wide-to-narrow transitions. In carrying out circuit-level reliability assessments it is important to be able to assess the reliability of these more complex shapes, generally referred to as 'trees.' An interconnect tree consists of continuously connected high-conductivity metal within one layer of metallization. Trees terminate at diffusion barriers at vias and contacts, and, in the general case, can have more than one terminating branch when they include junctions. We have extended the understanding of 'immortality' demonstrated and analyzed for straight stud-to-stud lines, to trees of arbitrary complexity. This leads to a hierarchical approach for identifying immortal trees for specific circuit layouts and models for operation. To complete a circuit-level-reliability analysis, it is also necessary to estimate the lifetimes of the mortal trees. We have developed simulation tools that allow modeling of stress evolution and failure in arbitrarily complex trees. We are testing our models and simulations through comparisons with experiments on simple trees, such as lines broken into two segments with different currents in each segment. Models, simulations and early experimental results on the reliability of interconnect trees are shown to be consistent.

  6. Nondeducibility-Based Analysis of Cyber-Physical Systems

    NASA Astrophysics Data System (ADS)

    Gamage, Thoshitha; McMillin, Bruce

    Controlling information flow in a cyber-physical system (CPS) is challenging because cyber domain decisions and actions manifest themselves as visible changes in the physical domain. This paper presents a nondeducibility-based observability analysis for CPSs. In many CPSs, the capacity of a low-level (LL) observer to deduce high-level (HL) actions ranges from limited to none. However, a collaborative set of observers strategically located in a network may be able to deduce all the HL actions. This paper models a distributed power electronics control device network using a simple DC circuit in order to understand the effect of multiple observers in a CPS. The analysis reveals that the number of observers required to deduce all the HL actions in a system increases linearly with the number of configurable units. A simple definition of nondeducibility based on the uniqueness of low-level projections is also presented. This definition is used to show that a system with two security domain levels could be considered “nondeducibility secure” if no unique LL projections exist.

  7. Inhomogeneity and velocity fields effects on scattering polarization in solar prominences

    NASA Astrophysics Data System (ADS)

    Milić, I.; Faurobert, M.

    2015-10-01

    One of the methods for diagnosing vector magnetic fields in solar prominences is the so-called "inversion" of observed polarized spectral lines. This inversion usually assumes a fairly simple generative model, and in this contribution we aim to study the possible systematic errors introduced by this assumption. Using a two-dimensional toy model of a prominence, we first demonstrate the importance of multidimensional radiative transfer and horizontal inhomogeneities, which can induce a significant level of polarization in Stokes U without the need for a magnetic field. We then compute the emergent Stokes spectrum from a prominence pervaded by a vector magnetic field and use a simple, one-dimensional model to interpret these synthetic observations. We find that the inferred values of the magnetic field vector generally differ from the original ones. Most importantly, the magnetic field may appear more inclined than it really is.

  8. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825

  9. Simplified aeroelastic modeling of horizontal axis wind turbines

    NASA Technical Reports Server (NTRS)

    Wendell, J. H.

    1982-01-01

    Certain aspects of the aeroelastic modeling and behavior of the horizontal axis wind turbine (HAWT) are examined. Two simple three degree of freedom models are described in this report, and tools are developed which allow other simple models to be derived. The first simple model developed is an equivalent hinge model to study the flap-lag-torsion aeroelastic stability of an isolated rotor blade. The model includes nonlinear effects, preconing, and noncoincident elastic axis, center of gravity, and aerodynamic center. A stability study is presented which examines the influence of key parameters on aeroelastic stability. Next, two general tools are developed to study the aeroelastic stability and response of a teetering rotor coupled to a flexible tower. The first of these tools is an aeroelastic model of a two-bladed rotor on a general flexible support. The second general tool is a harmonic balance solution method for the resulting second order system with periodic coefficients. The second simple model developed is a rotor-tower model which serves to demonstrate the general tools. This model includes nacelle yawing, nacelle pitching, and rotor teetering. Transient response time histories are calculated and compared to a similar model in the literature. Agreement between the two is very good, especially considering how few harmonics are used. Finally, a stability study is presented which examines the effects of support stiffness and damping, inflow angle, and preconing.

  10. SIMPL: A Simplified Model-Based Program for the Analysis and Visualization of Groundwater Rebound in Abandoned Mines to Prevent Contamination of Water and Soils by Acid Mine Drainage

    PubMed Central

    Kim, Sung-Min

    2018-01-01

    Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480

  11. A minimally sufficient model for rib proximal-distal patterning based on genetic analysis and agent-based simulations

    PubMed Central

    Mah, In Kyoung

    2017-01-01

    For decades, the mechanism of skeletal patterning along a proximal-distal axis has been an area of intense inquiry. Here, we examine the development of the ribs, simple structures that in most terrestrial vertebrates consist of two skeletal elements—a proximal bone and a distal cartilage portion. While the ribs have been shown to arise from the somites, little is known about how the two segments are specified. During our examination of genetically modified mice, we discovered a series of progressively worsening phenotypes that could not be easily explained. Here, we combine genetic analysis of rib development with agent-based simulations to conclude that proximal-distal patterning and outgrowth could occur based on simple rules. In our model, specification occurs during somite stages due to varying Hedgehog protein levels, while later expansion refines the pattern. This framework is broadly applicable for understanding the mechanisms of skeletal patterning along a proximal-distal axis. PMID:29068314

  12. On Defect Cluster Aggregation and Non-Reducibility in Tin-Doped Indium Oxide

    NASA Astrophysics Data System (ADS)

    Warschkow, Oliver; Ellis, Donald E.; Gonzalez, Gabriela; Mason, Thomas O.

    2003-03-01

    The conductivity of tin-doped indium oxide (ITO), a transparent conductor, is critically dependent on the amount of tin-doping and oxygen partial pressure during preparation and annealing. Frank and Kostlin (Appl. Phys. A 27 (1982) 197-206) rationalized the carrier concentration dependence by postulating the formation of two types of neutral defect clusters at medium tin-doping levels: "Reducible" and "non-reducible" defect clusters; so named to indicate their ability to create carriers under reduction. According to Frank and Kostlin, both are composed of a single oxygen interstitial and two tin atoms substituting for indium, positioned in non-nearest and nearest coordination, respectively. This present work, seeking to distinguish reducible and non-reducible clusters by use of an atomistic model, finds only a weak correlation of oxygen interstitial binding energies with the relative positioning of dopants. Instead, the number of tin-dopants in the vicinity of the interstitial has a much larger effect on how strongly it is bound, a simple consequence of Coulomb interactions. We postulate that oxygen interstitials become non-reducible when clustered with three or more Sn_In. This occurs at higher doping levels as reducible clusters aggregate and share tin atoms. A simple probabilistic model, estimating the average number of clusters so aggregated, provides a qualitatively correct description of the carrier density in reduced ITO as a function of Sn doping level.

  13. Comprehensive solutions to the Bloch equations and dynamical models for open two-level systems

    NASA Astrophysics Data System (ADS)

    Skinner, Thomas E.

    2018-01-01

    The Bloch equation and its variants constitute the fundamental dynamical model for arbitrary two-level systems. Many important processes, including those in more complicated systems, can be modeled and understood through the two-level approximation. It is therefore of widespread relevance, especially as it relates to understanding dissipative processes in current cutting-edge applications of quantum mechanics. Although the Bloch equation has been the subject of considerable analysis in the 70 years since its inception, there is still, perhaps surprisingly, significant work that can be done. This paper extends the scope of previous analyses. It provides a framework for more fully understanding the dynamics of dissipative two-level systems. A solution is derived that is compact, tractable, and completely general, in contrast to previous results. Any solution of the Bloch equation depends on three roots of a cubic polynomial that are crucial to the time dependence of the system. The roots are typically only sketched out qualitatively, with no indication of their dependence on the physical parameters of the problem. Degenerate roots, which modify the solutions, have been ignored altogether. Here the roots are obtained explicitly in terms of a single real-valued root that is expressed as a simple function of the system parameters. For the conventional Bloch equation, a simple graphical representation of this root is presented that makes evident the explicit time dependence of the system for each point in the parameter space. Several intuitive, visual models of system dynamics are developed. A Euclidean coordinate system is identified in which any generalized Bloch equation is separable, i.e., the sum of commuting rotation and relaxation operators. The time evolution in this frame is simply a rotation followed by relaxation at modified rates that play a role similar to the standard longitudinal and transverse rates. These rates are functions of the applied field, which provides information towards control of the dissipative process. The Bloch equation also describes a system of three coupled harmonic oscillators, providing additional perspective on dissipative systems.
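
    For reference, the conventional Bloch equation whose cubic characteristic roots are analyzed above has the standard textbook form, with M₀ the equilibrium magnetization and T₁, T₂ the longitudinal and transverse relaxation times; the generalized equations treated in the paper extend this structure:

        \frac{d\mathbf{M}}{dt} = \gamma\,\mathbf{M}\times\mathbf{B}
            - \frac{M_x\,\hat{\mathbf{x}} + M_y\,\hat{\mathbf{y}}}{T_2}
            - \frac{M_z - M_0}{T_1}\,\hat{\mathbf{z}}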

  14. A 360° Vision for Virtual Organizations Characterization and Modelling: Two Intentional Level Aspects

    NASA Astrophysics Data System (ADS)

    Priego-Roche, Luz-María; Rieu, Dominique; Front, Agnès

    Nowadays, organizations aiming to be successful in an increasingly competitive market tend to group together into virtual organizations. Designing the information system (IS) of such virtual organizations on the basis of the ISs of those participating is a real challenge. The IS of a virtual organization plays an important role in the collaboration and cooperation of the participant organizations and in reaching their common goal. This article proposes criteria allowing virtual organizations to be identified and classified at an intentional level, as well as the information necessary for designing the organizations' IS. Instantiation of the criteria for a specific virtual organization and its participants will allow simple graphical models to be generated in a modelling tool. The models will be used as bases for the IS design at the organizational and operational levels. The approach is illustrated by the example of the virtual organization UGRT (a regional stockbreeders union in Tabasco, Mexico).

  15. Accurate calculation and modeling of the adiabatic connection in density functional theory

    NASA Astrophysics Data System (ADS)

    Teale, A. M.; Coriani, S.; Helgaker, T.

    2010-04-01

    Using a recently implemented technique for the calculation of the adiabatic connection (AC) of density functional theory (DFT) based on Lieb maximization with respect to the external potential, the AC is studied for atoms and molecules containing up to ten electrons: the helium isoelectronic series, the hydrogen molecule, the beryllium isoelectronic series, the neon atom, and the water molecule. The calculation of AC curves by Lieb maximization at various levels of electronic-structure theory is discussed. For each system, the AC curve is calculated using Hartree-Fock (HF) theory, second-order Møller-Plesset (MP2) theory, coupled-cluster singles-and-doubles (CCSD) theory, and coupled-cluster singles-doubles-perturbative-triples [CCSD(T)] theory, expanding the molecular orbitals and the effective external potential in large Gaussian basis sets. The HF AC curve includes a small correlation-energy contribution in the context of DFT, arising from orbital relaxation as the electron-electron interaction is switched on under the constraint that the wave function is always a single determinant. The MP2 and CCSD AC curves recover the bulk of the dynamical correlation energy and their shapes can be understood in terms of a simple energy model constructed from a consideration of the doubles-energy expression at different interaction strengths. Differentiation of this energy expression with respect to the interaction strength leads to a simple two-parameter doubles model (AC-D) for the AC integrand (and hence the correlation energy of DFT) as a function of the interaction strength. The structure of the triples-energy contribution is considered in a similar fashion, leading to a quadratic model for the triples correction to the AC curve (AC-T). From a consideration of the structure of a two-level configuration-interaction (CI) energy expression of the hydrogen molecule, a simple two-parameter CI model (AC-CI) is proposed to account for the effects of static correlation on the AC. When parametrized in terms of the same input data, the AC-CI model offers improved performance over the corresponding AC-D model, which is shown to be the lowest-order contribution to the AC-CI model. The utility of the accurately calculated AC curves for the analysis of standard density functionals is demonstrated for the BLYP exchange-correlation functional and the interaction-strength-interpolation (ISI) model AC integrand. From the results of this analysis, we investigate the performance of our proposed two-parameter AC-D and AC-CI models when a simple density functional for the AC at infinite interaction strength is employed in place of information at the fully interacting point. The resulting two-parameter correlation functionals offer a qualitatively correct behavior of the AC integrand with much improved accuracy over previous attempts. The AC integrands in the present work are recommended as a basis for further work, generating functionals that avoid spurious error cancellations between exchange and correlation energies and give good accuracy for the range of densities and types of correlation contained in the systems studied here.

  16. On measuring community participation in research.

    PubMed

    Khodyakov, Dmitry; Stockdale, Susan; Jones, Andrea; Mango, Joseph; Jones, Felica; Lizaola, Elizabeth

    2013-06-01

    Active participation of community partners in research aspects of community-academic partnered projects is often assumed to have a positive impact on the outcomes of such projects. The value of community engagement in research, however, cannot be empirically determined without good measures of the level of community participation in research activities. Based on our recent evaluation of community-academic partnered projects centered around behavioral health issues, this article uses semistructured interview and survey data to outline two complementary approaches to measuring the level of community participation in research: a "three-model" approach that differentiates between the levels of community participation and a Community Engagement in Research Index (CERI) that offers a multidimensional view of community engagement in the research process. The primary goal of this article is to present and compare these approaches, discuss their strengths and limitations, summarize the lessons learned, and offer directions for future research. We find that whereas the three-model approach is a simple measure of the perception of community participation in research activities, CERI allows for a more nuanced understanding by capturing multiple aspects of such participation. Although additional research is needed to validate these measures, our study makes a significant contribution by illustrating the complexity of measuring community participation in research and the lack of reliability in simple scores offered by the three-model approach.

  17. Testing the uniqueness of mass models using gravitational lensing

    NASA Astrophysics Data System (ADS)

    Walls, Levi; Williams, Liliya L. R.

    2018-06-01

    The positions of images produced by the gravitational lensing of background sources provide insight into lens-galaxy mass distributions. Simple elliptical mass density profiles do not agree well with observations of the population of known quads. It has been shown that the most promising way to reconcile this discrepancy is via perturbations away from purely elliptical mass profiles, by assuming two superimposed, somewhat misaligned mass distributions: one is dark matter (DM), the other is a stellar distribution. In this work, we investigate whether mass modelling of individual lenses can reveal if the lenses have this type of complex structure, or a simpler elliptical structure. In other words, we test mass model uniqueness, or how well an extended source lensed by a non-trivial mass distribution can be modeled by a simple elliptical mass profile. We used the publicly available lensing software, Lensmodel, to generate and numerically model gravitational lenses and "observed" image positions. We then compared "observed" and modeled image positions via the root mean square (RMS) of their difference. We report that, in most cases, the RMS is ≤ 0.05″ when averaged over an extended source. Thus, we show it is possible to fit a smooth mass model to a system that contains a stellar component with varying levels of misalignment with a DM component, and hence mass modelling cannot differentiate between simple elliptical and more complex lenses.

  18. Doubly self-consistent field theory of grafted polymers under simple shear in steady state.

    PubMed

    Suo, Tongchuan; Whitmore, Mark D

    2014-03-21

    We present a generalization of the numerical self-consistent mean-field theory of polymers to the case of grafted polymers under simple shear. The general theoretical framework is presented, and then applied to three different chain models: rods, Gaussian chains, and finitely extensible nonlinear elastic (FENE) chains. The approach is self-consistent at two levels. First, for any flow field, the polymer density profile and effective potential are calculated self-consistently in a manner similar to the usual self-consistent field theory of polymers, except that the calculation is inherently two-dimensional even for a laterally homogeneous system. Second, through the use of a modified Brinkman equation, the flow field and the polymer profile are made self-consistent with respect to each other. For all chain models, we find that reasonable levels of shear cause the chains to tilt, but it has very little effect on the overall thickness of the polymer layer, causing a small decrease for rods, and an increase of no more than a few percent for the Gaussian and FENE chains. Using the FENE model, we also probe the individual bond lengths, bond correlations, and bond angles along the chains, the effects of the shear on them, and the solvent and bonded stress profiles. We find that the approximations needed within the theory for the Brinkman equation affect the bonded stress, but none of the other quantities.

  19. Rubber friction and tire dynamics.

    PubMed

    Persson, B N J

    2011-01-12

    We propose a simple rubber friction law, which can be used, for example, in models of tire (and vehicle) dynamics. The friction law is tested by comparing numerical results to the full rubber friction theory (Persson 2006 J. Phys.: Condens. Matter 18 7789). Good agreement is found between the two theories. We describe a two-dimensional (2D) tire model which combines the rubber friction model with a simple mass-spring description of the tire body. The tire model is very flexible and can be used to accurately calculate μ-slip curves (and the self-aligning torque) for braking and cornering or combined motion (e.g. braking during cornering). We present numerical results which illustrate the theory. Simulations of anti-blocking system (ABS) braking are performed using two simple control algorithms.

  20. A simple mathematical model to predict sea surface temperature over the northwest Indian Ocean

    NASA Astrophysics Data System (ADS)

    Noori, Roohollah; Abbasi, Mahmud Reza; Adamowski, Jan Franklin; Dehghani, Majid

    2017-10-01

    A novel and simple mathematical model was developed in this study to enhance the capacity of a reduced-order model based on eigenvectors (RMEV) to predict sea surface temperature (SST) in the northwest portion of the Indian Ocean, including the Persian and Oman Gulfs and the Arabian Sea. Developed using only the first two of 12,416 possible modes, the enhanced RMEV closely matched observed daily optimum interpolation SST (DOISST) values. The spatial distribution of the first mode indicated that the greatest variations in DOISST occurred in the Persian Gulf. Also, the slightly increasing trend in the temporal component of the first mode observed in the study area over the last 34 years properly reflected the impact of climate change and rising DOISST. Given its simplicity and high level of accuracy, the enhanced RMEV can be applied to forecast DOISST in oceans, a task for which the poor forecasting performance and large computational time of other numerical models may make them unsuitable.
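
    Truncating a field to its leading eigenvector modes is conveniently done with an SVD; the following sketch shows the idea of reconstructing a field from its first two modes. The array shape, names and random data are placeholders, not the DOISST inputs used in the study (whose 12,416 modes correspond to its grid points).

        # Hedged sketch of an eigenvector-based reduced-order reconstruction
        # (RMEV-style): keep only the two leading SVD modes of a data matrix.
        # The synthetic array below stands in for the real SST field.
        import numpy as np

        rng = np.random.default_rng(0)
        sst = rng.normal(size=(1200, 300))     # (grid points, daily snapshots)

        mean = sst.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(sst - mean, full_matrices=False)

        k = 2                                  # first two modes, as in the study
        sst_k = mean + U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

        explained = np.sum(s[:k]**2) / np.sum(s**2)
        print("variance captured by first two modes:", explained)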

  1. A simple model to estimate the impact of sea-level rise on platform beaches

    NASA Astrophysics Data System (ADS)

    Taborda, Rui; Ribeiro, Mónica Afonso

    2015-04-01

    Estimates of future beach evolution in response to sea-level rise are needed to assess coastal vulnerability. A research gap is identified in providing adequate predictive methods to use for platform beaches. This work describes a simple model to evaluate the effects of sea-level rise on platform beaches that relies on the conservation of beach sand volume and assumes an invariant beach profile shape. In closed systems, when compared with the Inundation Model, results show larger retreats; the differences are higher for beaches with wide berms and when the shore platform develops at shallow depths. The application of the proposed model to Cascais (Portugal) beaches, using 21st century sea-level rise scenarios, shows that there will be a significant reduction in beach width.
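
    For orientation only: conserving beach sand volume under an invariant profile shape is the same reasoning that yields the classical Bruun rule, in which a sea-level rise S over an active profile of horizontal width L, closure depth h and berm height B produces a shoreline retreat

        R = \frac{S\,L}{B + h}

    The platform-beach model proposed here necessarily differs in detail, since the shore platform truncates the active profile, so this expression is a reference point rather than the authors' equation.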

  2. A distributed Clips implementation: dClips

    NASA Technical Reports Server (NTRS)

    Li, Y. Philip

    1993-01-01

    A distributed version of the Clips language, dClips, was implemented on top of two existing generic distributed messaging systems to show that: (1) it is easy to create a coarse-grained parallel programming environment out of an existing language if a high level messaging system is used; and (2) the computing model of a parallel programming environment can be changed easily if we change the underlying messaging system. dClips processes were first connected with a simple master-slave model. A client-server model with intercommunicating agents was later implemented. The concept of service broker is being investigated.

  3. Distributed feature binding in the auditory modality: experimental evidence toward reconciliation of opposing views on the basis of mismatch negativity and behavioral measures.

    PubMed

    Chernyshev, Boris V; Bryzgalov, Dmitri V; Lazarev, Ivan E; Chernysheva, Elena G

    2016-08-03

    Current understanding of feature binding remains controversial. Studies involving mismatch negativity (MMN) measurement suggest a low level of binding, whereas behavioral experiments suggest a higher level. We examined the possibility that the two levels of feature binding coexist and may be shown within one experiment. The electroencephalogram was recorded while participants were engaged in an auditory two-alternative choice task, which was a combination of the oddball and condensation tasks. Two types of deviant target stimuli were used: complex stimuli, which required feature conjunction to be identified, and simple stimuli, which differed from standard stimuli in a single feature. Two behavioral outcomes, correct responses and errors, were analyzed separately. Responses to complex stimuli were slower and less accurate than responses to simple stimuli. MMN was prominent and of similar amplitude for simple and complex stimuli, even though the respective stimuli differed from standards in one and two features. Errors in response to complex stimuli only were associated with decreased MMN amplitude. P300 amplitude was greater for complex stimuli than for simple stimuli. Our data are compatible with the explanation that feature binding in the auditory modality depends on two concurrent levels of processing. We speculate that the earlier level, related to MMN generation, is an essential and critical stage. Yet a later analysis is also carried out, affecting P300 amplitude and response time. The current findings provide resolution to conflicting views on the nature of feature binding and show that feature binding is a distributed multilevel process.

  4. Simple Spreadsheet Models For Interpretation Of Fractured Media Tracer Tests

    EPA Science Inventory

    An analysis of a gas-phase partitioning tracer test conducted through fractured media is discussed within this paper. The analysis employed matching eight simple mathematical models to the experimental data to determine transport parameters. All of the models tested: two porous...

  5. The initial establishment and epithelial morphogenesis of the esophagus: a new model of tracheal–esophageal separation and transition of simple columnar into stratified squamous epithelium in the developing esophagus

    PubMed Central

    Que, Jianwen

    2016-01-01

    The esophagus and trachea are tubular organs that initially share a single common lumen in the anterior foregut. Several models have been proposed to explain how this single-lumen developmental intermediate generates two tubular organs. However, new evidence suggests that these models are not comprehensive. I will first briefly review these models and then propose a novel ‘splitting and extension’ model based on our in vitro modeling of the foregut separation process. Signaling molecules (e.g., SHHs, WNTs, BMPs) and transcription factors (e.g., NKX2.1 and SOX2) are critical for the separation of the foregut. Intriguingly, some of these molecules continue to play essential roles during the transition of simple columnar into stratified squamous epithelium in the developing esophagus, and they are also closely involved in epithelial maintenance in the adults. Alterations in the levels of these molecules have been associated with the initiation and progression of several esophageal diseases and cancer in adults. PMID:25727889

  6. A Simple Double-Source Model for Interference of Capillaries

    ERIC Educational Resources Information Center

    Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua

    2012-01-01

    A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aron-Dine, S.; Pomrehn, G. S.; Pribram-Jones, A.

    Two quaternary Heusler alloys, equiatomic CuNiMnAl and CuNiMnSn, are studied using density functional theory to understand their tendency for atomic disorder on the lattice and the magnetic effects of disorder. Disordered structures with antisite defects of atoms of the same and different sublattices are considered, with the level of atomic disorder ranging from 3% to 25%. Formation energies and magnetic moments are calculated relative to the ordered ground state and combined with a simple thermodynamical model to estimate temperature effects. We predict the relative levels of disordering in the two equiatomic alloys with good correlation to experimental x-ray diffraction results. In conclusion, the effect of swaps involving Mn is also discussed.

  8. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
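
    A minimal sketch of the kind of enzyme-complex rate equations described, assuming a single repair pathway: DSBs (S) bind a free repair enzyme (E) to form a complex (C) that resolves at a fixed rate. All rate constants and initial values are illustrative, not the fitted values from the paper. The limited enzyme pool is what produces the apparent fast and slow rejoining components mentioned in the results.

        # Hedged sketch of a DSB-repair enzyme-complex kinetic scheme:
        # S + E <-> C -> repaired. Rates and initial values are assumed.
        import numpy as np
        from scipy.integrate import odeint

        k1, km1, k2 = 1.0, 0.1, 0.5        # binding, unbinding, processing rates

        def rates(y, t):
            S, E, C = y
            form = k1 * S * E - km1 * C    # net complex formation
            return [-form, -form + k2 * C, form - k2 * C]

        t = np.linspace(0.0, 20.0, 200)
        S0, E0, C0 = 10.0, 1.0, 0.0        # initial DSBs, free enzyme, complexes
        sol = odeint(rates, [S0, E0, C0], t)
        print("unrejoined DSBs at t = 20:", sol[-1, 0])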

  9. Modeling molecular mechanisms in the axon

    NASA Astrophysics Data System (ADS)

    de Rooij, R.; Miller, K. E.; Kuhl, E.

    2017-03-01

    Axons are living systems that display highly dynamic changes in stiffness, viscosity, and internal stress. However, the mechanistic origin of these phenomenological properties remains elusive. Here we establish a computational mechanics model that interprets cellular-level characteristics as emergent properties from molecular-level events. We create an axon model of discrete microtubules, which are connected to neighboring microtubules via discrete crosslinking mechanisms that obey a set of simple rules. We explore two types of mechanisms: passive and active crosslinking. Our passive and active simulations suggest that the stiffness and viscosity of the axon increase linearly with the crosslink density, and that both are highly sensitive to the crosslink detachment and reattachment times. Our model explains how active crosslinking with dynein motors generates internal stresses and actively drives axon elongation. We anticipate that our model will allow us to probe a wide variety of molecular phenomena—both in isolation and in interaction—to explore emergent cellular-level features under physiological and pathological conditions.

  10. A Model for Assessing Reflective Practices in Pharmacy Education

    PubMed Central

    Bosnic-Anticevich, Sinthia; Lonie, John M.; Smith, Lorraine

    2015-01-01

    Objective. To research the literature and examine assessment strategies used in health education that measure reflection levels and to identify assessment strategies for use in pharmacy education. Methods. A simple systematic review using a 5-step approach was employed to locate peer-reviewed articles addressing assessment strategies in health education from the last 20 years. Results. The literature search identified assessment strategies and rubrics used in health education for assessing levels of reflection. There is a significant gap in the literature regarding reflective rubric use in pharmacy education. Conclusion. Two assessment strategies to assess levels of reflection, including a reflective rubric tailored for pharmacy education, are proposed. PMID:26690718

  11. National Freight Demand Modeling - Bridging the Gap between Freight Flow Statistics and U.S. Economic Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, Shih-Miao; Hwang, Ho-Ling

    2007-01-01

    This paper describes the development of national freight demand models for 27 industry sectors covered by the 2002 Commodity Flow Survey. It postulates that national freight demands are consistent with U.S. business patterns. Furthermore, the study hypothesizes that the flow of goods, which makes up the national production processes of industries, is coherent with the information described in the 2002 Annual Input-Output Accounts developed by the Bureau of Economic Analysis. The model estimation framework hinges largely on the assumption that a relatively simple relationship exists between freight production/consumption and business patterns for each industry defined by the three-digit North American Industry Classification System (NAICS) industry codes. The national freight demand model for each selected industry sector consists of two models: a freight generation model and a freight attraction model. Thus, a total of 54 simple regression models were estimated under this study. Preliminary results indicated promising freight generation and freight attraction models. Among all models, only four had an R² value lower than 0.70. With additional modeling effort, these freight demand models could be enhanced to allow transportation analysts to assess the regional economic impacts associated with temporary loss of transportation services on U.S. transportation network infrastructures. Using such freight demand models and available U.S. business forecasts, future national freight demands could be forecast within certain degrees of accuracy. These freight demand models could also enable transportation analysts to further disaggregate the CFS state-level origin-destination tables to the county or zip code level.
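
    A sketch of one of the simple per-industry regressions described (freight generated versus a business-pattern measure). The numbers are synthetic stand-ins, and the choice of employment as the explanatory variable is an assumption for illustration.

        # Hedged sketch of a single freight-generation regression with an
        # R² check; data are made-up stand-ins for CFS/BEA inputs.
        import numpy as np

        employment = np.array([1.2, 3.4, 2.1, 5.6, 4.3])   # business-pattern proxy
        tons = np.array([10.5, 30.2, 18.9, 52.0, 39.8])    # freight generated

        A = np.vstack([employment, np.ones_like(employment)]).T
        coef, *_ = np.linalg.lstsq(A, tons, rcond=None)
        slope, intercept = coef

        pred = A @ coef
        r2 = 1 - np.sum((tons - pred)**2) / np.sum((tons - tons.mean())**2)
        print(f"tons ≈ {slope:.2f}·employment + {intercept:.2f}, R² = {r2:.3f}")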

  12. Electrical conductivity of metal powders under pressure

    NASA Astrophysics Data System (ADS)

    Montes, J. M.; Cuevas, F. G.; Cintas, J.; Urban, P.

    2011-12-01

    A model for calculating the electrical conductivity of a compressed powder mass consisting of oxide-coated metal particles has been derived. A theoretical tool previously developed by the authors, the so-called 'equivalent simple cubic system', was used in the model deduction. This tool is based on relating the actual powder system to an equivalent one consisting of deforming spheres packed in a simple cubic lattice, which is much easier to examine. The proposed model relates the effective electrical conductivity of the powder mass under compression to its level of porosity. Other physically measurable parameters in the model are the conductivities of the metal and oxide constituting the powder particles, their radii, the mean thickness of the oxide layer and the tap porosity of the powder. Two additional parameters controlling the effect of the descaling of the particle oxide layer were empirically introduced. The proposed model was experimentally verified by measurements of the electrical conductivity of aluminium, bronze, iron, nickel and titanium powders under pressure. The consistency between theoretical predictions and experimental results was reasonably good in all cases.

  13. Analysis of creative mathematic thinking ability in problem based learning model based on self-regulation learning

    NASA Astrophysics Data System (ADS)

    Munahefi, D. N.; Waluya, S. B.; Rochmad

    2018-03-01

    The purpose of this research was to identify the effectiveness of a Problem Based Learning (PBL) model based on Self-Regulated Learning (SRL) on mathematical creative thinking ability, and to analyze the mathematical creative thinking of high school students solving mathematical problems. The population of this study was grade X students of SMA N 3 Klaten. The research method was sequential explanatory. In the quantitative stage, two classes were selected by simple random sampling: an experimental class taught with the PBL model based on SRL, and a control class taught with an expository model. In the qualitative stage, samples were selected by non-probability sampling, with three students chosen from each of the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. Students at the low academic level achieved the fluency and flexibility aspects. Students at the medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well developed. Only students at the high academic level reached the originality aspect.

  14. System-level modeling of acetone-butanol-ethanol fermentation.

    PubMed

    Liao, Chen; Seo, Seung-Oh; Lu, Ting

    2016-05-01

    Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation.

  15. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  16. Two-photon absorption resonance in 3-(1,1-dicyanoethenyl)-1-phenyl-4,5-dihydro-1H-pyrazole (DCNP)

    NASA Astrophysics Data System (ADS)

    Miniewicz, Andrzej; Delysse, Stéphane; Nunzi, Jean-Michel; Kajzar, François

    1998-04-01

    A two-photon absorption spectrum of 3-(1,1-dicyanoethenyl)-1-phenyl-4,5-dihydro-1H-pyrazole (DCNP) in tetrahydrofuran solution has been studied by the Kerr ellipsometry technique. The spectral shape and amplitude of the imaginary part of the dominant molecular hyperpolarizability term Im(γ_XXXX) is compared with the relevant linear absorption spectrum within a simple two-level model. Agreement between the measured γ_XXXX = 2.0×10⁻⁴⁸ m⁵ V⁻² and calculated γ_XXXX = (1.2-1.5)×10⁻⁴⁸ m⁵ V⁻² two-photon absorption molecular hyperpolarizabilities in the vicinity of the two-photon resonance transition is satisfactory.

  17. Stability of procalcitonin at room temperature.

    PubMed

    Milcent, Karen; Poulalhon, Claire; Fellous, Christelle Vauloup; Petit, François; Bouyer, Jean; Gajdos, Vincent

    2014-01-01

    The aim was to assess procalcitonin (PCT) stability after two days of storage at room temperature. Samples were collected from febrile children aged 7 to 92 days and were rapidly frozen after sampling. PCT levels were measured twice after thawing: immediately (named y) and 48 hours later, after storage at room temperature (named x). PCT values were described with medians and interquartile ranges, or by categorizing them into classes with thresholds of 0.25, 0.5, and 2 ng/mL. The relationship between x and y PCT levels was analyzed using fractional polynomials in order to predict the PCT value immediately after thawing (named y') from x. A significant decrease in PCT values was observed after 48 hours of storage at room temperature, both as a 30% lowering of the median (p < 0.001) and as a categorical variable (p < 0.001). The relationship between x and y can be accurately modeled with a simple linear model: y = 1.37x (R² = 0.99). The median of the predicted PCT values y' was quantitatively very close to the median of y, and the distributions of y and y' across categories were very similar and not statistically different. PCT levels noticeably decrease after 48 hours of storage at room temperature. It is possible to predict effective PCT values from the values after 48 hours of storage at room temperature with a simple statistical model.
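
    The reported correction y = 1.37x is a through-the-origin linear fit; a minimal sketch with made-up paired measurements (the study's actual data are not reproduced here):

        # Hedged sketch: least-squares fit of the scale factor in a
        # no-intercept model y = β·x; the paired values are illustrative.
        import numpy as np

        x = np.array([0.20, 0.45, 1.10, 2.50, 6.00])   # after 48 h at room temp
        y = np.array([0.28, 0.60, 1.52, 3.40, 8.20])   # immediately after thawing

        beta = np.sum(x * y) / np.sum(x * x)           # least squares through origin
        r2 = 1 - np.sum((y - beta * x)**2) / np.sum((y - y.mean())**2)
        print(f"y ≈ {beta:.2f}·x (R² = {r2:.2f})")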

  18. Audible sonar images generated with proprioception for target analysis.

    PubMed

    Kuc, Roman B

    2017-05-01

    Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.

  19. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
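
    The random walk graph kernel mentioned here has a standard geometric form on the direct-product graph of the two inputs; the sketch below implements that textbook version, which may differ from the exact variant used in GRAPE.

        # Hedged sketch of a geometric random-walk graph kernel: similarity of
        # two adjacency matrices via walks on their Kronecker-product graph.
        # The decay lam must satisfy lam < 1/ρ(A1⊗A2) for convergence.
        import numpy as np

        def random_walk_kernel(A1, A2, lam=0.1):
            """Sum over shared walks of all lengths, geometrically weighted."""
            Ax = np.kron(A1, A2)                     # walks on the product graph
            n = Ax.shape[0]
            K = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
            return np.ones(n) @ K                    # sum over walk start/end pairs

        A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path graph
        A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # triangle graph
        print(random_walk_kernel(A1, A2))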

  20. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  1. Doubly self-consistent field theory of grafted polymers under simple shear in steady state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suo, Tongchuan; Whitmore, Mark D., E-mail: mark-whitmore@umanitoba.ca

    2014-03-21

    We present a generalization of the numerical self-consistent mean-field theory of polymers to the case of grafted polymers under simple shear. The general theoretical framework is presented, and then applied to three different chain models: rods, Gaussian chains, and finitely extensible nonlinear elastic (FENE) chains. The approach is self-consistent at two levels. First, for any flow field, the polymer density profile and effective potential are calculated self-consistently in a manner similar to the usual self-consistent field theory of polymers, except that the calculation is inherently two-dimensional even for a laterally homogeneous system. Second, through the use of a modified Brinkman equation, the flow field and the polymer profile are made self-consistent with respect to each other. For all chain models, we find that reasonable levels of shear cause the chains to tilt, but it has very little effect on the overall thickness of the polymer layer, causing a small decrease for rods, and an increase of no more than a few percent for the Gaussian and FENE chains. Using the FENE model, we also probe the individual bond lengths, bond correlations, and bond angles along the chains, the effects of the shear on them, and the solvent and bonded stress profiles. We find that the approximations needed within the theory for the Brinkman equation affect the bonded stress, but none of the other quantities.

  2. Architectural-level power estimation and experimentation

    NASA Astrophysics Data System (ADS)

    Ye, Wu

    With the emergence of a plethora of embedded and portable applications and ever-increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath in both 0.35 µm and 0.8 µm technology and can execute the integer subset of the instruction set of SimpleScalar. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated with HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce datapath energy consumption by 18-36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the on-chip instruction cache data bus switching activity. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55-16.90%. A quantitative evaluation was conducted of the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide valuable insight for designers developing future power-aware compilation frameworks for embedded systems.
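
    The transition-sensitive idea behind such energy models is that dynamic bus energy scales with the number of bit flips between successive bus values; a toy sketch, assuming a per-transition energy of ½·C·V² with capacitance and voltage values chosen only for illustration (this is the general principle, not SimplePower's actual tables):

        # Hedged sketch: count bit transitions on a bus trace and convert
        # them to energy with an assumed ½·C·V² per flip.
        def bus_energy(values, c_bus=1e-12, vdd=3.3):
            """Approximate dynamic energy (J) for a sequence of bus words."""
            flips = sum(bin(a ^ b).count("1")          # Hamming distance per cycle
                        for a, b in zip(values, values[1:]))
            return 0.5 * flips * c_bus * vdd ** 2      # ½·C·V² per bit transition

        trace = [0x00000000, 0xFFFF0000, 0xFFFF00FF]   # hypothetical 32-bit trace
        print(f"{bus_energy(trace):.3e} J")

    Register relabeling reduces exactly this flip count by choosing register encodings that make consecutive instruction words differ in fewer bits.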

  3. Mathematical Modeling for Scrub Typhus and Its Implications for Disease Control.

    PubMed

    Min, Kyung Duk; Cho, Sung Il

    2018-03-19

    The incidence rate of scrub typhus has been increasing in the Republic of Korea. Previous studies have suggested that this trend may have resulted from the effects of climate change on the transmission dynamics among vectors and hosts, but a clear explanation of the process is still lacking. In this study, we applied mathematical models to explore the potential factors that influence the epidemiology of tsutsugamushi disease. We developed mathematical models of ordinary differential equations including human, rodent and mite groups. Two models, a simple and a complex one, were developed, and all parameters employed in the models were adopted from previous articles that represent epidemiological situations in the Republic of Korea. The simulation results showed that the force of infection at the equilibrium state under the simple model was 0.236 (per 100,000 person-months), and that in the complex model was 26.796 (per 100,000 person-months). Sensitivity analyses indicated that the most influential parameters were the rodent and mite populations and the contact rate between them for the simple model, and trans-ovarian transmission for the complex model. In both models, the contact rate between humans and mites was more influential than the mortality rates of the rodent and mite groups. The results indicate that the effect of controlling either rodents or mites alone could be limited, and that reducing the contact rate between humans and mites is a more practical and effective strategy. However, the current level of control would be insufficient relative to the growing mite population.
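
    The paper's equations are not reproduced in this abstract; purely as an illustrative sketch of an ODE structure with human, rodent and mite groups, one might write something like the following, where the compartment structure and every rate constant are assumptions chosen only to make the example run.

        # Hedged sketch of a minimal vector-borne ODE structure: susceptible
        # humans infected by mites, mites infected via a fixed rodent pool.
        # All parameters below are illustrative assumptions.
        import numpy as np
        from scipy.integrate import odeint

        beta_hm = 2e-7     # mite-to-human transmission rate (assumed)
        beta_rm = 1e-6     # rodent-to-mite transmission rate (assumed)
        gamma = 1 / 14     # human recovery rate per day (assumed)
        mu_m = 1 / 60      # mite death rate per day (assumed)
        N_m, I_r = 1e5, 50.0   # total mites, infected rodents (assumed constants)

        def deriv(y, t):
            S_h, I_h, I_m = y
            foi = beta_hm * S_h * I_m              # force of infection on humans
            new_m = beta_rm * (N_m - I_m) * I_r    # mites infected by rodents
            return [-foi, foi - gamma * I_h, new_m - mu_m * I_m]

        t = np.linspace(0.0, 365.0, 366)
        sol = odeint(deriv, [1e5, 0.0, 10.0], t)
        print("infected humans at day 365:", sol[-1, 1])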

  4. Energy spectrum inverse problem of q-deformed harmonic oscillator and entanglement of composite bosons

    NASA Astrophysics Data System (ADS)

    Sang, Nguyen Anh; Thu Thuy, Do Thi; Loan, Nguyen Thi Ha; Lan, Nguyen Tri; Viet, Nguyen Ai

    2017-06-01

    Using the simple deformed three-level model (D3L model) proposed in our earlier work, we study the entanglement problem of composite bosons. Considering that the first three energy levels are known, we can obtain two energy separations and define the level deformation parameter δ. Using the connection between the q-deformed harmonic oscillator and a Morse-like anharmonic potential, the deformation parameter q can also be derived explicitly. As in Einstein's theory of special relativity, we introduce observer effects: an outside observer (looking from outside the studied system) and an inside observer (looking from inside the studied system). Corresponding to these observers, an outside entanglement entropy and an inside entanglement entropy are defined. As in the case of the Foucault pendulum in the problem of Earth's rotation, our deformed-energy-level investigation might be useful in predicting the effect of the environment outside a confined box.
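
    For context, a common convention for the q-deformed oscillator spectrum is shown below; the definition of δ from the two measured separations is our illustrative reading of the D3L construction, not necessarily the authors' exact definition:

        [n]_q = \frac{q^{n} - q^{-n}}{q - q^{-1}}, \qquad
        E_n = \frac{\hbar\omega}{2}\bigl([n]_q + [n+1]_q\bigr), \qquad
        \delta = \frac{E_2 - E_1}{E_1 - E_0} - 1

    For q → 1 the levels become equally spaced and δ → 0, so δ measures the anharmonicity that the Morse-like potential mapping exploits.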

  5. Predicting outcome in severe traumatic brain injury using a simple prognostic model.

    PubMed

    Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie

    2014-06-17

    Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI), which is beneficial as an aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. To develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting, all consecutive patients admitted for severe TBI at two level-one centres in Cape Town, South Africa, were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome on hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) as independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6-8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective prediction of outcome in severe TBI.
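
    To make concrete how the three reported odds ratios combine in a binary logistic model, here is a toy calculation; the intercept is a made-up placeholder (the abstract does not report one), so the probabilities printed are illustrative only.

        # Hedged sketch: combining published odds ratios in a logistic model.
        # The intercept is a hypothetical placeholder, not a reported value.
        import math

        log_odds = {
            "spo2_ok":   math.log(3.148),   # SpO₂ ≥ 90%
            "gcs_6_8":   math.log(5.108),   # GCS 6-8
            "pupils_ok": math.log(4.405),   # pupils bilaterally reactive
        }

        def p_good_outcome(spo2_ok, gcs_6_8, pupils_ok, intercept=-2.5):
            z = (intercept + log_odds["spo2_ok"] * spo2_ok
                 + log_odds["gcs_6_8"] * gcs_6_8
                 + log_odds["pupils_ok"] * pupils_ok)
            return 1 / (1 + math.exp(-z))   # logistic link

        print(f"all predictors favorable: {p_good_outcome(1, 1, 1):.2f}")
        print(f"none favorable:           {p_good_outcome(0, 0, 0):.2f}")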

  6. Theory and modelling of light-matter interactions in photonic crystal cavity systems coupled to quantum dot ensembles

    NASA Astrophysics Data System (ADS)

    Cartar, William K.

    Photonic crystal microcavity quantum dot lasers show promise as high quality-factor, low-threshold lasers that can be integrated on-chip, with tunable room-temperature operation. However, such semiconductor microcavity lasers are notoriously difficult to model in a self-consistent way and are primarily modelled by simplified rate-equation approximations, typically fit to experimental data, which limits investigations of their optimization and fundamental light-matter interaction processes. Moreover, simple cavity mode optical theory and rate equations have recently been shown to fail in explaining lasing threshold trends in triangular lattice photonic crystal cavities as a function of cavity size, and the potential impact of fabrication disorder is not well understood. In this thesis, we develop a simple but powerful numerical scheme for modelling the quantum dot active layer used for lasing in these photonic crystal cavity structures, as an ensemble of randomly positioned artificial two-level atoms. Each two-level atom is defined by optical Bloch equations solved by a quantum master equation that includes phenomenological pure dephasing and an incoherent pump rate that effectively models a multi-level gain system. Light-matter interactions of both passive and lasing structures are analyzed using simulation-defined tools and post-simulation Green function techniques. We implement an active layer ensemble of up to 24,000 statistically unique quantum dots in photonic crystal cavity simulations, using a self-consistent finite-difference time-domain method. This method has the distinct advantage of capturing effects such as dipole-dipole coupling and radiative decay, without the need for any phenomenological terms, since the time-domain solution self-consistently captures these effects. Our analysis demonstrates a powerful ability to connect with recent experimental trends, while remaining completely general in its set-up; for example, we do not invoke common approximations such as the rotating-wave or slowly-varying envelope approximations, and solve dynamics with zero a priori knowledge.

  7. Research on the properties and interactions of simple atomic and ionic systems

    NASA Technical Reports Server (NTRS)

    Novick, R.

    1972-01-01

    Simple ionic systems were studied, such as metastable autoionizing states of the negative He ion, two-photon decay spectrum of metastable He ion, optical excitation with low energy ions, and lifetime measurements of singly ionized Li and metastable He ion. Simple atomic systems were also investigated. Metastable autoionizing atomic energy levels in alkali elements were included, along with lifetime measurements of Cr-53, group 2A isotopes, and alkali metal atoms using level crossing and optical double resonance spectroscopy.

  8. Development of an algorithm to predict serum vitamin D levels using a simple questionnaire based on sunlight exposure.

    PubMed

    Vignali, Edda; Macchia, Enrico; Cetani, Filomena; Reggiardo, Giorgio; Cianferotti, Luisella; Saponaro, Federica; Marcocci, Claudio

    2017-01-01

    Sun exposure is the main determinant of vitamin D production. The aim of this study was to develop an algorithm to assess individual vitamin D status, independently of serum 25(OHD) measurement, using a simple questionnaire, mostly relying upon sunlight exposure, which might help select subjects requiring serum 25(OHD) measurement. Six hundred and twenty adult subjects living in a mountain village in Southern Italy, located at 954 m above the sea level and at a latitude of 40°50'11″76N, were asked to fill the questionnaire in two different periods of the year: August 2010 and March 2011. Seven predictors were considered: month of investigation, age, sex, BMI, average daily sunlight exposure, beach holidays in the past 12 months, and frequency of going outdoors. The statistical model assumes four classes of serum 25(OHD) concentrations: ≤10, 10-19.9, 20-29.9, and ≥30 ng/ml. The algorithm was developed using a two-step procedure. In Step 1, the linear regression equation was defined in 385 randomly selected subjects. In Step 2, the predictive ability of the regression model was tested in the remaining 235 subjects. Seasonality, daily sunlight exposure and beach holidays in the past 12 months accounted for 27.9, 13.5, and 6.4 % of the explained variance in predicting vitamin D status, respectively. The algorithm performed extremely well: 212 of 235 (90.2 %) subjects were assigned to the correct vitamin D status. In conclusion, our pilot study demonstrates that an algorithm to estimate the vitamin D status can be developed using a simple questionnaire based on sunlight exposure.
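
    A minimal sketch of the two-step procedure on simulated data: Step 1 fits a linear predictor of serum 25(OH)D on 385 subjects, and Step 2 bins the predictions for the remaining 235 subjects into the four status classes. The predictors and coefficients below are invented, not the Italian cohort values.

```python
# Hedged sketch of the questionnaire algorithm's two-step logic on toy data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 620
# Simplified predictors: month (August=1, March=0), daily sun hours, beach holidays
X = np.column_stack([rng.integers(0, 2, n), rng.uniform(0, 6, n), rng.integers(0, 2, n)])
true_25ohd = 12 + 8 * X[:, 0] + 2.0 * X[:, 1] + 4 * X[:, 2] + rng.normal(0, 5, n)

train, test = np.arange(385), np.arange(385, 620)     # Step 1 / Step 2 split
reg = LinearRegression().fit(X[train], true_25ohd[train])

bins = [10, 20, 30]                                   # <=10, 10-20, 20-30, >=30 ng/ml
predicted_class = np.digitize(reg.predict(X[test]), bins)
observed_class = np.digitize(true_25ohd[test], bins)
print("correctly classified:", np.mean(predicted_class == observed_class).round(3))
```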

  9. Particle-tracking analysis of contributing areas of public-supply wells in simple and complex flow systems, Cape Cod, Massachusetts

    USGS Publications Warehouse

    Barlow, Paul M.

    1997-01-01

    Steady-state, two- and three-dimensional, ground-water-flow models coupled with particle tracking were evaluated to determine their effectiveness in delineating contributing areas of wells pumping from stratified-drift aquifers of Cape Cod, Massachusetts. Several contributing areas delineated by use of the three-dimensional models do not conform to simple ellipsoidal shapes that are typically delineated by use of two-dimensional analytical and numerical modeling techniques and included discontinuous areas of the water table.

  10. Estimation of a Nonlinear Intervention Phase Trajectory for Multiple-Baseline Design Data

    ERIC Educational Resources Information Center

    Hembry, Ian; Bunuan, Rommel; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2015-01-01

    A multilevel logistic model for estimating a nonlinear trajectory in a multiple-baseline design is introduced. The model is applied to data from a real multiple-baseline design study to demonstrate interpretation of relevant parameters. A simple change-in-levels ("Levels") model and a model involving a quadratic function…

  11. Shape coexistence and the role of axial asymmetry in 72Ge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayangeakaa, A. D.; Janssens, R. F.; Wu, C. Y.

    2016-01-22

    The quadrupole collectivity of low-lying states and the anomalous behavior of the 0⁺₂ and 2⁺₃ levels in 72Ge are investigated via projectile multi-step Coulomb excitation with GRETINA and CHICO-2. A total of forty-six E2 and M1 matrix elements connecting fourteen low-lying levels were determined using the least-squares search code GOSIA. Evidence for triaxiality and shape coexistence, based on the model-independent shape invariants deduced from the Kumar–Cline sum rule, is presented. Moreover, these are interpreted using a simple two-state mixing model as well as multi-state mixing calculations carried out within the framework of the triaxial rotor model. Our results represent a significant milestone towards the understanding of the unusual structure of this nucleus.

  12. Effects of capillarity and microtopography on wetland specific yield

    USGS Publications Warehouse

    Sumner, D.M.

    2007-01-01

    Hydrologic models aid in describing water flows and levels in wetlands. Frequently, these models use a specific yield conceptualization to relate water flows to water level changes. Traditionally, a simple conceptualization of specific yield is used, composed of two constant values for above- and below-surface water levels and neglecting the effects of soil capillarity and land surface microtopography. The effects of capillarity and microtopography on specific yield were evaluated at three wetland sites in the Florida Everglades. The effect of capillarity on specific yield was incorporated based on the fillable pore space within a soil moisture profile at hydrostatic equilibrium with the water table. The effect of microtopography was based on areal averaging of topographically varying values of specific yield. The results indicate that a more physically based conceptualization of specific yield incorporating capillary and microtopographic considerations can be substantially different from the traditional two-part conceptualization, and from simpler conceptualizations incorporating only capillarity or only microtopography. For the sites considered, traditional estimates of specific yield could under- or overestimate the more physically based estimates by a factor of two or more. The results suggest that consideration of both capillarity and microtopography is important to the formulation of specific yield in physically based hydrologic models of wetlands. © 2007, The Society of Wetland Scientists.
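
    The two effects can be sketched numerically: capillarity enters through the equilibrium moisture profile (for a profile in hydrostatic equilibrium, the fillable pore space reduces to the saturated water content minus the moisture content at the land surface), and microtopography enters by areal averaging over a distribution of surface elevations. The van Genuchten parameters and relief statistics below are illustrative assumptions, not the Everglades calibration.

```python
# Sketch of capillarity + microtopography effects on specific yield.
import numpy as np

theta_s, theta_r, alpha, n = 0.85, 0.1, 2.0, 1.8   # peat-like VG parameters (assumed)
m = 1 - 1 / n

def sy_capillary(depth_to_wt):
    """Specific yield for an equilibrium profile with the water table at the
    given depth (m): saturated content minus surface moisture content."""
    if depth_to_wt <= 0:
        return 1.0                                  # ponded: open-water storage
    theta = theta_r + (theta_s - theta_r) * (1 + (alpha * depth_to_wt) ** n) ** (-m)
    return theta_s - theta

rng = np.random.default_rng(7)
surface = rng.normal(0.0, 0.15, 10000)              # microtopographic relief (m)
for wt in [-0.3, 0.0, 0.3]:                         # water level rel. to mean surface
    sy = np.mean([sy_capillary(z - wt) for z in surface])
    print(f"water level {wt:+.1f} m: effective Sy = {sy:.2f}")
```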

  13. Mapping an atlas of tissue-specific Drosophila melanogaster metabolomes by high resolution mass spectrometry.

    PubMed

    Chintapalli, Venkateswara R; Al Bratty, Mohammed; Korzekwa, Dominika; Watson, David G; Dow, Julian A T

    2013-01-01

    Metabolomics can provide exciting insights into organismal function, but most work on simple models has focussed on the whole-organism metabolome, thus missing the contributions of individual tissues. Comprehensive metabolite profiles for ten tissues from adult Drosophila melanogaster were obtained here by two chromatographic methods, a hydrophilic interaction (HILIC) method for polar metabolites and a lipid profiling method also based on HILIC, in combination with an Orbitrap Exactive instrument. Two hundred and forty-two polar metabolites were putatively identified in the various tissues, and 251 lipids were observed in positive ion mode and 61 in negative ion mode. Although many metabolites were detected in all tissues, every tissue showed characteristically abundant metabolites which could be rationalised against specific tissue functions. For example, the cuticle contained high levels of glutathione, reflecting a role in oxidative defence; the alimentary canal (like the vertebrate gut) had high levels of acylcarnitines for fatty acid metabolism, and the head contained high levels of ether lipids. The male accessory gland uniquely contained decarboxylated S-adenosylmethionine. These data thus both provide valuable insights into tissue function, and a reference baseline, compatible with the FlyAtlas.org transcriptomic resource, for further metabolomic analysis of this important model organism, for example in the modelling of human inborn errors of metabolism, aging or metabolic imbalances such as diabetes.

  14. Robot Control Based On Spatial-Operator Algebra

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan

    1992-01-01

    Method for mathematical modeling and control of robotic manipulators based on spatial-operator algebra providing concise representation and simple, high-level theoretical framework for solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms derived immediately from abstract spatial-operator expressions by inspection. Transition from abstract formulation through abstract solution to detailed implementation of specific algorithms to compute solution greatly simplified. Complicated dynamical problems like two cooperating robot arms solved more easily.

  15. Slit Effect of Common Ground Patterns in Affecting Cross-Talk Noise between Two Parallel Signal Traces on Printed Circuit Boards

    NASA Astrophysics Data System (ADS)

    Maeno, Tsuyoshi; Sakurai, Yukihiko; Unou, Takanori; Ichikawa, Kouji; Fujiwara, Osamu

    It is well known that electromagnetic (EM) disturbances in vehicle-mounted radios are mainly caused by conducted noise currents flowing through wiring harnesses from vehicle-mounted printed circuit boards (PCBs) with common ground patterns containing slits. To evaluate the noise current outflows from PCBs of this kind, we previously measured noise current outflows from four types of simple three-layer PCBs having two perpendicular signal traces and different ground patterns with/without slits. We showed that slits in a ground pattern allow conducted noise currents to flow out from PCBs, while the levels for the symmetric-slit ground type are smaller than those for the two asymmetric-slit ground types. In the present study, to further investigate the above finding, we fabricated six types of simple two-layer PCBs having two parallel signal traces and different ground patterns with/without slits, and measured the cross-talk noise between the traces. As a result, we found that ground patterns with slits perpendicular to the traces increase the cross-talk noise levels, which are larger by 19-42 dB than those for the ground pattern with no slits, while ground patterns with slits parallel to the traces can suppress the noise levels, which are slightly smaller, by 2.5-4.5 dB, compared to the no-slit ground pattern. These results were confirmed by FDTD simulation, and were also qualitatively explained with an equivalent bridge circuit model we previously proposed.

  16. Predicting charmonium and bottomonium spectra with a quark harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Norbury, J. W.; Badavi, F. F.; Townsend, L. W.

    1986-01-01

    The nonrelativistic quark model is applied to heavy (nonrelativistic) meson (two-body) systems to obtain sufficiently accurate predictions of the spin-averaged mass levels of the charmonium and bottomonium spectra as an example of the three-dimensional harmonic oscillator. The present calculations do not include any spin dependence; rather, mass values are averaged over different spins. Results for a charmed quark mass of 1500 MeV/c² show that the simple harmonic oscillator model provides good agreement with experimental values for the ³P states, and adequate agreement for the ³S₁ states.
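
    For readers who want the arithmetic, the 3-D harmonic oscillator gives E(n, l) = (2n + l + 3/2)ħω, so spin-averaged masses follow from M = 2m_q + E(n, l). The sketch below uses the quoted quark mass; the ħω value is an illustrative assumption, not the paper's fitted parameter.

```python
# Worked sketch of the 3-D harmonic oscillator level scheme for quarkonium:
# E(n, l) = (2n + l + 3/2) * hbar*omega, with M = 2*m_q + E.
m_q = 1500.0        # charmed quark mass, MeV/c^2 (from the abstract)
hw = 500.0          # oscillator quantum, MeV (assumed for illustration)

for n, l, label in [(0, 0, "1S"), (0, 1, "1P"), (1, 0, "2S"), (0, 2, "1D")]:
    E = (2 * n + l + 1.5) * hw
    print(f"{label}: M = {2 * m_q + E:.0f} MeV/c^2")
```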

  17. Ultrastrong Coupling Few-Photon Scattering Theory

    NASA Astrophysics Data System (ADS)

    Shi, Tao; Chang, Yue; García-Ripoll, Juan José

    2018-04-01

    We study the scattering of individual photons by a two-level system ultrastrongly coupled to a waveguide. The scattering is elastic for a broad range of couplings and can be described with an effective U(1)-symmetric Hamiltonian. This simple model allows the prediction of scattering resonance line shapes, validated up to α = 0.3, and close to the Toulouse point α = 1/2, where inelastic scattering becomes relevant. Our predictions model experiments with superconducting circuits [P. Forn-Díaz et al., Nat. Phys. 13, 39 (2017), 10.1038/nphys3905] and can be extended to study multiphoton scattering.

  19. Radiatively induced neutrino mass model with flavor dependent gauge symmetry

    NASA Astrophysics Data System (ADS)

    Lee, SangJong; Nomura, Takaaki; Okada, Hiroshi

    2018-06-01

    We study a radiative seesaw model at one-loop level with a flavor-dependent gauge symmetry U(1)_{μ-τ}, in which we consider bosonic dark matter. We also analyze the constraints from lepton flavor violations, muon g-2, the relic density of dark matter, and collider physics, and carry out a numerical analysis to search for the allowed parameter region that satisfies all the constraints and to investigate some predictions. Furthermore, we find that a simple but ad hoc hypothesis induces a specific two-zero texture for the inverse mass matrix, which provides several predictions such as a specific pattern of the Dirac CP phase.

  20. Structure of amplitude correlations in open chaotic systems

    NASA Astrophysics Data System (ADS)

    Ericson, Torleif E. O.

    2013-02-01

    The Verbaarschot-Weidenmüller-Zirnbauer (VWZ) model is believed to correctly represent the correlations of two S-matrix elements for an open quantum chaotic system, but the solution has considerable complexity and is presently only accessible numerically. Here a procedure is developed to deduce its features over the full range of the parameter space in a transparent and simple analytical form, preserving accuracy to a considerable degree. The bulk of the VWZ correlations are described by the Gorin-Seligman expression for the two-amplitude correlations of the Ericson-Gorin-Seligman model. The structure of the remaining correction factors for correlation functions is discussed with special emphasis on the role of the level correlation hole, both for inelastic and elastic correlations.

  1. Nuclear Checker Board Model

    NASA Astrophysics Data System (ADS)

    Lach, Theodore

    2017-01-01

    The Checkerboard Model (CBM) of the nucleus has been in the public domain for over 20 years. Over those years it has been described by nuclear and particle physicists as "cute", "the Bohr model of the nucleus" and "reminiscent of the Eightfold Way". It has also been ridiculed as numerology, laughed at, and even worse. In 2000 the theory was taken to the next level by attempting to explain why the masses of the "up" and "dn" quarks are significantly heavier than those of the SM "u" and "d" quarks. This resulted in a paper published as arXiv:nucl-th/0008026 in 2000, predicting 5 generations of quarks, with each quark and negative lepton related to each other by a simple geometric mean. The CBM predicts that the radii of the elementary particles are proportional to the cube roots of their masses. This was related to Pythagorean musical intervals (octave, perfect 5th, perfect 4th, plus two others), so each generation can be explained by a simple right triangle and the height to its hypotenuse. The height of a right triangle divides the hypotenuse into two line segments, and the geometric mean of those two segments equals the length of that height. The CBM theory therefore predicts that all elementary particle masses are proportional to the cubes of their radii, and hence that the mass density of all elementary particles (and perhaps black holes too) is a constant of nature.

  2. the Role of Species, Structure, and Biochemical Traits in the Spatial Distribution of a Woodland Community

    NASA Astrophysics Data System (ADS)

    Adeline, K.; Ustin, S.; Roth, K. L.; Huesca Martinez, M.; Schaaf, C.; Baldocchi, D. D.; Gastellu-Etchegorry, J. P.

    2015-12-01

    The assessment of canopy biochemical diversity is critical for monitoring ecological and physiological functioning and for mapping vegetation change dynamics in relation to environmental resources. For example, in oak woodland savannas these dynamics are mainly driven by water constraints. Inversion using radiative transfer theory is one method for estimating canopy biochemistry. However, this approach generally only considers relatively simple scenarios to model the canopy, due to the difficulty of encompassing stand heterogeneity with spatial and temporal consistency. In this research, we compared 3 modeling strategies for estimating canopy biochemistry variables (i.e. chlorophyll, carotenoids, water, dry matter) by coupling the PROSPECT (leaf level) and DART (canopy level) models: (i) a simple forest representation made of ellipsoid trees, and two representations taking into account the tree species and structural composition and the landscape spatial pattern, using (ii) geometric tree crown shapes and (iii) detailed tree crown and wood structure retrieved from terrestrial lidar acquisitions. AVIRIS 18 m remote sensing data are up-scaled to simulate HyspIRI 30 m images. Both spatial resolutions are validated by measurements acquired during 2013-2014 field campaigns (cover/tree inventory, LAI, leaf sampling, optical measures). The results outline the trade-off between accurate and abstract canopy modeling for inversion purposes and may provide perspectives to assess the impact of the California drought with multi-temporal monitoring of canopy biochemistry traits.

  3. A simple two-stage model predicts response time distributions.

    PubMed

    Carpenter, R H S; Reddi, B A J; Anderson, A J

    2009-08-15

    The neural mechanisms underlying reaction times have previously been modelled in two distinct ways. When stimuli are hard to detect, response time tends to follow a random-walk model that integrates noisy sensory signals. But studies investigating the influence of higher-level factors such as prior probability and response urgency typically use highly detectable targets, and response times then usually correspond to a linear rise-to-threshold mechanism. Here we show that a model incorporating both types of element in series - a detector integrating noisy afferent signals, followed by a linear rise-to-threshold performing decision - successfully predicts not only mean response times but, much more stringently, the observed distribution of these times and the rate of decision errors over a wide range of stimulus detectability. By reconciling what previously may have seemed to be conflicting theories, we are now closer to having a complete description of reaction time and the decision processes that underlie it.
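
    The serial architecture is straightforward to simulate: a first stage integrates noisy input to a detection threshold (a random walk), and its finishing time adds to a LATER-style stage in which a decision signal rises linearly at a normally distributed rate. The sketch below is a Monte Carlo illustration with invented parameter values, not the authors' fitted model.

```python
# Monte Carlo sketch of the serial two-stage mechanism: noisy-evidence random
# walk to a detection threshold, then a linear rise to a decision threshold
# at a normally distributed rate. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
dt, n_trials = 1e-3, 2000
drift, noise, theta_detect = 2.0, 1.0, 1.0      # stage 1: detection
mu_r, sd_r, theta_decide = 5.0, 1.0, 1.0        # stage 2: rate r ~ N(mu_r, sd_r)

rts = []
for _ in range(n_trials):
    x, t = 0.0, 0.0
    while x < theta_detect:                     # stage 1: integrate noisy input
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    r = rng.normal(mu_r, sd_r)                  # stage 2: linear rise at rate r
    if r > 0:                                   # negative rates never reach threshold
        rts.append(t + theta_decide / r)
print(f"mean RT = {np.mean(rts):.3f} s, median RT = {np.median(rts):.3f} s")
```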

  4. Reconciling projections of the Antarctic contribution to sea level rise

    NASA Astrophysics Data System (ADS)

    Edwards, Tamsin; Holden, Philip; Edwards, Neil; Wernecke, Andreas

    2017-04-01

    Two recent studies of the Antarctic contribution to sea level rise this century had best estimates that differed by an order of magnitude (around 10 cm and 1 m by 2100). The first, Ritz et al. (2015), used a model calibrated with satellite data, giving a 5% probability of exceeding 30 cm by 2100 for sea level rise due to Antarctic instability. The second, DeConto and Pollard (2016), used a model evaluated with reconstructions of palaeo-sea level. They did not estimate probabilities, but using a simple assumption here about the distribution shape gives up to a 5% chance of the Antarctic contribution exceeding 2.3 m this century, with total sea level rise approaching 3 m. If robust, this would have very substantial implications for global adaptation to climate change. How are we to make sense of this apparent inconsistency? How much is down to the data - does the past tell us we will face widespread and rapid Antarctic ice losses in the future? How much is due to the mechanism of rapid ice loss ('cliff failure') proposed in the latter paper, or other parameterisation choices in these low resolution models (GRISLI and PISM, respectively)? How much is due to choices made in the ensemble design and calibration? How do these projections compare with high resolution, grounding line resolving models such as BISICLES? Could we reduce the huge uncertainties in the palaeo-study? Emulation provides a powerful tool for understanding these questions and reconciling the projections. By describing the three numerical ice sheet models with statistical models, we can re-analyse the ensembles and re-do the calibrations under a common statistical framework. This reduces uncertainty in the PISM study because it allows massive sampling of the parameter space, which reduces the sensitivity to reconstructed palaeo-sea level values and also narrows the probability intervals because the simple assumption about distribution shape above is no longer needed. We present reconciled probabilistic projections for the Antarctic contribution to sea level rise from GRISLI, PISM and BISICLES this century, giving results that are meaningful and interpretable by decision-makers.
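
    The emulation step that the abstract relies on can be sketched compactly: fit a Gaussian process to a modest ensemble of (parameter, sea-level contribution) pairs, then sample the emulator massively. The toy ensemble below stands in for the ice-sheet model output; the kernel choice and parameter ranges are assumptions.

```python
# Sketch of the emulation idea: a Gaussian process trained on a small model
# ensemble makes massive parameter-space sampling cheap. Toy data only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
# Toy "ensemble": 2 model parameters -> simulated Antarctic contribution (m)
X = rng.uniform(0, 1, size=(40, 2))
y = 0.3 * X[:, 0] + 1.5 * X[:, 1] ** 3 + rng.normal(0, 0.02, 40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3).fit(X, y)

Xs = rng.uniform(0, 1, size=(50000, 2))     # massive sampling via the emulator
pred = gp.predict(Xs)
print("5-95% range of emulated contribution:",
      np.percentile(pred, [5, 95]).round(2), "m")
```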

  5. Two-state model based on the block-localized wave function method

    NASA Astrophysics Data System (ADS)

    Mo, Yirong

    2007-06-01

    The block-localized wave function (BLW) method is a variant of the ab initio valence bond method but retains the efficiency of molecular orbital methods. It can derive the wave function for a diabatic (resonance) state self-consistently and is available at the Hartree-Fock (HF) and density functional theory (DFT) levels. In this work we present a two-state model based on the BLW method. Although numerous empirical and semiempirical two-state models, such as the Marcus-Hush two-state model, have been proposed to describe a chemical reaction process, the advantage of this BLW-based two-state model is that no empirical parameter is required. Important quantities such as the electronic coupling energy, structural weights of the two diabatic states, and excitation energy can be uniquely derived from the energies of the two diabatic states and the adiabatic state at the same HF or DFT level. Two simple examples of formamide and thioformamide in the gas phase and aqueous solution were presented and discussed. The solvation of formamide and thioformamide was studied with combined ab initio quantum mechanical and molecular mechanical Monte Carlo simulations, together with BLW-DFT calculations and analyses. Due to the favorable solute-solvent electrostatic interaction, the contribution of the ionic resonance structure to the ground state of formamide and thioformamide significantly increases, and for thioformamide the ionic form is even more stable than the covalent form. Thus, thioformamide in aqueous solution is essentially ionic rather than covalent. Although our two-state model in general underestimates the electronic excitation energies, it can predict relative solvatochromic shifts well. For instance, the intense π→π* transition for formamide upon solvation undergoes a redshift of 0.3 eV, compared with the experimental data (0.40-0.5 eV).
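
    The statement that the key quantities follow uniquely from three energies can be made concrete. For a 2×2 Hamiltonian with diabatic energies E1 and E2 on the diagonal and coupling V, the adiabatic ground-state energy Ead satisfies V² = (E1 − Ead)(E2 − Ead), so the coupling, the structural weights and the excitation energy all follow. The energies below are placeholders, not the formamide results.

```python
# Sketch of the two-state algebra: coupling and weights from three energies.
import numpy as np

E1, E2, Ead = -0.10, 0.05, -0.15   # hartree, illustrative values only
V = -np.sqrt((E1 - Ead) * (E2 - Ead))          # electronic coupling
H = np.array([[E1, V], [V, E2]])
w, c = np.linalg.eigh(H)
weights = c[:, 0] ** 2                          # structural weights of the diabats
print(f"coupling |V| = {abs(V):.4f} Ha, weights = {weights.round(3)}")
print(f"excitation energy = {w[1] - w[0]:.4f} Ha")
```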

  6. A comparison of hydrologic models for ecological flows and water availability

    USGS Publications Warehouse

    Caldwell, Peter V; Kennen, Jonathan G.; Sun, Ge; Kiang, Julie E.; Butcher, John B; Eddy, Michelle C; Hay, Lauren E.; LaFontaine, Jacob H.; Hain, Ernie F.; Nelson, Stacy C; McNulty, Steve G

    2015-01-01

    Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow predictions by four fine-scale models and two regional-scale monthly time step models by comparing model fit statistics and bias in ecologically relevant flow statistics (ERFSs) at five sites in the Southeastern USA. Models were calibrated to different extents, including uncalibrated (level A), calibrated to a downstream site (level B), calibrated specifically for the site (level C) and calibrated for the site with adjusted precipitation and temperature inputs (level D). All models generally captured the magnitude and variability of observed streamflows at the five study sites, and increasing level of model calibration generally improved performance. All models had at least 1 of 14 ERFSs falling outside a ±30% range of hydrologic uncertainty at every site, and ERFSs related to low flows were frequently over-predicted. Our results do not indicate that any specific hydrologic model is superior to the others evaluated at all sites and for all measures of model performance. Instead, we provide evidence that (1) model performance is as likely to be related to calibration strategy as it is to model structure and (2) simple, regional-scale models have comparable performance to the more complex, fine-scale models at a monthly time step.

  7. Income Distribution Over Educational Levels: A Simple Model.

    ERIC Educational Resources Information Center

    Tinbergen, Jan

    An econometric model is formulated that explains income per person in various compartments of the labor market, defined by three main levels of education and by education required. The model enables an estimation of the effect of increased access to education on that distribution. The model is based on a production function for the economy as a whole; a…

  8. Multiphase flow in geometrically simple fracture intersections

    USGS Publications Warehouse

    Basagaoglu, H.; Meakin, P.; Green, C.T.; Mathew, M.; ,

    2006-01-01

    A two-dimensional lattice Boltzmann (LB) model with fluid-fluid and solid-fluid interaction potentials was used to study gravity-driven flow in geometrically simple fracture intersections. Simulated scenarios included fluid dripping from a fracture aperture, two-phase flow through intersecting fractures and thin-film flow on smooth and undulating solid surfaces. Qualitative comparisons with recently published experimental findings indicate that for these scenarios the LB model captured the underlying physics reasonably well.

  9. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    NASA Technical Reports Server (NTRS)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
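
    The staging logic reduces to repeated application of the rocket equation, sized from the payload upward. The sketch below assumes a fixed structural fraction per stage and illustrative delta-v, Isp and payload values; it omits the paper's finite-burn, steering and drag loss estimates.

```python
# Back-of-envelope two-stage sizing with the rocket equation. All numbers
# are illustrative assumptions, not the paper's calibrated model.
import math

g0 = 9.80665

def stage_mass(m_payload, dv, isp, eps):
    """Propellant + structure needed to give m_payload a delta-v dv.
    eps = structure mass / (structure + propellant)."""
    r = math.exp(dv / (isp * g0))                          # mass ratio m0/mf
    mp = m_payload * (r - 1) * (1 - eps) / (1 - eps * r)   # propellant
    ms = mp * eps / (1 - eps)                              # structure
    return mp + ms

payload = 20.0                                   # kg, sample container (assumed)
m2 = stage_mass(payload, dv=2000.0, isp=290.0, eps=0.15)          # upper stage
m1 = stage_mass(payload + m2, dv=2400.0, isp=290.0, eps=0.15)     # lower stage
print(f"lift-off mass ~ {payload + m2 + m1:.0f} kg")
```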

  10. Two simple models of classical heat pumps.

    PubMed

    Marathe, Rahul; Jayannavar, A M; Dhar, Abhishek

    2007-03-01

    Motivated by recent studies of models of particle and heat quantum pumps, we study similar simple classical models and examine the possibility of heat pumping. Unlike many of the usual ratchet models of molecular engines, the models we study do not have particle transport. We consider a two-spin system and a coupled oscillator system which exchange heat with multiple heat reservoirs and which are acted upon by periodic forces. The simplicity of our models allows accurate numerical and exact solutions and unambiguous interpretation of results. We demonstrate that while both our models seem to be built on similar principles, one is able to function as a heat pump (or engine) while the other is not.

  11. Colloidal membranes: The rich confluence of geometry and liquid crystals

    NASA Astrophysics Data System (ADS)

    Kaplan, Cihan Nadir

    A simple and experimentally realizable model system of chiral symmetry breaking is liquid-crystalline monolayers of aligned, identical hard rods. In these materials, tuning the chirality at the molecular level affects the geometry at systems level, thereby inducing a myriad of morphological transitions. This thesis presents theoretical studies motivated by the rich phenomenology of these colloidal monolayers. High molecular chirality leads to assemblages of rods exhibiting macroscopic handedness. In the first part we consider one such geometry, twisted ribbons, which are minimal surfaces to a double helix. By employing a theoretical approach that combines liquid-crystalline order with the preferred shape, we focus on the phase transition from simple flat monolayers to these twisted structures. In these monolayers, regions of broken chiral symmetry nucleate at the interfaces, as in a chiral smectic A sample. The second part particularly focuses on the detailed structure and thermodynamic stability of two types of observed interfaces, the monolayer edge and domain walls in simple flat monolayers. Both the edge and "twist-walls" are quasi-one-dimensional bands of molecular twist deformations dictated by local chiral interactions and surface energy considerations. We develop a unified theory of these interfaces by utilizing the de Gennes framework accompanied by appropriate surface energy terms. The last part turns to colloidal "cookies", which form in mixtures of rods with opposite handedness. These elegant structures are essentially flat monolayers surrounded by an array of local, three dimensional cusp defects. We reveal the thermodynamic and structural characteristics of cookies. Furthermore, cookies provide us with a simple relation to determine the intrinsic curvature modulus of our model system, an important constant associated with topological properties of membranes. Our results may have impacts on a broader class of soft thin films.

  12. Application of artificial neural network model for groundwater level forecasting in a river island with artificial influencing factors

    NASA Astrophysics Data System (ADS)

    Lee, Sanghoon; Yoon, Heesung; Park, Byeong-Hak; Lee, Kang-Kun

    2017-04-01

    Groundwater use has increased in recent years for purposes such as agriculture, industry and drinking water, and the issue of sustainable groundwater use has been raised accordingly. Forecasting the groundwater level is therefore of great importance for planning sustainable use of groundwater. In a small island surrounded by the Han River, South Korea, seasonal fluctuation of the groundwater level is characterized by multiple factors, such as recharge/discharge events at the Paldang dam, Water Curtain Cultivation (WCC) during the winter season, and operation of a Groundwater Heat Pump System (GWHP). For periods when dam operation is the only influence in the study area, a prediction of the groundwater level can easily be achieved by a simple cross-correlation model. However, for periods when the WCC and GWHP systems are working together, groundwater level prediction is challenging owing to the unpredictable operation of the two systems. This study applied an Artificial Neural Network (ANN) model to forecast the groundwater level in the river area, reflecting the various predictable and unpredictable factors. For constructing the ANN models, two monitoring wells, YSN1 and YSO8, located near the injection and abstraction wells of the GWHP system, were selected. By training with the groundwater level data measured from January 2015 to August 2015, the response of the groundwater level to each of the surface water level, the WCC and the GWHP system was evaluated. Consequently, groundwater levels from December 2015 to March 2016 were predicted by the ANN models, providing optimal fits in comparison to the observed water levels. This study suggests that the ANN model is a useful tool for forecasting the groundwater level for groundwater management. Acknowledgement: financial support was provided by the "R&D Project on Environmental Management of Geologic CO2 Storage" from KEITI (Project Number: 2014001810003). This research was also supported by the "BK 21plus project of the Korean Government".
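
    A minimal version of the set-up can be sketched with a small multilayer perceptron predicting the next groundwater level from lagged inputs. The series below are synthetic stand-ins for the surface water stage and pumping forcing; layer sizes and lags are assumptions, not the study's calibrated configuration.

```python
# Sketch of an ANN groundwater-level forecast on synthetic lagged inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(2000)
river = np.sin(2 * np.pi * t / 365) + 0.1 * rng.normal(size=t.size)  # surface stage
pump = (t % 7 < 2).astype(float)                                     # on/off GWHP proxy
gwl = 0.6 * river - 0.3 * pump + 0.05 * rng.normal(size=t.size)      # "observed" level

X = np.column_stack([river[:-1], pump[:-1], gwl[:-1]])               # lag-1 inputs
y = gwl[1:]
split = 1500
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X[:split], y[:split])
print("test R^2:", round(ann.score(X[split:], y[split:]), 3))
```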

  13. Modelling stream aquifer seepage in an alluvial aquifer: an improved loosing-stream package for MODFLOW

    NASA Astrophysics Data System (ADS)

    Osman, Yassin Z.; Bruen, Michael P.

    2002-07-01

    Seepage from a stream, which partially penetrates an unconfined alluvial aquifer, is studied for the case when the water table falls below the streambed level. Inadequacies are identified in current modelling approaches to this situation. A simple and improved method of incorporating such seepage into groundwater models is presented. This considers the effect on seepage flow of suction in the unsaturated part of the aquifer below a disconnected stream and allows for the variation of seepage with water table fluctuations. The suggested technique is incorporated into the saturated code MODFLOW and is tested by comparing its predictions with those of a widely used variably saturated model, SWMS_2D, which simulates water flow and solute transport in two-dimensional variably saturated media. Comparisons are made of both seepage flows and local mounding of the water table. The suggested technique compares very well with the results of the variably saturated model simulations. Most currently used approaches are shown to underestimate the seepage and the associated local water table mounding, sometimes substantially. The proposed method is simple, easy to implement and requires only a small amount of additional data about the aquifer hydraulic properties.

  14. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    NASA Astrophysics Data System (ADS)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna

    2016-05-01

    State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
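
    The paper's central point can be reproduced on the simplest scalar case. In the sketch below, a linear Gaussian SSM is simulated with measurement error much larger than process noise, and a hand-rolled Kalman filter evaluates the likelihood at a few variance pairs chosen to give similar total variance; the near-flat likelihood ridge is what makes the two variances hard to separate. All values are illustrative.

```python
# Scalar linear Gaussian SSM: x_t = rho*x_{t-1} + e_t, y_t = x_t + u_t.
import numpy as np

rng = np.random.default_rng(4)
rho, sig_p, sig_o, T = 0.8, 0.1, 0.5, 200        # obs error >> process noise
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal(0, sig_p)
y = x + rng.normal(0, sig_o, T)

def kalman_loglik(sp, so):
    m, P, ll = 0.0, 1.0, 0.0
    for t in range(T):
        m, P = rho * m, rho**2 * P + sp**2       # predict
        S = P + so**2                            # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (y[t] - m)**2 / S)
        K = P / S                                # update
        m, P = m + K * (y[t] - m), (1 - K) * P
    return ll

# Variance pairs chosen so the implied total variance of y is similar:
for sp, so in [(0.1, 0.5), (0.2, 0.41), (0.01, 0.53)]:
    print(f"sigma_p={sp}, sigma_o={so}: loglik = {kalman_loglik(sp, so):.1f}")
```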

  15. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

  16. The ST environment: Expected charged particle radiation levels

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1978-01-01

    The external (surface incident) charged particle radiation, predicted for the ST satellite at the three different mission altitudes, was determined in two ways: (1) by orbital flux-integration and (2) by geographical instantaneous flux-mapping. The latest standard models of the environment were used in this effort. Magnetic field definitions for three nominal circular trajectories and for the geographic mapping positions were obtained from a current field model. Spatial and temporal variations or conditions affecting the static environment models were considered and accounted for, wherever possible. Limited shielding and dose evaluations were performed for a simple geometry. Results, given in tabular and graphical form, are analyzed, explained, and discussed. Conclusions are included.

  17. A Simple Model Framework to Explore the Deeply Uncertain, Local Sea Level Response to Climate Change. A Case Study on New Orleans, Louisiana

    NASA Astrophysics Data System (ADS)

    Bakker, Alexander; Louchard, Domitille; Keller, Klaus

    2016-04-01

    Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions, e.g. on the applied climate scenarios, on which processes to include and how to parameterize them, and on the error structure of the observations. Competing assumptions are very hard to weigh objectively. Hence, uncertainties of the sea-level response are hard to capture in a single distribution function. The deep uncertainty can be better understood by making the key assumptions explicit. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of the sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles, allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to its modular setup it can also easily be utilized to explore high-end scenarios and the effects of competing assumptions and parameterizations.
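
    A hedged sketch of the building-block idea: each contribution integrates the temperature forcing with its own sensitivity and time scale, and one block activates only above a warming threshold, mimicking ice-sheet instability. Parameters and the forcing scenario are invented for illustration, not the calibrated framework.

```python
# Toy "building block" sea-level model with one threshold component.
import numpy as np

years = np.arange(2000, 2101)
T = 1.0 + 0.03 * (years - 2000)                  # warming scenario, deg C (assumed)

def block(T, alpha, tau, threshold=None):
    """Contribution (m): dS/dt = alpha*T/tau, active only above threshold."""
    S = np.zeros(T.size)
    for i in range(1, T.size):
        forcing = T[i] if threshold is None or T[i] > threshold else 0.0
        S[i] = S[i - 1] + alpha * forcing / tau
    return S

total = (block(T, alpha=0.2, tau=100) +                 # thermal expansion
         block(T, alpha=0.1, tau=50) +                  # glaciers and ice caps
         block(T, alpha=1.0, tau=100, threshold=2.0))   # marine ice-sheet block
print(f"sea-level rise by 2100: {total[-1]:.2f} m")
```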

  18. A simple model of proton damage in GaAs solar cells

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Walker, G. H.; Outlaw, R. A.

    1982-01-01

    A simple proton damage model for GaAs solar cells is derived and compared to experimental values of change in short circuit currents. The recombination cross section associated with the defects was determined from the experimental comparison to be approximately 1.2 x 10 to the -13th power sq cm in fair agreement with values determined from the deep level transient spectroscopy technique.

  19. Effect of quantum nuclear motion on hydrogen bonding

    NASA Astrophysics Data System (ADS)

    McKenzie, Ross H.; Bekker, Christiaan; Athokpam, Bijyalaxmi; Ramesh, Sai G.

    2014-05-01

    This work considers how the properties of hydrogen bonded complexes, X-H⋯Y, are modified by the quantum motion of the shared proton. Using a simple two-diabatic-state model Hamiltonian, the analysis of the symmetric case, where the donor (X) and acceptor (Y) have the same proton affinity, is carried out. For quantitative comparisons, a parametrization specific to O-H⋯O complexes is used. The vibrational energy levels of the one-dimensional ground-state adiabatic potential of the model are used to make quantitative comparisons with a vast body of condensed phase data, spanning a donor-acceptor separation (R) range of about 2.4 - 3.0 Å, i.e., from strong to weak hydrogen bonds. The position of the proton (which determines the X-H bond length) and its longitudinal vibrational frequency, along with the isotope effects in both, are described quantitatively. An analysis of the secondary geometric isotope effect, using a simple extension of the two-state model, yields improved agreement of the predicted variation of frequency isotope effects with R. The role of bending modes is also considered: their quantum effects compete with those of the stretching mode for weak to moderate H-bond strengths. In spite of the economy of the model's parametrization, it offers key insights into the defining features of H-bonds and semi-quantitatively captures several trends.
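
    The machinery can be sketched under stated assumptions: two diabatic proton potentials (harmonic wells at the donor and acceptor sites) coupled by a constant off-diagonal term give the adiabatic ground-state potential as the lower eigenvalue of a 2×2 matrix, and its one-dimensional vibrational levels follow from a finite-difference Schrödinger solve. Units and parameters below are illustrative, not the O-H⋯O parametrization.

```python
# Two coupled diabatic wells -> adiabatic double well -> vibrational levels.
import numpy as np

hbar2_2m = 0.5       # hbar^2/(2m) in model units (assumption)
R = 2.0              # donor-acceptor separation, model units
x = np.linspace(-2.0, 2.0, 400)
k, Delta = 20.0, 2.0                                  # well stiffness, coupling
V1 = 0.5 * k * (x + R / 2) ** 2                       # X-H diabat
V2 = 0.5 * k * (x - R / 2) ** 2                       # H...Y diabat
Vad = 0.5 * (V1 + V2) - np.sqrt(0.25 * (V1 - V2) ** 2 + Delta**2)

dx = x[1] - x[0]                                      # finite-difference kinetic term
main = hbar2_2m * 2 / dx**2 + Vad
off = -hbar2_2m / dx**2 * np.ones(x.size - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)[:3]
print("lowest vibrational levels:", E.round(3))
print("0->1 splitting (tunneling):", round(E[1] - E[0], 4))
```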

  20. An ecological approach to hearing-health promotion in workplaces.

    PubMed

    Reddy, Ravi; Welch, David; Ameratunga, Shanthi; Thorne, Peter

    2017-05-01

    To develop and assess the use, acceptability and feasibility of an ecological hearing conservation programme for workplaces. A school-based public health hearing preservation education programme (Dangerous Decibels®) was adapted for workplaces using the Multi-level Approach to Community Health (MATCH) Model. The programme was delivered in small manufacturing companies and evaluated using a questionnaire before the training and at one week and two months after training. Workers (n = 56) from five small manufacturing companies were recruited. There was a significant improvement in knowledge, attitudes and behaviour of workers at the intrapersonal level; in behaviour motivation and safety culture at the interpersonal and organisational levels; and an overall improvement in hearing-health behaviour at two months post-intervention. The developed programme offers a simple, interactive and theory-based intervention that is well accepted and effective in promoting positive hearing-health behaviour in workplaces.

  1. Optical memory based on quantized atomic center-of-mass motion.

    PubMed

    Lopez, J P; de Almeida, A J F; Felinto, D; Tabosa, J W R

    2017-11-01

    We report a new type of optical memory using a pure two-level system of cesium atoms cooled by the magnetically assisted Sisyphus effect. The optical information of a probe field is stored in the coherence between quantized vibrational levels of the atoms in the potential wells of a 1-D optical lattice. The retrieved pulse shows Rabi oscillations whose frequency is determined by the reading-beam intensity and which are qualitatively understood in terms of a simple theoretical model. The exploration of the external degrees of freedom of an atom may add another capability to the design of quantum-information protocols using light.

  2. A Back-to-Front Derivation: The Equal Spacing of Quantum Levels Is a Proof of Simple Harmonic Oscillator Physics

    ERIC Educational Resources Information Center

    Andrews, David L.; Romero, Luciana C. Davila

    2009-01-01

    The dynamical behaviour of simple harmonic motion can be found in numerous natural phenomena. Within the quantum realm of atomic, molecular and optical systems, two main features are associated with harmonic oscillations: a finite ground-state energy and equally spaced quantum energy levels. Here it is shown that there is in fact a one-to-one…

  3. A Computational Model of Linguistic Humor in Puns.

    PubMed

    Kao, Justine T; Levy, Roger; Goodman, Noah D

    2016-07-01

    Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures - ambiguity and distinctiveness - derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher-level linguistic and cognitive phenomena. © 2015 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
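
    A toy version of the two measures, under the assumption that ambiguity is the entropy of the meaning posterior given the sentence and distinctiveness is a symmetrized KL divergence between the word distributions supporting each meaning (all probabilities are invented for illustration):

```python
# Toy information-theoretic humor measures: entropy and symmetrized KL.
import numpy as np

def entropy(p):
    p = np.asarray(p, float)
    lg = np.log2(p, out=np.zeros_like(p), where=p > 0)
    return -np.sum(p * lg)

def sym_kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    def kl(a, b):
        lg = np.log2(a / b, out=np.zeros_like(a), where=a > 0)
        return np.sum(a * lg)
    return kl(p, q) + kl(q, p)

meaning_posterior = [0.55, 0.45]          # pun: both meanings stay plausible
print("ambiguity:", round(entropy(meaning_posterior), 3))

# Word-support distributions under meaning 1 vs meaning 2 (toy numbers)
f1 = np.array([0.4, 0.4, 0.1, 0.1])
f2 = np.array([0.1, 0.1, 0.4, 0.4])
print("distinctiveness:", round(sym_kl(f1, f2), 3))
```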

  4. Modelling the root system architecture of Poaceae. Can we simulate integrated traits from morphological parameters of growth and branching?

    PubMed

    Pagès, Loïc; Picon-Cochard, Catherine

    2014-10-01

    Our objective was to calibrate a model of the root system architecture on several Poaceae species and to assess its value to simulate several 'integrated' traits measured at the root system level: specific root length (SRL), maximum root depth and root mass. We used the model ArchiSimple, made up of sub-models that represent and combine the basic developmental processes, and an experiment on 13 perennial grassland Poaceae species grown in 1.5-m-deep containers and sampled at two different dates after planting (80 and 120 d). Model parameters were estimated almost independently using small samples of the root systems taken at both dates. The relationships obtained for calibration validated the sub-models, and showed species effects on the parameter values. The simulations of integrated traits were relatively correct for SRL and were good for root depth and root mass at the two dates. We obtained some systematic discrepancies that were related to the slight decline of root growth in the last period of the experiment. Because the model allowed correct predictions on a large set of Poaceae species without global fitting, we consider that it is a suitable tool for linking root traits at different organisation levels. © 2014 INRA. New Phytologist © 2014 New Phytologist Trust.

  5. Incentives for Optimal Multi-level Allocation of HIV Prevention Resources

    PubMed Central

    Malvankar, Monali M.; Zaric, Gregory S.

    2013-01-01

    HIV/AIDS prevention funds are often allocated at multiple levels of decision-making. Optimal allocation of HIV prevention funds maximizes the number of HIV infections averted. However, decision makers often allocate using simple heuristics such as proportional allocation. We evaluate the impact of using incentives to encourage optimal allocation in a two-level decision-making process. We model an incentive-based decision-making process consisting of an upper-level decision maker allocating funds to a single lower-level decision maker, who then distributes funds to local programs. We assume that the lower-level utility function is linear in the amount of the budget received from the upper level, the fraction of funds reserved for proportional allocation, and the number of infections averted. We assume that the upper-level objective is to maximize the number of infections averted. We illustrate with an example using data from California, U.S. PMID:23766551
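
    The gap between heuristic and optimal allocation is easy to illustrate with invented concave production functions (infections averted growing as the square root of funding), comparing a proportional split against a brute-force optimum:

```python
# Toy comparison of proportional vs optimal allocation across three programs.
import numpy as np
from itertools import product

budget = 10.0
effectiveness = np.array([3.0, 2.0, 1.0])       # program-specific scale (invented)

def averted(alloc):
    return np.sum(effectiveness * np.sqrt(alloc))   # concave returns (assumption)

prop = np.full(3, budget / 3)                   # simple proportional heuristic
grid = np.linspace(0, budget, 101)
best, best_alloc = -1.0, None
for a, b in product(grid, grid):                # brute-force search over splits
    if a + b <= budget:
        alloc = np.array([a, b, budget - a - b])
        val = averted(alloc)
        if val > best:
            best, best_alloc = val, alloc
print("proportional:", round(averted(prop), 2), "optimal:", round(best, 2))
print("optimal split:", best_alloc.round(2))
```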

  6. A coarse-grained biophysical model of sequence evolution and the population size dependence of the speciation rate

    PubMed Central

    Khatri, Bhavin S.; Goldstein, Richard A.

    2015-01-01

    Speciation is fundamental to understanding the huge diversity of life on Earth. Although still controversial, empirical evidence suggests that the rate of speciation is larger for smaller populations. Here, we explore a biophysical model of speciation by developing a simple coarse-grained theory of transcription factor-DNA binding and how their co-evolution in two geographically isolated lineages leads to incompatibilities. To develop a tractable analytical theory, we derive a Smoluchowski equation for the dynamics of binding energy evolution that accounts for the fact that natural selection acts on phenotypes, but variation arises from mutations in sequences; the Smoluchowski equation includes selection due to both gradients in fitness and gradients in sequence entropy, which is the logarithm of the number of sequences that correspond to a particular binding energy. This simple consideration predicts that smaller populations develop incompatibilities more quickly in the weak mutation regime; this trend arises as sequence entropy poises smaller populations closer to incompatible regions of phenotype space. These results suggest a generic coarse-grained approach to evolutionary stochastic dynamics, allowing realistic modelling at the phenotypic level. PMID:25936759

  7. A two-model hydrologic ensemble prediction of hydrograph: case study from the upper Nysa Klodzka river basin (SW Poland)

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Mizinski, Bartlomiej

    2016-04-01

    The HydroProg system was developed within research project no. 2011/01/D/ST10/04171 of the National Science Centre of Poland and steadily produces multimodel ensemble predictions of hydrographs in real time. Although six ensemble members are available at present, the longest record of predictions and their statistics exists for two data-based models (uni- and multivariate autoregressive models). Thus, we consider 3-hour predictions of water levels, with lead times ranging from 15 to 180 minutes, computed every 15 minutes since August 2013 for the Nysa Klodzka basin (SW Poland) using the two approaches and their two-model ensemble. Since the launch of the HydroProg system there have been 12 high-flow episodes, and the objective of this work is to present the performance of the two-model ensemble in forecasting these events. For the sake of brevity, we limit our investigation to a single gauge on the Nysa Klodzka river in the town of Klodzko, which is centrally located in the studied basin. We identified certain regular scenarios of how the models perform in predicting the high flows in Klodzko. At the initial phase of a high flow, well before the rising limb of the hydrograph, the two-model ensemble is found to provide the most skilful prognoses of water levels. However, while forecasting the rising limb of the hydrograph, either the two-model solution or the vector autoregressive model offers the best predictive performance. In addition, it is hypothesized that as the rising limb develops, the vector autoregression becomes the most skilful approach amongst those scrutinized. Our simple two-model exercise confirms that multimodel hydrologic ensemble predictions cannot be treated as universal solutions suitable for forecasting an entire high-flow event; their superior performance may hold only for certain phases of a high flow.
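
    A minimal sketch of the two-member, data-based idea: fit a univariate AR(1) model on the target gauge and a bivariate VAR(1)-type model that also uses an upstream gauge, then average the two forecasts. The series are synthetic and the fitting is plain least squares; the operational HydroProg models are more elaborate.

```python
# Two-model hydrologic ensemble on synthetic gauge data.
import numpy as np

rng = np.random.default_rng(5)
T = 500
up, down = np.zeros(T), np.zeros(T)
for t in range(1, T):                                    # upstream leads downstream
    up[t] = 0.9 * up[t - 1] + rng.normal(0, 1)
    down[t] = 0.7 * down[t - 1] + 0.25 * up[t - 1] + rng.normal(0, 0.3)

# Fit AR(1) and a VAR(1)-type row by least squares on the first 400 steps
Y = down[1:400]
a = np.linalg.lstsq(down[0:399, None], Y, rcond=None)[0]                          # AR
B = np.linalg.lstsq(np.column_stack([down[0:399], up[0:399]]), Y, rcond=None)[0]  # VAR

ar = a[0] * down[400:-1]
var = B[0] * down[400:-1] + B[1] * up[400:-1]
ens = 0.5 * (ar + var)                                   # two-model ensemble mean
truth = down[401:]
for name, f in [("AR", ar), ("VAR", var), ("ensemble", ens)]:
    print(name, "RMSE:", round(np.sqrt(np.mean((f - truth) ** 2)), 3))
```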

  8. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The local scale and the vegetation compartments, which added undesirable model complexity, were removed to increase the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Atmospheric simulations of extreme surface heating episodes on simple hills

    Treesearch

    W.E. Heilman

    1992-01-01

    A two-dimensional nonhydrostatic atmospheric model was used to simulate the circulation patterns (wind and vorticity) and turbulence energy fields associated with lines of extreme surface heating on simple two-dimensional hills. Heating-line locations and ambient crossflow conditions were varied to qualitatively determine the impact of terrain geometry on the...

  10. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
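
    A generic random-walk graph kernel is easy to state in a few lines: count common walks on the Kronecker (direct) product of the two adjacency matrices via k = pᵀ(I − λA×)⁻¹q. The sketch below uses a textbook geometric weighting and toy weighted adjacencies; it is not the exact GRAPE formulation.

```python
# Geometric random-walk kernel between two local atomic environments.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    Ax = np.kron(A1, A2)                       # direct-product adjacency
    n = Ax.shape[0]
    p = np.full(n, 1.0 / n)                    # uniform start distribution
    q = np.ones(n)                             # uniform stop weights
    # k = p^T (I - lam*Ax)^{-1} q; lam must be below 1/spectral_radius(Ax)
    return p @ np.linalg.solve(np.eye(n) - lam * Ax, q)

# Two toy 3-atom neighbourhoods (weighted adjacency, e.g. inverse distance)
A1 = np.array([[0, 1.0, 0.5], [1.0, 0, 0.8], [0.5, 0.8, 0]])
A2 = np.array([[0, 0.9, 0.5], [0.9, 0, 0.7], [0.5, 0.7, 0]])
print("k(A1, A2) =", round(random_walk_kernel(A1, A2), 4))
print("k(A1, A1) =", round(random_walk_kernel(A1, A1), 4))
```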

  12. A Model for General Parenting Skill is Too Simple: Mediational Models Work Better.

    ERIC Educational Resources Information Center

    Patterson, G. R.; Yoerger, K.

    A study was designed to determine whether mediational models of parenting patterns account for significantly more variance in academic achievement than more general models. Two general models and two mediational models were considered. The first model identified five skills: (1) discipline; (2) monitoring; (3) family problem solving; (4) positive…

  13. Role of conceptual models in a physical therapy curriculum: application of an integrated model of theory, research, and clinical practice.

    PubMed

    Darrah, Johanna; Loomis, Joan; Manns, Patricia; Norton, Barbara; May, Laura

    2006-11-01

    The Department of Physical Therapy, University of Alberta, Edmonton, Alberta, Canada, recently implemented a Master of Physical Therapy (MPT) entry-level degree program. As part of the curriculum design, two models were developed, a Model of Best Practice and the Clinical Decision-Making Model. Both models incorporate four key concepts of the new curriculum: 1) the concept that theory, research, and clinical practice are interdependent and inform each other; 2) the importance of client-centered practice; 3) the terminology and philosophical framework of the World Health Organization's International Classification of Functioning, Disability, and Health; and 4) the importance of evidence-based practice. In this article the general purposes of models for learning are described; the two models developed for the MPT program are described; and examples of their use with curriculum design and teaching are provided. Our experiences with both the development and use of models of practice have been positive. The models have provided both faculty and students with a simple, systematic structured framework to organize teaching and learning in the MPT program.

  14. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  15. Fractional Poisson--a simple dose-response model for human norovirus.

    PubMed

    Messner, Michael J; Berger, Philip; Nappier, Sharon P

    2014-10-01

    This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
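
    The fractional Poisson dose-response function described above is compact enough to state directly. The sketch below is an illustrative implementation (not the authors' code): infection probability is the product of the fraction of perfectly susceptible hosts and the probability of ingesting at least one virus or aggregate, with the mean dose divided by an assumed mean aggregate size to account for aggregation.

```python
import math

def fractional_poisson(mean_dose: float, p_susceptible: float,
                       mean_aggregate_size: float = 1.0) -> float:
    """Fractional Poisson dose-response sketch.

    P(infection) = P * (1 - exp(-dose / a)), where P is the fraction of
    perfectly susceptible hosts and a is the mean aggregate size, so
    dose / a is the expected number of ingested aggregates.
    """
    p_nonzero_exposure = 1.0 - math.exp(-mean_dose / mean_aggregate_size)
    return p_susceptible * p_nonzero_exposure

# Example: a low, environmentally relevant dose (all numbers illustrative)
print(fractional_poisson(mean_dose=10, p_susceptible=0.7, mean_aggregate_size=2.0))
```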

  16. A Bayesian zero-truncated approach for analysing capture-recapture count data from classical scrapie surveillance in France.

    PubMed

    Vergne, Timothée; Calavas, Didier; Cazeau, Géraldine; Durand, Benoît; Dufour, Barbara; Grosbois, Vladimir

    2012-06-01

    Capture-recapture (CR) methods are used to study populations that are monitored with imperfect observation processes. They have recently been applied to the monitoring of animal diseases to evaluate the number of infected units that remain undetected by the surveillance system. This paper proposes three Bayesian models to estimate the total number of scrapie-infected holdings in France from CR count data obtained from the French classical scrapie surveillance programme. We fitted two zero-truncated Poisson (ZTP) models (with and without holding size as a covariate) and a zero-truncated negative binomial (ZTNB) model to the 2006 national surveillance count dataset. We detected a large amount of heterogeneity in the count data, making the use of the simple ZTP model inappropriate. However, including holding size as a covariate did not bring any significant improvement over the simple ZTP model. The ZTNB model proved to be the best model, giving an estimate of 535 (CI(95%) 401-796) infected and detectable sheep holdings in 2006, although only 141 were effectively detected, resulting in a holding-level prevalence of 4.4‰ (CI(95%) 3.2-6.3) and a sensitivity of holding-level surveillance of 26% (CI(95%) 18-35). The main limitation of the present study was the small amount of data collected during the surveillance programme. It was therefore not possible to build more complex models that would depict more accurately the epidemiological and detection processes that generate the surveillance data. We discuss the perspectives of capture-recapture count models in the context of animal disease surveillance. Copyright © 2012 Elsevier B.V. All rights reserved.
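
    For intuition, the zero-truncated Poisson logic behind such estimates can be sketched without the Bayesian machinery: fit the Poisson rate to the counts of detected holdings, then scale the number of detected units by the estimated probability of being detected at least once. The frequentist helper below is a minimal stand-in for the paper's Bayesian models, and the example counts are invented.

```python
import numpy as np
from scipy.optimize import brentq

def fit_ztp(counts):
    """MLE of the Poisson rate under zero truncation: the truncated mean
    lambda / (1 - exp(-lambda)) is matched to the sample mean
    (requires sample mean > 1)."""
    xbar = np.mean(counts)
    return brentq(lambda lam: lam / (1.0 - np.exp(-lam)) - xbar, 1e-9, xbar)

def estimate_total(counts):
    """Estimate the total population size, including never-detected units."""
    lam = fit_ztp(counts)
    p_detected = 1.0 - np.exp(-lam)   # P(detected at least once)
    return len(counts) / p_detected

# Example: detection counts for holdings detected at least once (invented)
counts = np.array([1, 1, 2, 1, 3, 1, 1, 2])
print(estimate_total(counts))
```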

  17. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial-level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state of the art for this problem.
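
    As a concrete illustration of the hybrid scheme discussed in the presentation, the sketch below combines a standard bit-string genetic algorithm with a greedy bit-flip local search applied to each offspring (a Lamarckian hybrid). The population sizes, rates, and one-max objective are illustrative choices, not taken from the presentation.

```python
import random

def hybrid_ga(fitness, n_bits=20, pop_size=30, generations=50, p_mut=0.02):
    """Hybrid genetic algorithm sketch: standard GA operators plus a
    greedy bit-flip hill-climbing local search applied to each offspring."""

    def local_search(ind):
        # Greedy bit-flip improvement (the 'local search' half of the hybrid).
        best, best_fit = ind[:], fitness(ind)
        for i in range(len(ind)):
            trial = best[:]
            trial[i] ^= 1
            if fitness(trial) > best_fit:
                best, best_fit = trial, fitness(trial)
        return best

    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            children.append(local_search(child))  # Lamarckian local search
        pop = children
    return max(pop, key=fitness)

# Example: maximize the number of ones in the bit string
best = hybrid_ga(fitness=sum)
print(sum(best))
```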

  18. Exactly soluble model of the time-resolved fluorescence return to thermal equilibrium in many-particle systems after excitation

    NASA Astrophysics Data System (ADS)

    Czachor, Andrzej

    2016-02-01

    In this paper we consider an assembly of weakly interacting identical particles, where the occupation of single-particle energy levels at thermal equilibrium is governed by statistics. The analytic form of the inter-energy-level jump matrix is derived and an analytic solution of the related eigenproblem is given. It allows one to demonstrate the nature of the decline in time of the energy emission (fluorescence, recombination) of such a many-level system after excitation in a relatively simple and unifying way - as a multi-exponential de-excitation. For a system of L energy levels the number of de-excitation lifetimes is L-1. The lifetimes depend on the energy level spectrum as a whole. Two- and three-level systems are considered in detail. The impact of energy level degeneracy on the lifetimes is discussed.
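
    The multi-exponential decay can be reproduced numerically for any jump matrix whose columns conserve probability: one eigenvalue is zero (equilibrium) and the remaining L-1 eigenvalues give the de-excitation lifetimes. The sketch below uses generic Metropolis-style rates satisfying detailed balance, not the paper's analytic matrix; the level energies are illustrative.

```python
import numpy as np

# Sketch: relaxation of level populations p' = W p for a 3-level system.
# Rates obey detailed balance w_ij * p_j^eq = w_ji * p_i^eq.
E = np.array([0.0, 1.0, 2.5])              # assumed level energies (units of kT)
peq = np.exp(-E)
peq /= peq.sum()                            # equilibrium (Boltzmann) occupations

L = len(E)
W = np.zeros((L, L))
for i in range(L):
    for j in range(L):
        if i != j:
            W[i, j] = min(1.0, peq[i] / peq[j])   # Metropolis-style jump rates
W -= np.diag(W.sum(axis=0))                 # columns sum to zero (conservation)

evals = np.sort(np.linalg.eigvals(W).real)
# One eigenvalue is ~0 (equilibrium); the other L-1 give the lifetimes.
lifetimes = -1.0 / evals[:-1]
print("de-excitation lifetimes:", lifetimes)
```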

  19. NNLO computational techniques: The cases H→γγ and H→gg

    NASA Astrophysics Data System (ADS)

    Actis, Stefano; Passarino, Giampiero; Sturm, Christian; Uccirati, Sandro

    2009-04-01

    A large set of techniques needed to compute decay rates at the two-loop level are derived and systematized. The main emphasis of the paper is on the two Standard Model decays H→γγ and H→gg. The techniques, however, have a much wider range of application: they give practical examples of general rules for two-loop renormalization; they introduce simple recipes for handling internal unstable particles in two-loop processes; they illustrate simple procedures for the extraction of collinear logarithms from the amplitude. The latter is particularly relevant to show cancellations, e.g. cancellation of collinear divergencies. Furthermore, the paper deals with the proper treatment of non-enhanced two-loop QCD and electroweak contributions to different physical (pseudo-)observables, showing how they can be transformed in a way that allows for a stable numerical integration. Numerical results for the two-loop percentage corrections to H→γγ,gg are presented and discussed. When applied to the process pp→gg+X→H+X, the results show that the electroweak scaling factor for the cross section is between -4% and +6% in the range 100 GeV

  20. Additive schemes for certain operator-differential equations

    NASA Astrophysics Data System (ADS)

    Vabishchevich, P. N.

    2010-12-01

    Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
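
    A minimal example of the splitting idea, under the assumption of a two-component additive decomposition A = A1 + A2 with self-adjoint, positive-semidefinite parts: each time step solves two simple implicit subproblems instead of one coupled problem in A. The matrices and step size below are illustrative.

```python
import numpy as np

# Additive (component splitting) scheme sketch for u' + (A1 + A2) u = 0.
n, tau, steps = 4, 0.1, 50
A1 = np.diag([1.0, 2.0, 0.5, 1.5])          # illustrative self-adjoint parts
A2 = np.array([[ 2, -1,  0,  0],
               [-1,  2, -1,  0],
               [ 0, -1,  2, -1],
               [ 0,  0, -1,  2]], dtype=float)
I = np.eye(n)
u = np.ones(n)
for _ in range(steps):
    u = np.linalg.solve(I + tau * A1, u)    # step 1: implicit in A1 only
    u = np.linalg.solve(I + tau * A2, u)    # step 2: implicit in A2 only
print(u)
```

    Because each fractional step is implicit in a positive-semidefinite operator, both substeps are unconditionally stable, which is the practical appeal of such schemes.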

  1. Coupling Climate Models and Forward-Looking Economic Models

    NASA Astrophysics Data System (ADS)

    Judd, K.; Brock, W. A.

    2010-12-01

    Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin. Current climate models range from General Circulation Models (GCMs) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCMs) help us understand the dynamics of GCMs. The same is true in economics with Computable General Equilibrium Models (CGEs), where some models are infinite-dimensional systems of differential equations while others are simple models. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCMs do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North et al. (1981), Wu and North (2007)). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005), but in only a "one-dimensional" EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two-dimensional EBCM model on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into the intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models. However, this will be more complex than existing models with forward-looking economic modules, and the initial models will help guide the construction of more refined models that can effectively use more powerful computational environments to analyze economic policies related to climate change. REFERENCES Brock, W., Xepapadeas, A., 2010, "An Integration of Simple Dynamic Energy Balance Climate Models and Ramsey Growth Models," Department of Economics, University of Wisconsin, Madison, and University of Athens. Golub, A., Hertel, T., et al., 2009, "The opportunity cost of land use and the global potential for greenhouse gas mitigation in agriculture and forestry," RESOURCE AND ENERGY ECONOMICS, 31, 299-319. Judd, K., 1992, "Projection methods for solving aggregate growth models," JOURNAL OF ECONOMIC THEORY, 58: 410-52. Judd, K., 1998, NUMERICAL METHODS IN ECONOMICS, MIT Press, Cambridge, Mass. Nordhaus, W., 2007, A QUESTION OF BALANCE: ECONOMIC MODELS OF CLIMATE CHANGE, Yale University Press, New Haven, CT. North, G. R., Cahalan, R., Coakley, J., 1981, "Energy balance climate models," REVIEWS OF GEOPHYSICS AND SPACE PHYSICS, Vol. 19, No. 1, 91-121, February. Wu, W., North, G. R., 2007, "Thermal decay modes of a 2-D energy balance climate model," TELLUS, 59A, 618-626.

  2. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on the two levels of morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.

  3. Spectroscopy of samarium isotopes in the sdg interacting boson model

    NASA Astrophysics Data System (ADS)

    Devi, Y. D.; Kota, V. K. B.

    1992-05-01

    Successful spectroscopic calculations for the 0_1^+, 2_1^+, and 4_1^+ levels in 146-158Sm are carried out in the sdg boson space with the restriction that the s-boson number n_s >= 2 and the g-boson number n_g <= 2. Observed energies, quadrupole and magnetic moments, E2 and E4 transition strengths, nuclear radii, and two-nucleon transfer intensities are reproduced with a simple two-parameter Hamiltonian. For a good simultaneous description of the ground, β, and γ bands, a Hamiltonian interpolating the dynamical symmetries in the sdg model is employed. Using the resulting wave functions, in 152,154Sm the observed B(E4; 0_1^+ → 4_γ^+) values are well reproduced and E4 strength distributions are predicted. Moreover, a particular ratio ℛ involving two-nucleon transfer strengths, showing a peak at neutron number 90, is well described by the calculations.

  4. A Simple Approach to the Landau-Zener Formula

    ERIC Educational Resources Information Center

    Vutha, Amar C.

    2010-01-01

    The Landau-Zener formula provides the probability of non-adiabatic transitions occurring when two energy levels are swept through an avoided crossing. The formula is derived here in a simple calculation that emphasizes the physics responsible for non-adiabatic population transfer.

  5. Investigating decoherence in a simple system

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1991-01-01

    The results of some simple calculations designed to study quantum decoherence are presented. The physics of quantum decoherence is briefly reviewed, and a very simple 'toy' model is analyzed. Exact solutions are found using numerical techniques. The type of decoherence exhibited by the model can be changed by varying a coupling strength. The author explains why the conventional approach to studying decoherence by checking the diagonality of the density matrix is not always adequate. Two other approaches, the decoherence functional and the Schmidt paths approach, are applied to the toy model and contrasted with each other. Possible problems with each are discussed.

  6. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    NASA Astrophysics Data System (ADS)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general the two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

  7. Assessing predation risk: optimal behaviour and rules of thumb.

    PubMed

    Welton, Nicky J; McNamara, John M; Houston, Alasdair I

    2003-12-01

    We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.

  8. A positive feedback at the cellular level promotes robustness and modulation at the circuit level

    PubMed Central

    Dethier, Julie; Drion, Guillaume; Franci, Alessio

    2015-01-01

    This article highlights the role of a positive feedback gating mechanism at the cellular level in the robustness and modulation properties of rhythmic activities at the circuit level. The results are presented in the context of half-center oscillators, which are simple rhythmic circuits composed of two reciprocally connected inhibitory neuronal populations. Specifically, we focus on rhythms that rely on a particular excitability property, the postinhibitory rebound, an intrinsic cellular property that elicits transient membrane depolarization when released from hyperpolarization. Two distinct ionic currents can evoke this transient depolarization: a hyperpolarization-activated cation current and a low-threshold T-type calcium current. The presence of a slow activation is specific to the T-type calcium current and provides a slow positive feedback at the cellular level that is absent in the cation current. We show that this slow positive feedback is required to endow the network rhythm with physiological modulation and robustness properties. This study thereby identifies an essential cellular property to be retained at the network level in modeling network robustness and modulation. PMID:26311181

  9. A simple biosphere model (SiB) for use within general circulation models

    NASA Technical Reports Server (NTRS)

    Sellers, P. J.; Mintz, Y.; Sud, Y. C.; Dalcher, A.

    1986-01-01

    A simple, realistic biosphere model for calculating the transfer of energy, mass and momentum between the atmosphere and the vegetated surface of the earth has been developed for use in atmospheric general circulation models. The vegetation in each terrestrial model grid is represented by an upper level, representing the perennial canopy of trees and shrubs, and a lower level, representing the annual cover of grasses and other herbaceous species. The vegetation morphology and the physical and physiological properties of the vegetation layers determine such properties as: the reflection, transmission, absorption and emission of direct and diffuse radiation; the infiltration, drainage, and storage of the residual rainfall in the soil; and the control over the stomatal functioning. The model, with prescribed vegetation parameters and interactive soil moisture, can be used for prediction of the atmospheric circulation and precipitation fields for short periods of up to a few weeks.

  10. The power to detect linkage in complex disease by means of simple LOD-score analyses.

    PubMed Central

    Greenberg, D A; Abreu, P; Hodge, S E

    1998-01-01

    Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328

  12. Design Through Manufacturing: The Solid Model-Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2002-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts reflecting every detail of the finished product. Ideally, in the aerospace industry, these models should fulfill two very important functions: (1) provide numerical control information for automated manufacturing of precision parts, and (2) enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in aircraft and space vehicles. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. Presently, the process of preparing CAD models for FEA consumes a great deal of the analyst's time.

  13. Two Activities with a Simple Model of the Solar System: Discovering Kepler's 3rd Law and Investigating Apparent Motion of Venus

    ERIC Educational Resources Information Center

    Rovšek, Barbara; Guštin, Andrej

    2018-01-01

    An astronomy "experiment" composed of three parts is described in the article. Being given necessary data a simple model of inner planets of the solar system is made in the first part with planets' circular orbits using appropriate scale. In the second part revolution of the figurines used as model representations of the planets along…

  14. Impact of river water levels on the simulation of stream-aquifer exchanges over the Upper Rhine alluvial aquifer (France/Germany)

    NASA Astrophysics Data System (ADS)

    Vergnes, Jean-Pierre; Habets, Florence

    2018-05-01

    This study aims to assess the sensitivity of the simulated stream-aquifer exchanges to river level estimation within a hydrogeological model of the Upper Rhine alluvial aquifer (France/Germany), characterized as a large shallow aquifer with numerous hydropower dams. Two specific points are addressed: errors associated with digital elevation models (DEMs) and errors associated with the estimation of river level. The fine-resolution raw Shuttle Radar Topography Mission dataset is used to assess the impact of the DEM uncertainties. Specific corrections are used to overcome these uncertainties: a simple moving average is applied to the topography along the rivers, and additional data are used along the Rhine River to account for the numerous dams. Then, the impact of the river-level temporal variations is assessed through two different methods based on observed rating curves and on the Manning formula. Results are evaluated against observation data from 37 river-level points located over the aquifer, 190 piezometers, and a spatial database of wetlands. DEM uncertainties affect the spatial variability of the stream-aquifer exchanges by inducing strong noise and unrealistic peaks. The corrected DEM reduces the biases between observations and simulations by 22 and 51% for the river levels and the river discharges, respectively. It also improves the agreement between simulated groundwater overflows and observed wetlands. Introducing river-level time variability increases the stream-aquifer exchange range and reduces the piezometric head variability. These results confirm the need to better assess river levels in regional hydrogeological modeling, especially for applications in which stream-aquifer exchanges are important.
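
    The Manning-formula method mentioned above can be sketched for the simplest case of a wide rectangular channel, where the hydraulic radius is approximated by the flow depth and the discharge relation can be inverted for the depth in closed form. All parameter values below are illustrative, not taken from the Rhine model.

```python
def manning_depth(Q, width, slope, n_manning=0.03):
    """Water depth (m) from the Manning formula for a wide rectangular
    channel, approximating the hydraulic radius by the depth:
        Q = (1/n) * width * h**(5/3) * sqrt(slope)
    solved for h. All parameter values are illustrative."""
    return (n_manning * Q / (width * slope ** 0.5)) ** (3.0 / 5.0)

# Example: 150 m3/s in a 60 m wide reach with a 0.0005 bed slope
h = manning_depth(Q=150.0, width=60.0, slope=5e-4)
print(f"river depth ~ {h:.2f} m")   # river level = bed elevation + h
```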

  15. Modeling of two-phase porous flow with damage

    NASA Astrophysics Data System (ADS)

    Cai, Z.; Bercovici, D.

    2009-12-01

    Two-phase dynamics has been broadly studied in Earth science for convective systems. We investigate the basic physics of compaction with damage theory and present preliminary results for both steady-state and time-dependent transport when melt migrates through a porous medium. In our simple 1-D model, damage plays an important role when we consider the ascent of a melt-rich mixture at constant velocity. Melt segregation becomes more difficult, so that in the steady-state compaction profile the porosity is larger than in simple compaction. A scaling analysis of the compaction equation is performed to predict the behavior of melt segregation with damage. The time-dependent behavior of the compacting system is investigated by looking at solitary wave solutions to the two-phase model. We assume that additional melt is injected into the fractured material through a single pulse of prescribed shape and velocity. The presence of damage allows the pulse to travel further than in simple compaction. More melt can therefore be injected into the two-phase mixture, and future applications such as carbon dioxide injection are proposed.

  16. Energy economy in the actomyosin interaction: lessons from simple models.

    PubMed

    Lehman, Steven L

    2010-01-01

    The energy economy of the actomyosin interaction in skeletal muscle is both scientifically fascinating and practically important. This chapter demonstrates how simple cross-bridge models have guided research regarding the energy economy of skeletal muscle. Parameter variation on a very simple two-state strain-dependent model shows that early events in the actomyosin interaction strongly influence energy efficiency, and late events determine maximum shortening velocity. Addition of a weakly-bound state preceding force production allows weak coupling of cross-bridge mechanics and ATP turnover, so that a simple three-state model can simulate the velocity-dependence of ATP turnover. Consideration of the limitations of this model leads to a review of recent evidence regarding the relationship between ligand binding states, conformational states, and macromolecular structures of myosin cross-bridges. Investigation of the fine structure of the actomyosin interaction during the working stroke continues to inform fundamental research regarding the energy economy of striated muscle.
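
    The steady-state bookkeeping for a minimal two-state (detached/attached) cycle with constant rates is standard and makes the chapter's point concrete: the duty ratio and cycling (ATPase) flux follow from the attachment and detachment rates, and the maximum shortening velocity is commonly scaled as the working-stroke distance times the detachment rate, so late (detachment) events set Vmax. The rates and stroke size below are illustrative.

```python
def two_state_crossbridge(f_attach, g_detach, stroke=5e-9):
    """Steady state of a minimal two-state (detached <-> attached)
    cross-bridge cycle with constant rates (1/s). Standard two-state
    results: duty ratio f/(f+g) and cycling flux f*g/(f+g); Vmax is
    often scaled as stroke * g, since detachment limits shortening."""
    duty = f_attach / (f_attach + g_detach)
    flux = f_attach * g_detach / (f_attach + g_detach)   # ATPase per head
    v_max_scale = stroke * g_detach                      # m/s per half-sarcomere
    return duty, flux, v_max_scale

print(two_state_crossbridge(f_attach=10.0, g_detach=100.0))
```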

  17. Simulation of Acoustic Noise Generated by an Airbreathing, Beam-Powered Launch Vehicle

    NASA Astrophysics Data System (ADS)

    Kennedy, W. C.; Van Laak, P.; Scarton, H. A.; Myrabo, L. N.

    2005-04-01

    A simple acoustic model is developed for predicting the noise signature vs. power level for advanced laser-propelled lightcraft — capable of single-stage flights into low Earth orbit. This model predicts the noise levels generated by a pulsed detonation engine (PDE) during the initial lift-off and acceleration phase, for two representative `tractor-beam' lightcraft designs: a 1-place `Mercury' vehicle (2.5-m diameter, 900-kg); and a larger 5-place `Apollo' vehicle (5-m diameter, 5555-kg) — both the subject of an earlier study. The use of digital techniques to simulate the expected PDE noise signature is discussed, and three examples of fly-by noise signatures are presented. The reduction, or complete elimination of perceptible noise from such engines, can be accomplished by shifting the pulse frequency into the supra-audible or sub-audible range.

  18. Improving the chi-squared approximation for bivariate normal tolerance regions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.

    1993-01-01

    Let X be a two-dimensional random variable distributed according to N_2(μ, Σ) and let X̄ and S be the respective sample mean and covariance matrix calculated from N observations of X. Given a containment probability β and a level of confidence γ, we seek a number c, depending only on N, β, and γ, such that the ellipsoid R = {x : (x - X̄)' S^{-1} (x - X̄) ≤ c} is a tolerance region of content β and level γ; i.e., R has probability γ of containing at least 100β percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute -- a multiple of the ratio of certain chi-squared percentage points -- is badly biased for small N. For the bivariate normal case, most of the bias can be removed by simple adjustment using a factor A which depends on β and γ. This paper provides values of A for various β and γ so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte Carlo simulation and simple regression modelling can be used to improve an existing approximation.
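
    The kind of Monte Carlo calculation used to calibrate such an adjustment factor can be sketched directly: since the problem is invariant under affine transformations, one can sample from N_2(0, I), build the ellipsoid from each sample, estimate its content, and record how often the content reaches β. The candidate c and sample sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_level(c, N, beta, reps=2000, content_draws=4000):
    """Monte Carlo estimate of the confidence level gamma: the fraction of
    size-N samples whose ellipsoid {x: (x-xbar)' S^{-1} (x-xbar) <= c}
    contains at least beta of the underlying N2(0, I) distribution
    (taken WLOG, by affine invariance)."""
    z = rng.standard_normal((content_draws, 2))   # points for content estimate
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal((N, 2))
        xbar = x.mean(axis=0)
        Sinv = np.linalg.inv(np.cov(x, rowvar=False))
        d = z - xbar
        content = np.mean(np.einsum('ij,jk,ik->i', d, Sinv, d) <= c)
        hits += content >= beta
    return hits / reps

# Example: check a candidate c for N = 20, beta = 0.90
print(coverage_level(c=8.0, N=20, beta=0.90))
```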

  19. A simple 2D biofilm model yields a variety of morphological features.

    PubMed

    Hermanowicz, S W

    2001-01-01

    A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
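
    A toy version of such a cellular automaton is easy to write down, though the rules below (exponential nutrient attenuation with biomass depth, growth into the empty cell above, erosion of exposed surface cells) are illustrative stand-ins for the published model's rules rather than a reproduction of them.

```python
import numpy as np

rng = np.random.default_rng(1)

def biofilm_step(biomass, nutrient_bulk=1.0, p_grow=0.5, p_erode=0.05):
    """One update of a toy 2D biofilm cellular automaton with the three
    generic processes named above: growth, nutrient transport, erosion.
    Row 0 is the substratum; row index increases toward the bulk liquid."""
    rows, cols = biomass.shape
    # Crude transport: nutrient attenuates with the biomass lying above a cell.
    overburden = np.cumsum(biomass[::-1], axis=0)[::-1]
    nutrient = nutrient_bulk * np.exp(-0.5 * overburden)
    new = biomass.copy()
    for i in range(rows - 1):           # growth: divide into the empty cell above
        for j in range(cols):
            if biomass[i, j] and not biomass[i + 1, j]:
                if rng.random() < p_grow * nutrient[i, j]:
                    new[i + 1, j] = 1
    # Erosion: surface cells (empty cell above; wrap at the top row ignored
    # for this toy) detach with a small probability.
    surface = biomass.astype(bool) & ~np.roll(biomass, -1, axis=0).astype(bool)
    new[surface & (rng.random(biomass.shape) < p_erode)] = 0
    return new

grid = np.zeros((40, 60), dtype=int)
grid[0, :] = rng.random(60) < 0.3       # sparse initial attachment on substratum
for _ in range(100):
    grid = biofilm_step(grid)
print("biofilm cells:", grid.sum())
```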

  20. Polycrystalline ZrTe5 Parametrized as a Narrow-Band-Gap Semiconductor for Thermoelectric Performance

    DOE PAGES

    Miller, Samuel A.; Witting, Ian; Aydemir, Umut; ...

    2018-01-24

    The transition-metal pentatellurides HfTe5 and ZrTe5 have been studied for their exotic transport properties with much debate over the transport mechanism, band gap, and cause of the resistivity behavior, including a large low-temperature resistivity peak. Single crystals grown by the chemical-vapor-transport method have shown an n-p transition of the Seebeck coefficient at the same temperature as a peak in the resistivity. We show that behavior similar to that of single crystals can be observed in iodine-doped polycrystalline samples but that undoped polycrystalline samples exhibit drastically different properties: they are p type over the entire temperature range. Additionally, the thermal conductivity for polycrystalline samples is much lower, 1.5 W m^-1 K^-1, than previously reported for single crystals. It is found that the polycrystalline ZrTe5 system can be modeled as a simple semiconductor with conduction and valence bands both contributing to transport, separated by a band gap of 20 meV. This model demonstrates to first order that a simple two-band model can explain the transition from n- to p-type behavior and the cause of the anomalous resistivity peak. Combined with the experimental data, the two-band model shows that carrier concentration variation is responsible for differences in behavior between samples. Using the two-band model, the thermoelectric performance at different doping levels is predicted, finding zT = 0.2 and 0.1 for p and n type, respectively, at 300 K, and zT = 0.23 and 0.32 for p and n type at 600 K. Given the reasonably high zT that is comparable in magnitude for both n and p type, a thermoelectric device with a single compound used for both legs is feasible.
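
    The mixing rules of a two-band model of this kind are standard: partial conductivities add, and the total Seebeck coefficient is the conductivity-weighted average of the signed partial coefficients, which is what allows an n-p transition as the balance between bands shifts. The sketch below takes the partial transport coefficients as given; the numbers are illustrative, not fits to ZrTe5.

```python
def two_band_transport(sigma_n, seebeck_n, sigma_p, seebeck_p):
    """Standard two-band mixing rules: total conductivity is the sum of the
    partial conductivities, and the total Seebeck coefficient is the
    conductivity-weighted average of the (signed) partial coefficients."""
    sigma = sigma_n + sigma_p
    seebeck = (sigma_n * seebeck_n + sigma_p * seebeck_p) / sigma
    return sigma, seebeck

# Example: hole conduction barely dominating (S_n < 0, S_p > 0), so the net
# Seebeck coefficient is small and can change sign with doping.
sigma, S = two_band_transport(sigma_n=4.0e4, seebeck_n=-120e-6,
                              sigma_p=5.0e4, seebeck_p=100e-6)
print(f"sigma = {sigma:.3g} S/m, S = {S*1e6:.1f} uV/K")
```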

  3. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
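
    A minimal version of such a model, shown below, is a discrete-time, discrete-state stochastic SIR with binomial draws for infection and recovery (process error) and binomial reporting (observation error). It is a generic construction in the spirit of the paper, not the authors' exact specification; fitting it with JAGS, NIMBLE, or Stan is the separate step the paper benchmarks.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_epidemic(N=10000, I0=10, beta=0.4, gamma=0.2,
                      report_prob=0.3, steps=60):
    """Discrete-time, discrete-state stochastic SIR with observation error:
    binomial draws for infection and recovery (process error) and a
    binomial reporting model (observation error). Illustrative sketch."""
    S, I = N - I0, I0
    observed = []
    for _ in range(steps):
        p_inf = 1.0 - np.exp(-beta * I / N)       # per-susceptible infection prob
        new_inf = rng.binomial(S, p_inf)           # process error
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S -= new_inf
        I += new_inf - new_rec
        observed.append(rng.binomial(new_inf, report_prob))  # observation error
    return np.array(observed)

cases = simulate_epidemic()
print(cases[:10])
```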

  4. A simple model for the evolution of melt pond coverage on permeable Arctic sea ice

    NASA Astrophysics Data System (ADS)

    Popović, Predrag; Abbot, Dorian

    2017-05-01

    As the melt season progresses, sea ice in the Arctic often becomes permeable enough to allow for nearly complete drainage of meltwater that has collected on the ice surface. Melt ponds that remain after drainage are hydraulically connected to the ocean and correspond to regions of sea ice whose surface is below sea level. We present a simple model for the evolution of melt pond coverage on such permeable sea ice floes in which we allow for spatially varying ice melt rates and assume the whole floe is in hydrostatic balance. The model is represented by two simple ordinary differential equations, where the rate of change of pond coverage depends on the pond coverage. All the physical parameters of the system are summarized by four strengths that control the relative importance of the terms in the equations. The model both fits observations and allows us to understand the behavior of melt ponds in a way that is often not possible with more complex models. Examples of insights we can gain from the model are that (1) the pond growth rate is more sensitive to changes in bare sea ice albedo than changes in pond albedo, (2) ponds grow slower on smoother ice, and (3) ponds respond strongest to freeboard sinking on first-year ice and sidewall melting on multiyear ice. We also show that under a global warming scenario, pond coverage would increase, decreasing the overall ice albedo and leading to ice thinning that is likely comparable to thinning due to direct forcing. Since melt pond coverage is one of the key parameters controlling the albedo of sea ice, understanding the mechanisms that control the distribution of pond coverage will help improve large-scale model parameterizations and sea ice forecasts in a warming climate.
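
    Since the paper's exact terms are not reproduced in the abstract, the sketch below integrates a generic stand-in for a pond-coverage equation dA/dt = f(A), with a logistic-style growth term and a linear loss term; the functional form and rates are assumptions for illustration only, not the paper's four-strength parameterization.

```python
import numpy as np

def integrate_pond_coverage(A0=0.1, days=30.0, dt=0.01,
                            growth=0.35, decay=0.15):
    """Toy stand-in for a pond-coverage ODE dA/dt = f(A): logistic-style
    growth minus a linear loss, integrated with forward Euler. The
    functional form and rates are illustrative assumptions."""
    n = int(days / dt)
    A = np.empty(n + 1)
    A[0] = A0
    for k in range(n):
        dAdt = growth * A[k] * (1.0 - A[k]) - decay * A[k]
        A[k + 1] = np.clip(A[k] + dt * dAdt, 0.0, 1.0)  # keep fraction in [0, 1]
    return A

A = integrate_pond_coverage()
print(f"final pond fraction: {A[-1]:.2f}")
```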

  5. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
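
    The identity that such nomograms encode is Cohen's kappa for two raters and a binary outcome: kappa = (p_o - p_e) / (1 - p_e), with chance agreement p_e computed from the marginal prevalence. Assuming for simplicity that both raters share the same marginal prevalence pi, p_e = pi^2 + (1 - pi)^2, and the kappa paradox falls out immediately:

```python
def kappa_from_agreement(p_agree: float, prevalence: float) -> float:
    """Cohen's kappa for two raters and a binary outcome, assuming both
    raters share the same marginal prevalence. Chance agreement is
    p_e = pi**2 + (1 - pi)**2, and kappa = (p_o - p_e) / (1 - p_e)."""
    p_chance = prevalence ** 2 + (1.0 - prevalence) ** 2
    return (p_agree - p_chance) / (1.0 - p_chance)

# The kappa paradox: the same 85% raw agreement gives very different kappas
# depending on the marginal prevalence.
for prev in (0.5, 0.9):
    print(f"prevalence {prev}: kappa = {kappa_from_agreement(0.85, prev):.2f}")
```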

  6. A level set method for determining critical curvatures for drainage and imbibition.

    PubMed

    Prodanović, Masa; Bryant, Steven L

    2006-12-15

    An accurate description of the mechanics of pore level displacement of immiscible fluids could significantly improve the predictions from pore network models of capillary pressure-saturation curves, interfacial areas and relative permeability in real porous media. If we assume quasi-static displacement, at constant pressure and surface tension, pore scale interfaces are modeled as constant mean curvature surfaces, which are not easy to calculate. Moreover, the extremely irregular geometry of natural porous media makes it difficult to evaluate surface curvature values and corresponding geometric configurations of two fluids. Finally, accounting for the topological changes of the interface, such as splitting or merging, is nontrivial. We apply the level set method for tracking and propagating interfaces in order to robustly handle topological changes and to obtain geometrically correct interfaces. We describe a simple but robust model for determining critical curvatures for throat drainage and pore imbibition. The model is set up for quasi-static displacements but it nevertheless captures both reversible and irreversible behavior (Haines jump, pore body imbibition). The pore scale grain boundary conditions are extracted from model porous media and from imaged geometries in real rocks. The method gives quantitative agreement with measurements and with other theories and computational approaches.

  7. The consequences of ignoring measurement invariance for path coefficients in structural equation models

    PubMed Central

    Guenole, Nigel; Brown, Anna

    2014-01-01

    We report a Monte Carlo study examining the effects of two strategies for handling measurement non-invariance – modeling and ignoring non-invariant items – on structural regression coefficients between latent variables measured with item response theory models for categorical indicators. These strategies were examined across four levels and three types of non-invariance – non-invariant loadings, non-invariant thresholds, and combined non-invariance on loadings and thresholds – in simple, partial, mediated and moderated regression models where the non-invariant latent variable occupied predictor, mediator, and criterion positions in the structural regression models. When non-invariance is ignored in the latent predictor, the focal group regression parameters are biased in the opposite direction to the difference in loadings and thresholds relative to the referent group (i.e., lower loadings and thresholds for the focal group lead to overestimated regression parameters). With criterion non-invariance, the focal group regression parameters are biased in the same direction as the difference in loadings and thresholds relative to the referent group. While unacceptable levels of parameter bias were confined to the focal group, bias occurred at considerably lower levels of ignored non-invariance than was previously recognized in referent and focal groups. PMID:25278911

  8. A creep cavity growth model for creep-fatigue life prediction of a unidirectional W/Cu composite

    NASA Astrophysics Data System (ADS)

    Kim, Young-Suk; Verrilli, Michael J.; Halford, Gary R.

    1992-05-01

    A microstructural model was developed to predict creep-fatigue life in a (0)(sub 4), 9 volume percent tungsten fiber-reinforced copper matrix composite at the temperature of 833 K. The mechanism of failure of the composite is assumed to be governed by the growth of quasi-equilibrium cavities in the copper matrix of the composite, based on the microscopically observed failure mechanisms. The methodology uses a cavity growth model developed for prediction of creep fracture. Instantaneous values of strain rate and stress in the copper matrix during fatigue cycles were calculated and incorporated in the model to predict cyclic life. The stress in the copper matrix was determined by use of a simple two-bar model for the fiber and matrix during cyclic loading. The model successfully predicted the composite creep-fatigue life under tension-tension cyclic loading through the use of this instantaneous matrix stress level. Inclusion of additional mechanisms such as cavity nucleation, grain boundary sliding, and the effect of fibers on matrix-stress level would result in more generalized predictions of creep-fatigue life.

  10. Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors

    NASA Technical Reports Server (NTRS)

    Matthies, Larry; Grandjean, Pierrick

    1993-01-01

    Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.
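
    A generic version of such a performance model (not the report's exact one) treats height estimation over the pixels covering an obstacle as a Gaussian threshold test, so that detection and false-alarm probabilities follow from the obstacle size, range, sensor angular resolution, and range noise:

```python
import math

def detection_probs(height, width, distance, ifov=0.001, sigma_range=0.05,
                    threshold=0.10):
    """Generic Gaussian threshold model for obstacle detection with an
    imaging range sensor (illustrative, not the report's exact model).
    The obstacle subtends n pixels; averaging over them reduces the
    effective height noise by sqrt(n). Detection is declared when the
    estimated height exceeds a threshold."""
    pixels = max(1.0, (width / (distance * ifov)) * (height / (distance * ifov)))
    sigma_eff = sigma_range / math.sqrt(pixels)
    q = lambda z: 0.5 * math.erfc(z / math.sqrt(2.0))   # Gaussian upper tail
    p_detect = q((threshold - height) / sigma_eff)
    p_false_alarm = q(threshold / sigma_eff)
    return p_detect, p_false_alarm

# Example: a 15 cm tall, 30 cm wide obstacle at 10 m with 1 mrad pixels
print(detection_probs(height=0.15, width=0.30, distance=10.0))
```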

  11. The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces

    NASA Astrophysics Data System (ADS)

    Vuik, C.; Saghir, A.; Boerstoel, G. P.

    2000-08-01

    Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
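
    The acceleration can be sketched as a generalized conjugate residual (GCR) loop in which the stationary inner iteration plays the role of a preconditioner applied to the current residual. In the sketch below a Jacobi sweep stands in for the SIMPLE(R) sweep, and the tridiagonal test matrix is illustrative.

```python
import numpy as np

def gcr(A, b, precond, tol=1e-8, maxit=100):
    """Generalized Conjugate Residual sketch: accelerates a stationary
    inner iteration (abstracted as `precond`, e.g. one SIMPLE(R) sweep)
    by minimizing the residual over the Krylov space it generates."""
    x = np.zeros_like(b)
    r = b - A @ x
    S, V = [], []                       # search directions and A*directions
    for _ in range(maxit):
        s = precond(r)                  # inner iteration applied to residual
        v = A @ s
        for sj, vj in zip(S, V):        # orthogonalize v against previous A*s
            alpha = v @ vj
            s, v = s - alpha * sj, v - alpha * vj
        nv = np.linalg.norm(v)
        s, v = s / nv, v / nv
        S.append(s); V.append(v)
        beta = r @ v
        x, r = x + beta * s, r - beta * v
        if np.linalg.norm(r) < tol:
            break
    return x

# Example: tridiagonal test system with a Jacobi sweep as the inner iteration
n = 50
A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)
x = gcr(A, b, precond=lambda r: r / np.diag(A))
print("residual:", np.linalg.norm(A @ x - b))
```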

  12. Fractal Viscous Fingering in Fracture Networks

    NASA Astrophysics Data System (ADS)

    Boyle, E.; Sams, W.; Ferer, M.; Smith, D. H.

    2007-12-01

    We have used two very different physical models and computer codes to study miscible injection of a low-viscosity fluid into a simple fracture network, where it displaces a much more viscous "defending" fluid through "rock" that is otherwise impermeable. One code (NETfLow) is a standard pore-level model, originally intended to treat laboratory-scale experiments; it assumes negligible mixing of the two fluids. The other code (NFFLOW) was written to treat reservoir-scale engineering problems; it explicitly treats the flow through the fractures and allows for significant mixing of the fluids at the interface. Both codes treat the fractures as parallel plates of different effective apertures. Results are presented for the composition profiles from both codes. Independent of the degree of fluid mixing, the profiles from both models have a functional form identical to that for fractal viscous fingering (i.e., diffusion-limited aggregation, DLA). The two codes that solve the equations for different models gave similar results; together they suggest that the injection of a low-viscosity fluid into large-scale fracture networks may be much more significantly affected by fractal fingering than previously illustrated.

  13. Data fusion for CD metrology: heterogeneous hybridization of scatterometry, CDSEM, and AFM data

    NASA Astrophysics Data System (ADS)

    Hazart, J.; Chesneau, N.; Evin, G.; Largent, A.; Derville, A.; Thérèse, R.; Bos, S.; Bouyssou, R.; Dezauzier, C.; Foucher, J.

    2014-04-01

    The manufacturing of next-generation semiconductor devices forces metrology tool providers to make an exceptional effort to meet the requirements for precision, accuracy and throughput stated in the ITRS. In recent years, hybrid metrology (based on data fusion theories) has been investigated as a new methodology for advanced metrology [1][2][3]. This paper provides a new point of view on data fusion for metrology through experiments and simulations. The techniques are presented concretely, in terms of the equations to be solved. The first point of view is High Level Fusion, the post-processing of simple numbers with their associated uncertainties reported by the tools. In this paper, it is divided into two stages: one for calibration, to reach accuracy, and a second to reach precision thanks to Bayesian Fusion. From our perspective, the first stage is mandatory before applying the second stage, which is the one commonly presented [1]. However, a reference metrology system is necessary for this fusion, so precision can be improved if and only if the tools to be fused are perfectly matched, at least for some parameters. We provide a methodology, similar to a multidimensional TMU, able to perform this matching exercise. It is demonstrated on a 28 nm node backend lithography case. The second point of view is Deep Level Fusion, which works instead with raw data and their combination. In the approach presented here, the analysis of each raw data set is based on a parametric model and connections between the parameters of each tool. In order to allow OCD/SEM Deep Level Fusion, a SEM Compact Model derived from [4] has been developed and compared to AFM. As far as we know, this is the first time such techniques have been coupled at Deep Level. A numerical study on the case of a simple stack for lithography is performed. We show strict equivalence of Deep Level Fusion and High Level Fusion when the tools are sensitive and the models are perfect. When one of the tools can be considered a reference and the second is biased, High Level Fusion is far superior to standard Deep Level Fusion. Otherwise, only the second stage of High Level Fusion is possible (Bayesian Fusion), and it does not provide a substantial advantage. Finally, when OCD is equipped with methods for bias detection [5], Deep Level Fusion outclasses the two-stage High Level Fusion and will benefit the industry in the production of the most advanced nodes.
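
    In the simplest Gaussian case, the Bayesian (second-stage) fusion of calibrated per-tool numbers amounts to precision weighting; a minimal sketch with invented values (the paper's two-stage procedure additionally includes the calibration step):

        def fuse(measurements):
            """Combine (value, uncertainty) pairs assumed unbiased and
            independent Gaussian after calibration (inverse-variance weights)."""
            weights = [1.0 / u**2 for _, u in measurements]
            value = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
            sigma = (1.0 / sum(weights)) ** 0.5
            return value, sigma

        cd, u = fuse([(28.4, 0.5),    # scatterometry CD, nm (illustrative)
                      (28.9, 0.8),    # CD-SEM
                      (28.1, 1.2)])   # AFM
        print(f"fused CD = {cd:.2f} +/- {u:.2f} nm")   # precision improves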

  14. Supercomputer use in orthopaedic biomechanics research: focus on functional adaptation of bone.

    PubMed

    Hart, R T; Thongpreda, N; Van Buskirk, W C

    1988-01-01

    The authors describe two biomechanical analyses carried out using numerical methods. One is an analysis of the stress and strain in a human mandible, and the other involves modeling the adaptive response of a sheep bone to mechanical loading. The computing environment required for the two types of analyses is discussed. It is shown that a simple stress analysis of a geometrically complex mandible can be accomplished using a minicomputer. However, more sophisticated analyses of the same model with dynamic loading or nonlinear materials would require supercomputer capabilities. A supercomputer is also required for modeling the adaptive response of living bone, even when simple geometric and material models are used.

  15. Commander Naval Air Forces (CNAF) Flight Hour Program: Budgeting and Execution Response to the Implementation of the Fleet Response Plan and OP-20 Pricing Model Changes

    DTIC Science & Technology

    2005-06-01

    seat ratio (CSR). The wartime CSR is the result of wartime manning levels divided by Primary Aircraft Authorized (PAA). The Aircrew Manning Factor...justifies the FHP. As ADM Mallon suggested, let us look at two simple examples of today's best business practices with Starbucks and Southwest Airlines...focusing on efforts to achieve optimal efficiency in routine tasks of their operation. The Starbucks example involves redesigning ice scoops and

  16. Surface Energy and Mass Balance Model for Greenland Ice Sheet and Future Projections

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojian

    The Greenland Ice Sheet contains nearly 3 million cubic kilometers of glacial ice. If the entire ice sheet completely melted, sea level would rise by nearly 7 meters. There is thus considerable interest in monitoring the mass balance of the Greenland Ice Sheet. Each year, the ice sheet gains ice from snowfall and loses ice through iceberg calving and surface melting. In this thesis, we develop, validate and apply a physics-based numerical model to estimate current and future surface mass balance of the Greenland Ice Sheet. The numerical model consists of a coupled surface energy balance and englacial model that is simple enough that it can be used for long time scale model runs, but unlike previous empirical parameterizations, has a physical basis. The surface energy balance model predicts ice sheet surface temperature and melt production. The englacial model predicts the evolution of temperature and meltwater within the ice sheet. These two models can be combined with estimates of precipitation (snowfall) to estimate the mass balance over the Greenland Ice Sheet. We first compare model performance with in-situ observations to demonstrate that the model works well. We next evaluate how predictions are degraded when we statistically downscale global climate data. We find that a simple, nearest-neighbor interpolation scheme with a lapse rate correction is able to adequately reproduce melt patterns on the Greenland Ice Sheet. These results are comparable to those obtained using empirical Positive Degree Day (PDD) methods. Having validated the model, we next drive the ice sheet model using the suite of atmospheric model runs available through the CMIP5 atmospheric model intercomparison, which in turn build upon the RCP 8.5 (business-as-usual) scenario. From this exercise we predict how much surface melt production will increase in the coming century: 4-10 cm of sea level equivalent, depending on the CMIP5 model. Finally, we try to bound meltwater production from CMIP5 data with the model by assuming that the Greenland Ice Sheet is covered in black carbon (lowering the albedo) and perpetually covered by optically thick clouds (increasing longwave radiation). This upper bound roughly triples surface meltwater production, resulting in 30 cm of sea level rise by 2100. These model estimates, combined with prior research suggesting an additional 40-100 cm of sea level rise associated with dynamical discharge, suggest that the Greenland Ice Sheet is poised to contribute significantly to sea level rise in the coming century.
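
    A minimal sketch of the downscaling scheme described above (nearest-neighbor interpolation plus a lapse-rate correction); the lapse rate and values are assumptions for illustration:

        LAPSE_RATE = -6.5e-3   # K per m, an assumed near-surface lapse rate

        def downscale_temp(t_gcm, z_gcm, z_local):
            """Temperature at a fine ice-sheet cell from the nearest coarse
            GCM cell, corrected for the elevation difference."""
            return t_gcm + LAPSE_RATE * (z_local - z_gcm)

        # nearest GCM cell at 1200 m reports -5.0 C; ice-sheet cell at 2000 m:
        print(downscale_temp(-5.0, 1200.0, 2000.0))   # about -10.2 C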

  17. Low Reynolds number two-equation modeling of turbulent flows

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Shih, T.-H.

    1991-01-01

    A k-epsilon model that accounts for viscous and wall effects is presented. The proposed formulation does not contain the local wall distance, thereby greatly simplifying its application to complex geometries. The formulation is based on an existing k-epsilon model that agrees very well with the results of direct numerical simulation. The new form is compared with nine different two-equation models and with direct numerical simulation for a fully developed channel flow at Re = 3300. The simple flow configuration allows a comparison free from numerical inaccuracies. The computed results show that only a few of the considered forms exhibit satisfactory agreement with the channel flow data. The model shows an improvement with respect to the existing formulations.

  18. Teaching Imperfect Competition at the Principles Level.

    ERIC Educational Resources Information Center

    Weber, William V.; Highfill, Jannett K.

    1990-01-01

    Argues that, although most economics textbooks' explanations of imperfect competition may involve three to five models, the concept can be taught using a single, simple model. Uses several business/economic examples as illustrations. (DB)

  19. Simple potential model for interaction of dark particles with massive bodies

    NASA Astrophysics Data System (ADS)

    Takibayev, Nurgali

    2018-01-01

    A simple model for the interaction of dark particles with matter, based on resonance behavior in a three-body system, is proposed. The model describes resonant amplification of the effective interaction between two massive bodies at large distances between them. The phenomenon is explained by the catalytic action of dark particles rescattering off a system of two heavy bodies, understood here as big stellar objects. Resonant amplification of the effective interaction between the two heavy bodies imitates an increase in their mass while their true gravitational mass remains unchanged. Such increased interaction leads to more pronounced gravitational lensing of bypassing light. It is shown that the effective interaction between the heavy bodies changes at larger distances and can become repulsive.

  20. Backscattering and absorption coefficients for electrons: Solutions of invariant embedding transport equations using a method of convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, C.; Brizuela, H.; Heluani, S. P.

    2014-05-21

    The backscattering coefficient is a magnitude whose measurement is fundamental for the characterization of materials with techniques that make use of particle beams, particularly when performing microanalysis. In this work, we report the results of an analytic method to calculate the backscattering and absorption coefficients of electrons under conditions similar to those of electron probe microanalysis. Starting from a five-level state-ladder model in 3D, we deduced a set of coupled integro-differential equations for the coefficients with a method known as invariant embedding. By means of a procedure proposed by the authors, called the method of convergence, two types of approximate solutions for the set of equations, namely complete and simple solutions, can be obtained. Although the simple solutions were initially proposed as auxiliary forms to solve higher rank equations, they turned out to be also useful for the estimation of the aforementioned coefficients. In previous reports, we have presented results obtained with the complete solutions. In this paper, we present results obtained with the simple solutions of the coefficients, which exhibit a good degree of fit with the experimental data. Both the model and the calculation method presented here can be generalized to other techniques that make use of different sorts of particle beams.

  1. Magnified Effects of Changes in NIH Research Funding Levels.

    PubMed

    Larson, Richard C; Ghaffarzadegan, Navid; Diaz, Mauricio Gomez

    2012-12-01

    What happens within the university-based research enterprise when a federal funding agency abruptly changes research grant funding levels, up or down? We use simple difference equation models to show that an apparently modest increase or decrease in funding levels can have dramatic effects on researchers, graduate students, postdocs, and the overall research enterprise. The amplified effect is due to grants lasting for an extended period, thereby requiring the majority of funds available in one year to pay for grants awarded in previous years. We demonstrate the effect in various ways, using National Institutes of Health data for two situations: the historical doubling of research funding from 1998 to 2003 and the possible effects of "sequestration" in January 2013. We posit human responses to such sharp movements in funding levels and offer suggestions for amelioration.
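
    A minimal sketch of the amplification mechanism (our toy version, not the authors' model): each grant pays out for several years, so most of a year's budget is committed to prior awards, and a modest budget cut produces a much larger swing in new awards.

        GRANT_YEARS = 4                              # assumed grant duration
        steady = 100.0
        budget = [steady] * 3 + [0.9 * steady] * 7   # a 10% cut in year 3

        new = [steady / GRANT_YEARS] * GRANT_YEARS   # steady-state history
        for b in budget:
            committed = sum(new[-(GRANT_YEARS - 1):])   # payments still owed
            new.append(max(0.0, b - committed))

        for year, n in enumerate(new[GRANT_YEARS:]):
            print(f"year {year}: new awards {n:5.1f}")
        # the 10% budget cut first shows up as a 40% drop in new awards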

  3. Deriving dynamical models from paleoclimatic records: application to glacial millennial-scale climate variability.

    PubMed

    Kwasniok, Frank; Lohmann, Gerrit

    2009-12-01

    A method for systematically deriving simple nonlinear dynamical models from ice-core data is proposed. It offers a tool to integrate models and theories with paleoclimatic data. The method is based on the unscented Kalman filter, a nonlinear extension of the conventional Kalman filter. Here, we adopt the abstract conceptual model of stochastically driven motion in a potential that allows for two distinctly different states. The parameters of the model-the shape of the potential and the noise level-are estimated from a North Greenland ice-core record. For the glacial period from 70 to 20 ky before present, a potential is derived that is asymmetric and almost degenerate. There is a deep well corresponding to a cold stadial state and a very shallow well corresponding to a warm interstadial state.
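
    A minimal sketch of the kind of conceptual model described (not the authors' fitted potential): overdamped motion in an asymmetric double-well potential driven by white noise, integrated with Euler-Maruyama.

        import numpy as np

        def dU(x, a=1.0, b=0.3):     # U(x) = x**4/4 - a*x**2/2 + b*x, asymmetric
            return x**3 - a * x + b

        rng = np.random.default_rng(0)
        dt, sigma, n = 1e-3, 0.6, 200_000
        x = np.empty(n)
        x[0] = -1.0                  # start in the deep (cold, stadial) well
        for i in range(1, n):
            x[i] = x[i-1] - dU(x[i-1]) * dt \
                   + sigma * np.sqrt(dt) * rng.standard_normal()
        print("fraction of time in the shallow (warm) well:", (x > 0).mean())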

  4. To Ask a Question, One Must Know Enough to Know What Is Not Known. Report No. 7802.

    ERIC Educational Resources Information Center

    Miyake, Naomi; Norman, Donald A.

    This study involved the manipulation of question-asking in a learning task. The hypothesis that learners should ask the most questions when their knowledge was well-matched to the level of presentation was tested, using two levels of background knowledge and two levels of difficulty of material to be learned. The more simple instructional…

  5. Limiting similarity of competitive species and demographic stochasticity

    NASA Astrophysics Data System (ADS)

    Zheng, Xiu-Deng; Deng, Ling-Ling; Qiang, Wei-Ya; Cressman, Ross; Tao, Yi

    2017-04-01

    The limiting similarity of competitive species and its relationship with the competitive exclusion principle is still one of the most important concepts in ecology. In the 1970s, May [R. M. May, Stability and Complexity in Model Ecosystems (Princeton University, Princeton, NJ, 1973)] developed a concise theoretical framework to investigate the limiting similarity of competitive species. His theoretical results show that no limiting similarity threshold can be identified in the deterministic model system such that species more similar than this threshold never coexist. Theoretically, for competitive species coexisting in an unvarying environment, deterministic interspecific interactions and demographic stochasticity can be considered two sides of the same coin. To investigate how the "tension" between these two forces affects the coexistence of competing species, a simple two-species competitive system based only on May's model system is transformed into an equivalent replicator equation. The effect of demographic stochasticity on the system stability is measured by the expected drift of the Lyapunov function. Our main results show that the limiting similarity of competitive species should not be considered an absolute measure. Specifically, very similar competitive species should be able to coexist in an environment with a high productivity level, but large differences between competitive species should be necessary in an ecosystem with a low productivity level.

  6. Preliminary Upper Estimate of Peak Currents in Transcranial Magnetic Stimulation at Distant Locations from a TMS Coil

    PubMed Central

    Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro

    2016-01-01

    Goals: Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods: We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results: Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion: The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance: At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221

  7. Cooling tower plume - model and experiment

    NASA Astrophysics Data System (ADS)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper presents a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. The paper concludes by presenting a simple experiment through which the results of the designed model will be validated in subsequent work.

  8. Herding, minority game, market clearing and efficient markets in a simple spin model framework

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav; Vosvrda, Miloslav

    2018-01-01

    We present a novel approach to the financial Ising model. Most studies utilize the model to find settings which generate returns closely mimicking financial stylized facts such as fat tails, volatility clustering and persistence. We tackle the model's utility from the other side and look for the combination of parameters which yields the return dynamics of an efficient market in the sense of the efficient market hypothesis. Working with the Ising model, we are able to present readily interpretable results, as the model is based on only two parameters. Apart from showing the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that market frictions (up to a certain level) and herding behavior of the market participants in fact do not go against market efficiency; what is more, they are needed for the markets to be efficient.

  9. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level images features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.

  10. Lorentz Trial Function for the Hydrogen Atom: A Simple, Elegant Exercise

    ERIC Educational Resources Information Center

    Sommerfeld, Thomas

    2011-01-01

    The quantum semester of a typical two-semester physical chemistry course is divided into two parts. The initial focus is on quantum mechanics and simple model systems for which the Schrodinger equation can be solved in closed form, but it then shifts in the second half to atoms and molecules, for which no closed solutions exist. The underlying…

  11. Exploratory reconstructability analysis of accident TBI data

    NASA Astrophysics Data System (ADS)

    Zwick, Martin; Carney, Nancy; Nettleton, Rosemary

    2018-02-01

    This paper describes the use of reconstructability analysis to perform a secondary study of traumatic brain injury data from automobile accidents. Neutral searches were done and their results displayed with a hypergraph. Directed searches, using both variable-based and state-based models, were applied to predict performance on two cognitive tests and one neurological test. Very simple state-based models gave large uncertainty reductions for all three DVs and sizeable improvements in percent correct for the two cognitive test DVs which were equally sampled. Conditional probability distributions for these models are easily visualized with simple decision trees. Confounding variables and counter-intuitive findings are also reported.

  12. Entropy of level-cut random Gaussian structures at different volume fractions

    NASA Astrophysics Data System (ADS)

    Marčelja, Stjepan

    2017-10-01

    Cutting random Gaussian fields at a given level can create a variety of morphologically different two- or several-phase structures that have often been used to describe physical systems. The entropy of such structures depends on the covariance function of the generating Gaussian random field, which in turn depends on its spectral density. But the entropy of level-cut structures also depends on the volume fractions of the different phases, which are determined by the selection of the cutting level. This dependence has been neglected in earlier work. We evaluate the entropy of several lattice models to show that, even in the case of strongly coupled systems, the dependence of the entropy of level-cut structures on the molar fractions of the constituents scales with the simple formula for an ideal noninteracting system. In the last section, we discuss the application of the results to binary or ternary fluids and microemulsions.
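
    The ideal noninteracting formula invoked here is presumably the ideal mixing entropy per site for a two-phase structure at volume fraction φ; a standard form, quoted as an assumption about the intended reference:

        S_{\mathrm{mix}}(\phi) = -k_B \left[ \phi \ln \phi + (1 - \phi) \ln(1 - \phi) \right]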

  13. Selective Transient Cooling by Impulse Perturbations in a Simple Toy Model

    NASA Astrophysics Data System (ADS)

    Fabrizio, Michele

    2018-06-01

    We show in a simple exactly solvable toy model that a properly designed impulse perturbation can transiently cool down low-energy degrees of freedom at the expense of high-energy ones that heat up. The model consists of two infinite-range quantum Ising models: one, the high-energy sector, with a transverse field much bigger than the other, the low-energy sector. The finite-duration perturbation is a spin exchange that couples the two Ising models with an oscillating coupling strength. We find a cooling of the low-energy sector that is optimized by the oscillation frequency in resonance with the spin exchange excitation. After the perturbation is turned off, the Ising model with a low transverse field can even develop a spontaneous symmetry breaking despite being initially above the critical temperature.

  14. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  15. A mixture-energy-consistent six-equation two-phase numerical model for fluids with interfaces, cavitation and evaporation waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw

    2014-02-15

    We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxation terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.

  16. Locating the quantum critical point of the Bose-Hubbard model through singularities of simple observables.

    PubMed

    Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub

    2016-12-02

    We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.

  17. The use of simple inflow- and storage-based heuristics equations to represent reservoir behavior in California for investigating human impacts on the water cycle

    NASA Astrophysics Data System (ADS)

    Solander, K.; David, C. H.; Reager, J. T.; Famiglietti, J. S.

    2013-12-01

    The ability to reasonably replicate reservoir behavior in terms of storage and outflow is important for studying potential human impacts on the terrestrial water cycle. Developing a simple method for this purpose could facilitate subsequent integration into a land surface or global climate model. This study attempts to simulate monthly reservoir outflow and storage using a simple, temporally varying set of heuristic equations with input consisting of in situ records of reservoir inflow and storage. Equations of increasing complexity, in terms of the number of parameters involved, were tested. Only two parameters were employed in the final equations used to predict outflow and storage, in an attempt to best mimic seasonal reservoir behavior while still preserving model parsimony. California reservoirs were selected for model development due to the high level of data availability and the intensity of water resource management in this region relative to other areas. Calibration was achieved using observations from eight major reservoirs, representing approximately 41% of the 107 largest reservoirs in the state. Parameter optimization was accomplished using the minimum RMSE between observed and modeled storage and outflow as the main objective function. Initial results give a multi-reservoir average correlation coefficient between observed and modeled storage of 0.78 (0.75 for outflow). These results, combined with the simplicity of the equations being used, show promise for integration into a land surface or global climate model. This would be invaluable for evaluations of reservoir management impacts on the flow regime and associated ecosystems as well as on the climate at both regional and global scales.
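
    One plausible two-parameter form of such a heuristic (an illustrative guess; the paper's exact equations may differ): release a fraction of current storage plus a fraction of inflow, with a capacity cap.

        def step(storage, inflow, a=0.1, b=0.5, capacity=4.5e9):
            """One monthly update; a and b are the two calibration parameters
            (fit by minimizing RMSE against observations), volumes in m^3."""
            outflow = a * storage + b * inflow
            storage = min(capacity, storage + inflow - outflow)
            return storage, outflow

        S = 2.0e9
        for month, Q_in in enumerate([8e8, 6e8, 3e8, 1e8]):   # seasonal inflows
            S, Q_out = step(S, Q_in)
            print(f"month {month}: storage {S:.2e} m^3, outflow {Q_out:.2e} m^3")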

  18. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

    System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.

  19. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
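
    The AdEx equations are compact enough to state; a minimal forward-Euler sketch with typical published parameter values for a regular-spiking cell (quoted as assumptions, not the paper's fitted numbers):

        import numpy as np

        # C dV/dt = -gL (V-EL) + gL DT exp((V-VT)/DT) - w + I
        # tauw dw/dt = a (V-EL) - w;  on a spike: V -> Vr, w -> w + b
        C, gL, EL = 281e-12, 30e-9, -70.6e-3      # F, S, V
        VT, DT = -50.4e-3, 2e-3                   # threshold and slope factor, V
        tauw, a, b, Vr = 144e-3, 4e-9, 0.0805e-9, -70.6e-3
        Vpeak, dt, I = 20e-3, 0.05e-3, 0.8e-9     # spike cutoff, step, input

        V, w, spikes = EL, 0.0, []
        for i in range(int(0.5 / dt)):            # 500 ms of simulated time
            dV = (-gL*(V - EL) + gL*DT*np.exp((V - VT)/DT) - w + I) / C
            V, w = V + dt*dV, w + dt*(a*(V - EL) - w)/tauw
            if V >= Vpeak:                        # spike: reset and adapt
                V, w = Vr, w + b
                spikes.append(i * dt)
        print(f"{len(spikes)} spikes in 500 ms")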

  20. Including resonances in the multiperipheral model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsky, S.S.; Snider, D.R.; Thomas, G.H.

    1973-10-01

    A simple generalization of the multiperipheral model (MPM) and the Mueller-Regge model (MRM) is given which has improved phenomenological capabilities by explicitly incorporating resonance phenomena, and is still simple enough to be an important theoretical laboratory. The model is discussed both with and without charge. In addition, the one-channel, two-channel, three-channel and N-channel cases are explicitly treated. Particular attention is paid to the constraints of charge conservation and positivity in the MRM. The recently proven equivalence between the MRM and MPM is extended to this model, and is used extensively.

  1. Assessing exposure to transformation products of soil-applied organic contaminants in surface water: comparison of model predictions and field data.

    PubMed

    Kern, Susanne; Singer, Heinz; Hollender, Juliane; Schwarzenbach, René P; Fenner, Kathrin

    2011-04-01

    Transformation products (TPs) of chemicals released to soil, for example, pesticides, are regularly detected in surface and groundwater with some TPs even dominating observed pesticide levels. Given the large number of TPs potentially formed in the environment, straightforward prioritization methods based on available data and simple, evaluative models are required to identify TPs with a high aquatic exposure potential. While different such methods exist, none of them has so far been systematically evaluated against field data. Using a dynamic multimedia, multispecies model for TP prioritization, we compared the predicted relative surface water exposure potential of pesticides and their TPs with experimental data for 16 pesticides and 46 TPs measured in a small river draining a Swiss agricultural catchment. Twenty TPs were determined quantitatively using solid-phase extraction liquid chromatography mass spectrometry (SPE-LC-MS/MS), whereas the remaining 26 TPs could only be detected qualitatively because of the lack of analytical reference standards. Accordingly, the two sets of TPs were used for quantitative and qualitative model evaluation, respectively. Quantitative comparison of predicted with measured surface water exposure ratios for 20 pairs of TPs and parent pesticides indicated agreement within a factor of 10, except for chloridazon-desphenyl and chloridazon-methyl-desphenyl. The latter two TPs were found to be present in elevated concentrations during baseflow conditions and in groundwater samples across Switzerland, pointing toward high concentrations in exfiltrating groundwater. A simple leaching relationship was shown to qualitatively agree with the observed baseflow concentrations and to thus be useful in identifying TPs for which the simple prioritization model might underestimate actual surface water concentrations. Application of the model to the 26 qualitatively analyzed TPs showed that most of those TPs categorized as exhibiting a high aquatic exposure potential could be confirmed to be present in the majority of water samples investigated. On the basis of these results, we propose a generally applicable, model-based approach to identify those TPs of soil-applied organic contaminants that exhibit a high aquatic exposure potential to prioritize them for higher-tier, experimental investigations.

  2. Continuous liquid level detection based on two parallel plastic optical fibers in a helical structure

    NASA Astrophysics Data System (ADS)

    Zhang, Yingzi; Hou, Yulong; Zhang, Yanjun; Hu, Yanjun; Zhang, Liang; Gao, Xiaolong; Zhang, Huixin; Liu, Wenyi

    2018-02-01

    A simple and low-cost continuous liquid-level sensor based on two parallel plastic optical fibers (POFs) in a helical structure is presented. The change in the liquid level is determined by measuring the side-coupling power in the passive fiber. The side-coupling ratio is increased by just filling the gap between the two POFs with ultraviolet-curable optical cement, making the proposed sensor competitive. The experimental results show that the side-coupling power declines as the liquid level rises. The sensitivity and the measurement range are flexible and affected by the geometric parameters of the helical structure. A higher sensitivity of 0.0208 μW/mm is acquired for a smaller curvature radius of 5 mm, and the measurement range can be expanded to 120 mm by enlarging the screw pitch to 40 mm. In addition, the reversibility and temperature dependence are studied. The proposed sensor is a cost-effective solution offering the advantages of a simple fabrication process, good reversibility, and compensable temperature dependence.

  3. Simple models for rope substructure mechanics: application to electro-mechanical lifts

    NASA Astrophysics Data System (ADS)

    Herrera, I.; Kaczmarczyk, S.

    2016-05-01

    Mechanical systems modelled as rigid mass elements connected by tensioned slender structural members such as ropes and cables represent quite common substructures used in lift engineering and hoisting applications. Special interest is devoted by engineers and researchers to the vibratory response of such systems for optimum performance and durability. This paper presents simplified models that can be employed to determine the natural frequencies of systems having substructures of two rigid masses constrained by tensioned rope/cable elements. The exact solution for the free undamped longitudinal displacement response is discussed in the context of simple two-degree-of-freedom models. The results are compared, and the influence of characteristic parameters such as the ratio of the average mass of the two rigid masses to the rope mass, and the deviation ratio of the two rigid masses with respect to the average mass, is analyzed. This analysis gives criteria for the application of such simplified models in complex elevator and hoisting system configurations.
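
    A minimal sketch of extracting the natural frequencies of such a two-degree-of-freedom substructure (masses coupled by rope sections treated as axial springs; all values are illustrative assumptions):

        import numpy as np

        m1, m2 = 800.0, 600.0          # kg, the two rigid masses
        k1, k2 = 2.0e6, 1.5e6          # N/m, rope-section axial stiffnesses

        M = np.diag([m1, m2])
        K = np.array([[k1 + k2, -k2],  # mass 1 suspended by k1, coupled by k2
                      [-k2,      k2]]) # mass 2 hangs on k2
        omega2 = np.linalg.eigvals(np.linalg.solve(M, K)).real  # omega^2 values
        freqs_hz = np.sqrt(np.sort(omega2)) / (2 * np.pi)
        print("natural frequencies [Hz]:", freqs_hz.round(2))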

  4. Division of Attention Relative to Response Between Attended and Unattended Stimuli.

    ERIC Educational Resources Information Center

    Kantowitz, Barry H.

    Research was conducted to investigate two general classes of human attention models, early-selection models which claim that attentional selecting precedes memory and meaning extraction mechanisms, and late-selection models which posit the reverse. This research involved two components: (1) the development of simple, efficient, computer-oriented…

  5. Languages, communication potential and generalized trust in Sub-Saharan Africa: evidence based on the Afrobarometer Survey.

    PubMed

    Buzasi, Katalin

    2015-01-01

    The goal of this study is to investigate whether speaking languages other than one's home language in Sub-Saharan Africa promotes generalized trust. Based on various psychological and economic theories, a simple model is provided to illustrate how languages might shape trust through various channels. Relying on data from the Afrobarometer Project, which provides information on home and additional languages, the Index of Communication Potential (ICP) is introduced to capture the linguistic situation in the 20 sample countries. The ICP, which can be computed at any desired level of aggregation, refers to the probability that an individual can communicate with a randomly selected person in the society based on common languages. However, the estimated two-level hierarchical models show that individual-level communication potential does not seem to impact trust formation, whereas living in an area with higher average communication potential increases the chance of exhibiting higher trust toward unknown people. Copyright © 2014 Elsevier Inc. All rights reserved.
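
    A minimal sketch of the Index of Communication Potential on an invented toy sample (the Afrobarometer computation and its weighting are more involved):

        # each person's language repertoire (toy data, not Afrobarometer)
        people = [{"en"}, {"en", "sw"}, {"sw"}, {"fr"}, {"fr", "sw"}]

        def icp(i):
            """Probability of sharing at least one language with a
            randomly chosen other person."""
            others = [p for j, p in enumerate(people) if j != i]
            return sum(bool(people[i] & p) for p in others) / len(others)

        for i in range(len(people)):
            print(f"person {i}: ICP = {icp(i):.2f}")
        print("area-level ICP:", sum(icp(i) for i in range(len(people))) / len(people))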

  6. Turkish Students' Conceptions about the Simple Electric Circuits

    ERIC Educational Resources Information Center

    Cepni, Salih; Keles, Esra

    2006-01-01

    In this study, the Turkish students' understanding level of electric circuits consisting of two bulbs and one battery was investigated by using open-ended questions. Two-hundred fifty students, whose ages range from 11 to 22, were chosen from five different groups at primary, secondary and university levels in Trabzon in Turkey. In analyzing…

  7. A simple hyperbolic model for communication in parallel processing environments

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large- and small-message limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
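
    A sketch of one two-parameter form with the limiting behavior the model requires, a small-message service time and an asymptotic transfer rate, so that effective throughput is a hyperbola in the message size (an illustrative stand-in, not necessarily the paper's exact expression):

        def service_time(m, t0=50e-6, r_inf=10e6):
            """t0: small-message service time (s); r_inf: asymptotic rate (B/s).
            Throughput m/service_time(m) = r_inf*m/(m + t0*r_inf), a hyperbola."""
            return t0 + m / r_inf

        for m in (64, 4096, 1 << 20):
            s = service_time(m)
            print(f"{m:>8} B: {s*1e3:8.3f} ms, {m/s/1e6:6.2f} MB/s effective")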

  8. The coupling of cerebral blood flow and oxygen metabolism with brain activation is similar for simple and complex stimuli in human primary visual cortex.

    PubMed

    Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B

    2015-01-01

    Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
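
    For context, the Davis model mentioned above relates the BOLD signal change to the CBF and CMRO2 ratios; a minimal sketch of extracting the coupling ratio n, using the conventional literature form and constants as assumptions (all numbers illustrative, not the study's data):

        # dBOLD/BOLD = M * (1 - f**(alpha - beta) * r**beta),
        # with f = CBF/CBF0 and r = CMRO2/CMRO2_0 (standard Davis-model form)
        M, alpha, beta = 0.08, 0.38, 1.5

        def cmro2_ratio(bold_frac, f):
            return ((1.0 - bold_frac / M) / f**(alpha - beta)) ** (1.0 / beta)

        f = 1.40                           # 40% CBF increase (illustrative)
        r = cmro2_ratio(0.015, f)          # 1.5% BOLD change (illustrative)
        print(f"CMRO2 ratio = {r:.3f}, n = {(f - 1)/(r - 1):.2f}")   # n ~ 3.3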

  9. Sensitivity studies and a simple ozone perturbation experiment with a truncated two-dimensional model of the stratosphere

    NASA Technical Reports Server (NTRS)

    Stordal, Frode; Garcia, Rolando R.

    1987-01-01

    The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
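
    A minimal sketch of the projection such a truncated model rests on: expand a latitudinal field in Legendre polynomials of mu = sin(latitude) and retain only the lowest modes (the toy field is our assumption):

        import numpy as np

        mu = np.linspace(-1.0, 1.0, 2001)       # mu = sin(latitude)
        dmu = mu[1] - mu[0]
        field = 300.0 - 40.0 * mu**2            # toy tracer mixing ratio

        P0 = np.ones_like(mu)
        P2 = 0.5 * (3.0 * mu**2 - 1.0)

        def integral(y):                        # trapezoidal rule on the grid
            return dmu * (y.sum() - 0.5 * (y[0] + y[-1]))

        # c_n = (2n + 1)/2 * integral of field * P_n over mu in [-1, 1]
        c0 = 0.5 * integral(field * P0)         # global mean       (~286.7)
        c2 = 2.5 * integral(field * P2)         # pole-equator term (~-26.7)
        print(f"c0 = {c0:.2f}, c2 = {c2:.2f}")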

  10. Developing expressed sequence tag libraries and the discovery of simple sequence repeat markers for two species of raspberry (Rubus L.)

    USDA-ARS?s Scientific Manuscript database

    Background: Due to a relatively high level of codominant inheritance and transferability within and among taxonomic groups, simple sequence repeat (SSR) markers are important elements in comparative mapping and delineation of genomic regions associated with traits of economic importance. Expressed S...

  11. The Thin Border between Light and Shadow

    ERIC Educational Resources Information Center

    Guglielmino, M.; Gratton, L. M.; Oss, S.

    2010-01-01

    We propose a simple, direct estimate of the Sun's diameter based on penumbra observation and measurement in a two-level approach, the first for middle-school pupils and making use of simple geometrical arguments, the second more appropriate to high-school students and based on a slightly more sophisticated approach. (Contains 5 figures.)
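
    Our reconstruction of the geometric argument behind the estimate (the standard one; the paper's exact treatment may differ): an occluding edge at distance L in front of the screen spreads the shadow border over a penumbra of width w that subtends the same angle as the Sun, so

        \theta \approx \frac{w}{L}, \qquad D_\odot \approx \theta \, d_\odot = \frac{w}{L} \, d_\odot

    With, say, w ≈ 9 mm at L = 1 m, θ ≈ 0.009 rad, and with d_⊙ ≈ 1.5 × 10^11 m this gives D_⊙ ≈ 1.4 × 10^9 m, close to the accepted value.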

  12. Groundwater modelling in decision support: reflections on a unified conceptual framework

    NASA Astrophysics Data System (ADS)

    Doherty, John; Simmons, Craig T.

    2013-11-01

    Groundwater models are commonly used as basis for environmental decision-making. There has been discussion and debate in recent times regarding the issue of model simplicity and complexity. This paper contributes to this ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, there are many competing, and often unquantifiable, considerations which must be taken into account as these metrics are applied. A unified conceptual framework is introduced and described which is intended to underpin groundwater modelling in decision support with a direct focus on matters regarding model simplicity and complexity.

  13. A simple parameter can switch between different weak-noise-induced phenomena in a simple neuron model

    NASA Astrophysics Data System (ADS)

    Yamakou, Marius E.; Jost, Jürgen

    2017-10-01

    In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.
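
    A minimal sketch of the setting described (parameters are ours, chosen for illustration, not the paper's): a FitzHugh-Nagumo neuron with a unique, stable fixed point, perturbed by synaptic noise in the recovery variable and integrated with Euler-Maruyama; with the fixed point this close to the fold, weak noise should produce repeated spikes (SISR-like behavior).

        import numpy as np

        eps, a, b, sigma = 0.01, 0.5, 1.0, 0.05   # eps: time-scale separation
        dt, steps = 1e-3, 400_000
        rng = np.random.default_rng(1)

        v, w, spikes = -1.14, -0.64, 0            # start near the fixed point
        for _ in range(steps):
            v_new = v + dt * (v - v**3 / 3.0 - w) / eps
            w += dt * (v + a - b * w) + sigma * np.sqrt(dt) * rng.standard_normal()
            if v < 1.0 <= v_new:                  # upward crossing = spike
                spikes += 1
            v = v_new
        print("noise-induced spikes:", spikes)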

  14. Effect of Ground Patterns Size on FM-Band Cross-Talks between Two Parallel Signal Traces of Printed Circuit Boards for Vehicles

    NASA Astrophysics Data System (ADS)

    Iida, Michihira; Maeno, Tsuyoshi; Fujiwara, Osamu

    It is well known that electromagnetic disturbances in vehicle-mounted radios are mainly caused by conducted noise currents flowing through wiring harnesses from vehicle-mounted printed circuit boards (PCBs) with common ground patterns containing slits. To suppress the outflow of noise currents from PCBs of this kind, we previously measured the noise currents flowing out of simple two-layer PCBs having two parallel signal traces and different ground patterns with/without slits, and found that making slits with open ends on the ground patterns, parallel to the traces, can reduce the conducted noise currents. In the present study, using FDTD simulation, we investigated the effect of ground pattern size on the FM-band cross-talk noise levels between two parallel signal traces, using four types of simple PCB models having different ground patterns formed in different numbers but containing slits of the same planar dimensions parallel to the traces, in addition to two types of PCB models with different ground patterns divided into two parts parallel to the traces. As a result, we found that the cross-talk noise currents for the above six types of PCBs decrease by 6.9-8.5 dB compared to a PCB with a plain ground and no slits. From this, we conclude that the contributing factor in the above-mentioned cross-talk reduction is the reduction of mutual inductance between the two parallel traces. In addition, it is interesting to note that the noise current outflow from PCBs can be further suppressed when the size of the return ground of each signal trace is small.

  15. The Yes-No Question Answering System and Statement Verification.

    ERIC Educational Resources Information Center

    Akiyama, M. Michael; And Others

    1979-01-01

    Two experiments investigated the relationship of verification to the answering of yes-no questions. Subjects verified simple statements or answered simple questions. Various proposals concerning the relative difficulty of answering questions and verifying statements were considered, and a model was proposed. (SW)

  16. Modeling of the merging of two colliding field reversed configuration plasmoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guanqiong; Wang, Xiaoguang; Li, Lulu

    2016-06-15

    The field reversed configuration (FRC) is one of the candidate plasma targets for magneto-inertial fusion, and a high temperature FRC can be formed by using collision-merging technology. Although the merging process and mechanism of the FRC are quite complicated, it is reasonable to build a simple model to investigate the macroscopic equilibrium parameters, including the density, the temperature and the separatrix volume, which may play an important role in the collision-merging process of the FRC. It is quite interesting that the estimates of the related results based on our simple model are in agreement with the simulation results of a two-dimensional magneto-hydrodynamic code (MFP-2D), which has been developed by our group over the last couple of years, while these results can qualitatively fit the results of C-2 experiments by the Tri Alpha Energy company. On the other hand, the simple model can be used to investigate how to increase the density of the merged FRC. It is found that the amplification of the density depends on the poloidal flux-increase factor and that the temperature increases with the translation speed of the two plasmoids.

  17. Effect of different levels of rapidly degradable carbohydrates calculated by a simple rumen model on performance of lactating dairy cows.

    PubMed

    Doorenbos, J; Martín-Tereso, J; Dijkstra, J; van Laar, H

    2017-07-01

    Aggregating rumen degradation characteristics of different carbohydrate components into the term modeled rapidly degradable carbohydrates (mRDC) can simplify diet formulation by accounting for differences in rate and extent of carbohydrate degradation within and between feedstuffs. This study sought to evaluate responses of lactating dairy cows to diets formulated with increasing levels of mRDC, keeping the supply of other nutrients as constant as possible. The mRDC content of feedstuffs was calculated based on a simple rumen model including soluble, washable, and nonwashable but potentially degradable fractions, as well as the fractional degradation and passage rates, of sugar, starch, neutral detergent fiber, and other carbohydrates. The mRDC term effectively represents the total amount of carbohydrates degraded in the rumen within 2 h after ingestion. Fifty-two lactating Holstein cows (of which 4 were rumen fistulated) were assigned to 4 treatments in a 4 × 4 Latin square design. Treatments were fed as a total mixed ration consisting of 25.4% corn silage, 23.1% grass silage, 11.6% grass hay, and 39.9% concentrate on a dry matter basis. Differences in mRDC were created by exchanging nonforage neutral detergent fiber-rich ingredients (mainly sugar beet pulp) with starch-rich ingredients (mainly wheat) and by exchanging corn (slowly degradable starch) with wheat (rapidly degradable starch) in the concentrate, resulting in 4 treatments that varied in dietary mRDC level of 167, 181, 194, or 208 g/kg of dry matter. Level of mRDC did not affect dry matter intake. Fat- and protein-corrected milk production and milk fat and lactose yield were greatest at 181 mRDC and decreased with further increases in mRDC. Milk protein yield and concentration increased with increasing mRDC level. Mean rumen pH and diurnal variation in ruminal pH did not differ between treatments. Total daily meal time and number of visits per meal were smaller at 181 and 194 mRDC. Despite milk production responses, increasing dietary mRDC levels, while maintaining net energy and intestinal digestible protein as well as other nutrients at similar levels, did not influence rumen pH parameter estimates and had minor effects on feeding behavior. These results indicate that aggregating rapidly degradable carbohydrate content into one term may be a simple way to further improve predictability of production responses in practical diet formulation for lactating dairy cows. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Levels of detail analysis of microwave scattering from human head models for brain stroke detection

    PubMed Central

    2017-01-01

    In this paper, we present a microwave scattering analysis of multiple human head models. This study incorporates different levels of detail in the human head models and examines their effect on the microwave scattering phenomenon. Two levels of detail are taken into account: (i) a simplified ellipse-shaped head model and (ii) an anatomically realistic head model, both implemented in 2-D geometry. In addition, the heterogeneous and frequency-dispersive behavior of the brain tissues has been incorporated in our head models. We find that the microwave scattering phenomenon changes significantly once the complexity of the head model is increased by incorporating more details from a magnetic resonance imaging database. We also find that the microwave scattering results of the two head models (geometrically simple and anatomically realistic) match when the measurements are made in the structurally simplified regions. However, the results diverge considerably in the complex areas of the brain, due to the arbitrarily shaped interfaces of tissue layers in the anatomically realistic head model. After incorporating the various levels of detail, the subject microwave scattering problem was solved, and the transmitted and backscattered signals were computed, using the finite element method. A mesh convergence analysis was also performed to achieve accurate results with a minimum number of mesh elements and fewer degrees of freedom, reducing the computational time. The results were promising, and the E-field values converged for both the simple and the complex geometrical models. However, the E-field difference between the two head models at the same reference point varied greatly in magnitude: at a complex location, a difference of 0.04236 V/m was measured, compared with 0.00197 V/m at a simple location. This study also provides a comparison between direct and iterative solvers for the subject microwave scattering problem with respect to computational time and memory requirements. The study suggests that microwave imaging may effectively be utilized for the detection, localization, and differentiation of different types of brain stroke. The simulation results verified that microwave imaging can be efficiently exploited to study the significant contrast between the electric field values of normal and abnormal brain tissues for the investigation of brain anomalies. Finally, a specific absorption rate analysis was carried out to compare the effects of microwave signals on the different head models using a factor of safety for brain tissues. After a careful study of the various inversion methods in practice for microwave head imaging, it is suggested that the contrast source inversion method may be more suitable and computationally efficient for such problems. PMID:29177115

  19. Transition from the adiabatic to the sudden limit in core-level photoemission: A model study of a localized system

    NASA Astrophysics Data System (ADS)

    Lee, J. D.; Gunnarsson, O.; Hedin, L.

    1999-09-01

    We consider core-electron photoemission in a localized system, where there is a charge-transfer excitation. Examples are transition metal and rare earth compounds, chemisorption systems, and high-Tc compounds. The system is modeled by three electron levels: one core level and two outer levels. In the initial state the core level and one outer level are filled (a spinless two-electron problem). This model system is embedded in a solid-state environment, and the implications of our model-system results for solid-state photoemission are discussed. When the core hole is created, the more localized outer level (d) is pulled below the less localized level (L). The spectrum has a leading peak corresponding to a charge transfer between L and d ("shakedown"), and a satellite corresponding to no charge transfer. The model has a Coulomb interaction between these levels and the continuum states into which the core electron is emitted. The model is simple enough to allow an exact numerical solution, and with a separable potential an analytic solution. Analytic results are also obtained in lowest-order perturbation theory and in the high-energy limit of the semiclassical approximation. We calculate the ratio r(ω) between the weights of the satellite and the main peak as a function of the photon energy ω. The transition from the adiabatic to the sudden limit is found to take place for quite small kinetic energies of the photoelectron. For such small energies, the variation of the dipole matrix elements is substantial and is described by the energy scale Ẽ_d. Without the coupling to the photoelectron, the corresponding ratio r₀(ω) shows a smooth turn-on of the satellite intensity, due to the turn-on of the dipole matrix element. The characteristic energy scales are Ẽ_d and the satellite excitation energy δE. When the interaction potential with the continuum states is introduced, an energy scale Ẽ_s = 1/(2R̃_s²) enters, where R̃_s is a length scale of the interaction (scattering) potential. At threshold there is typically a (weak) constructive interference between intrinsic and extrinsic contributions, and the ratio r(ω)/r₀(ω) is larger than its limiting value for large ω. The interference becomes small or weakly destructive for photoelectron energies of the order of Ẽ_s. For larger photoelectron energies r(ω)/r₀(ω) therefore typically has a weak undershoot. If this undershoot is neglected, r(ω)/r₀(ω) reaches its limiting value on the energy scale Ẽ_s for the parameter range considered here. In a "shake-up" scenario, where the two outer levels do not cross as the core hole is created, we instead find that r(ω)/r₀(ω) is typically reduced for small ω by interference effects, as in the case of plasmon excitation. Even for this shake-up case, however, the results are very different from those for a simple metal, where plasmons dominate the picture. In particular, the adiabatic-to-sudden transition takes place at much lower energies in the case of a localized excitation. The reasons for the differences are briefly discussed.

  20. A consistent hierarchy of generalized kinetic equation approximations to the master equation applied to surface catalysis.

    PubMed

    Herschlag, Gregory J; Mitran, Sorin; Lin, Guang

    2015-06-21

    We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical approximations of error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.

  1. Bistatic scattering from submerged unexploded ordnance lying on a sediment.

    PubMed

    Bucaro, J A; Simpson, H; Kraus, L; Dragonette, L R; Yoder, T; Houston, B H

    2009-11-01

    The broadband bistatic target strengths (TSs) of two submerged unexploded ordnance (UXO) targets have been measured in the NRL sediment pool facility. The targets, a 5 in. rocket and a 155 mm projectile, were among the targets whose monostatic TSs were measured and reported previously by the authors. Bistatic TS measurements were made for 0 degree (target front) and 90 degree (target side) incident source directions, and include both backscattered and forward scattered echo angles over a complete 360 degrees with the targets placed proud of the sediment surface. For the two source angles used, each target exhibits two strong highlights: a backscattered specular-like echo and a forward scattered response. The TS levels of the former are shown to agree reasonably well with predictions based on scattering from rigid disks and cylinders, while the levels of the latter agree with predictions from radar cross-section models based on simple geometric optics, appropriately modified. The bistatic TS levels observed for the proud case provide comparable or higher levels of broadband TS relative to free-field monostatic measurements. It is concluded that access to bistatic echo information in operations aimed at detecting submerged UXO targets could provide an important capability.

  2. Coupled Particle Transport and Pattern Formation in a Nonlinear Leaky-Box Model

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.; El-Nemr, K. W.; Baird, J. K.

    2009-01-01

    Effects of particle-particle coupling on particle characteristics in nonlinear leaky-box type descriptions of the acceleration and transport of energetic particles in space plasmas are examined in the framework of a simple two-particle model based on the Fokker-Planck equation in momentum space. In this model, the two particles are assumed coupled via a common nonlinear source term. In analogy with a prototypical mathematical system of diffusion-driven instability, this work demonstrates that steady-state patterns with a strong dependence on the magnetic turbulence, but a rather weak one on the coupled particles' attributes, can emerge in solutions of a nonlinearly coupled leaky-box model. The insight gained from this simple model may be of wider use and significance to nonlinearly coupled leaky-box type descriptions in general.
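    As a schematic illustration of coupling via a common nonlinear source term (the paper itself works with the Fokker-Planck equation in momentum space, not the zero-dimensional toy below), one can integrate two leaky-box ODEs that share a self-limiting source. The form of Q and all constants here are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic two-particle leaky box: both species are fed by a common
# nonlinear source Q(N1, N2) and leak on their own escape timescales.
tau1, tau2 = 1.0, 3.0      # escape times (arbitrary units)
q0, alpha = 1.0, 0.5       # source amplitude and coupling strength

def rhs(t, N):
    N1, N2 = N
    Q = q0 / (1.0 + alpha * (N1 + N2))   # shared, self-limiting source
    return [Q - N1 / tau1, Q - N2 / tau2]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0])
print("steady-state estimate:", sol.y[:, -1])
```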

  3. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.

  4. A new dual-collimation batch reactor for determination of ultraviolet inactivation rate constants for microorganisms in aqueous suspensions

    PubMed Central

    Martin, Stephen B.; Schauer, Elizabeth S.; Blum, David H.; Kremer, Paul A.; Bahnfleth, William P.; Freihaut, James D.

    2017-01-01

    We developed, characterized, and tested a new dual-collimation aqueous UV reactor to improve the accuracy and consistency of aqueous k-value determinations. This new system is unique because it collimates UV energy from a single lamp in two opposite directions. The design provides two distinct advantages over traditional single-collimation systems: 1) real-time UV dose (fluence) determination; and 2) simple actinometric determination of a reactor factor that relates measured irradiance levels to actual irradiance levels experienced by the microbial suspension. This reactor factor replaces three of the four typical correction factors required for single-collimation reactors. Using this dual-collimation reactor, Bacillus subtilis spores demonstrated inactivation following the classic multi-hit model with k = 0.1471 cm2/mJ (with 95% confidence bounds of 0.1426 to 0.1516). PMID:27498232
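    For readers who want to reproduce a multi-hit fit of survival data, the sketch below fits S(D) = 1 - (1 - e^(-kD))^n to dose-survival pairs with SciPy; the data points, starting values, and bounds are invented and are not the measurements behind the k-value reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_hit(dose, k, n):
    """Classic multi-hit survival model: S = 1 - (1 - exp(-k*D))**n."""
    return 1.0 - (1.0 - np.exp(-k * dose)) ** n

# Invented (dose mJ/cm^2, surviving fraction) pairs, for illustration only
dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])
surv = np.array([1.0, 0.85, 0.55, 0.15, 0.010, 0.0008])

(k_fit, n_fit), _ = curve_fit(multi_hit, dose, surv, p0=[0.1, 2.0],
                              bounds=([1e-6, 0.5], [5.0, 20.0]))
print(f"k = {k_fit:.4f} cm^2/mJ, n = {n_fit:.2f}")
```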

  5. Comparison of different approaches of modelling in a masonry building

    NASA Astrophysics Data System (ADS)

    Saba, M.; Meloni, D.

    2017-12-01

    The present work has the objective of modelling a simple masonry building using two different modelling methods, in order to assess their validity in terms of the evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri by S.T.A. Data S.r.l. and Sismicad12 by Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements (FME) method, which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results at a greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.

  6. Methodology for estimating human perception to tremors in high-rise buildings

    NASA Astrophysics Data System (ADS)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
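    The proposed intensity parameter is simply the response spectrum averaged over the 0.1-2.0 s period range, which is straightforward to compute from any response-spectrum output. A minimal sketch, with an invented spectral shape standing in for real structural-analysis results:

```python
import numpy as np

def average_spectrum_intensity(periods, sa, t_min=0.1, t_max=2.0):
    """Average spectral intensity over [t_min, t_max] seconds.

    periods/sa: sampled response spectrum (s, g). On a uniform period
    grid the integral average reduces to a plain mean of the samples.
    """
    mask = (periods >= t_min) & (periods <= t_max)
    return sa[mask].mean()

periods = np.linspace(0.05, 4.0, 400)
sa = 0.3 * np.exp(-periods)   # invented spectral shape, for illustration
print(f"avg SI(0.1-2.0 s) = {average_spectrum_intensity(periods, sa):.3f} g")
```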

  7. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishop, Joseph E.; Brown, Judith Alice

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) A transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.

  9. Coupled two-dimensional edge plasma and neutral gas modeling of tokamak scrape-off-layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maingi, Rajesh

    1992-08-01

    The objective of this study is to devise a detailed description of the tokamak scrape-off-layer (SOL), which includes the best available models of both the plasma and neutral species and the strong coupling between the two in many SOL regimes. A good estimate of both particle flux and heat flux profiles at the limiter/divertor target plates is desired. Peak heat flux is one of the limiting factors in determining the survival probability of plasma-facing-components at high power levels. Plate particle flux affects the neutral flux to the pump, which determines the particle exhaust rate. A technique which couples a two-dimensional (2-D) plasma and a 2-D neutral transport code has been developed (coupled code technique), but this procedure requires large amounts of computer time. Relevant physics has been added to an existing two-neutral-species model which takes the SOL plasma/neutral coupling into account in a simple manner (molecular physics model), and this model is compared with the coupled code technique mentioned above. The molecular physics model is benchmarked against experimental data from a divertor tokamak (DIII-D), and a similar model (single-species model) is benchmarked against data from a pump-limiter tokamak (Tore Supra). The models are then used to examine two key issues: free-streaming-limits (ion energy conduction and momentum flux) and the effects of the non-orthogonal geometry of magnetic flux surfaces and target plates on edge plasma parameter profiles.

  10. Simple Process-Based Simulators for Generating Spatial Patterns of Habitat Loss and Fragmentation: A Review and Introduction to the G-RaFFe Model

    PubMed Central

    Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
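    The roads-and-fields idea is compact enough to sketch. The toy generator below reproduces two of the three controlling factors named above (number of roads and maximum field size; field disconnection is omitted for brevity) on a small grid. It is a simplified illustration written for this summary, not the published G-RaFFe code:

```python
import random

def graffe_like(size=60, n_roads=4, max_field=80, habitat_target=0.5, seed=1):
    """Toy road-and-field landscape generator inspired by G-RaFFe.

    Grid cells: 1 = forest (habitat), 0 = converted (road or field).
    Roads are straight lines; fields grow from random cells adjacent to
    already-converted land until the habitat cover target is reached.
    """
    rng = random.Random(seed)
    grid = [[1] * size for _ in range(size)]
    for _ in range(n_roads):                       # carve straight roads
        if rng.random() < 0.5:
            r = rng.randrange(size)
            for c in range(size):
                grid[r][c] = 0
        else:
            c = rng.randrange(size)
            for r in range(size):
                grid[r][c] = 0
    def cleared_fraction():
        return sum(row.count(0) for row in grid) / size ** 2
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while cleared_fraction() < 1.0 - habitat_target:
        r, c = rng.randrange(size), rng.randrange(size)
        if grid[r][c] == 1 and any(
            0 <= r + dr < size and 0 <= c + dc < size and grid[r + dr][c + dc] == 0
            for dr, dc in nbrs
        ):
            frontier, grown = [(r, c)], 0          # grow one field region
            while frontier and grown < max_field:
                fr, fc = frontier.pop(rng.randrange(len(frontier)))
                if grid[fr][fc] == 1:
                    grid[fr][fc] = 0
                    grown += 1
                    frontier.extend(
                        (fr + dr, fc + dc) for dr, dc in nbrs
                        if 0 <= fr + dr < size and 0 <= fc + dc < size
                    )
    return grid

landscape = graffe_like()
habitat = sum(row.count(1) for row in landscape) / 60 ** 2
print(f"remaining habitat cover: {habitat:.2f}")
```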

  12. Two-level Schwarz methods for nonconforming finite elements and discontinuous coefficients

    NASA Technical Reports Server (NTRS)

    Sarkis, Marcus

    1993-01-01

    Two-level domain decomposition methods are developed for a simple nonconforming approximation of second order elliptic problems. A bound is established for the condition number of these iterative methods, which grows only logarithmically with the number of degrees of freedom in each subregion. This bound holds for two and three dimensions and is independent of jumps in the value of the coefficients.

  13. Dynamical systems, attractors, and neural circuits.

    PubMed

    Miller, Paul

    2016-01-01

    Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems, since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.

  14. BRST Exactness of Stress-Energy Tensors

    NASA Astrophysics Data System (ADS)

    Miyata, Hideo; Sugimoto, Hiroshi

    BRST commutators in the topological conformal field theories obtained by twisting N=2 theories are evaluated explicitly. By our systematic calculations of the multiple integrals which contain screening operators, the BRST exactness of the twisted stress-energy tensors is deduced for classical simple Lie algebras and general level k. We can see that the paths of integrations do not affect the result, and further, the N=2 coset theories are obtained by deleting two simple roots with Kac-label 1 from the extended Dynkin diagram; in other words, by not performing the integrations over the variables corresponding to the two simple roots of Kac-Moody algebras. It is also shown that a series of N=1 theories are generated in the same way by deleting one simple root with Kac-label 2.

  15. Linking a dermal permeation and an inhalation model to a simple pharmacokinetic model to study airborne exposure to di(n-butyl) phthalate.

    PubMed

    Lorber, Matthew; Weschler, Charles J; Morrison, Glenn; Bekö, Gabriel; Gong, Mengyan; Koch, Holger M; Salthammer, Tunga; Schripp, Tobias; Toftum, Jørn; Clausen, Geo

    2017-11-01

    Six males clad only in shorts were exposed to high levels of airborne di(n-butyl) phthalate (DnBP) and diethyl phthalate (DEP) in chamber experiments conducted in 2014. In two 6 h sessions, the subjects were exposed only dermally while breathing clean air from a hood, and both dermally and via inhalation when exposed without a hood. Full urine samples were taken before, during, and for 48 h after leaving the chamber and measured for key DnBP and DEP metabolites. The data clearly demonstrated high levels of DnBP and DEP metabolite excretions while in the chamber and during the first 24 h once leaving the chamber under both conditions. The data for DnBP were used in a modeling exercise linking dose models for inhalation and transdermal permeation with a simple pharmacokinetic model that predicted timing and mass of metabolite excretions. These models were developed and calibrated independent of these experiments. Tests included modeling of the "hood-on" (transdermal penetration only), "hood-off" (both inhalation and transdermal) scenarios, and a derived "inhalation-only" scenario. Results showed that the linked model tended to duplicate the pattern of excretion with regard to timing of peaks, decline of concentrations over time, and the ratio of DnBP metabolites. However, the transdermal model tended to overpredict penetration of DnBP such that predictions of metabolite excretions were between 1.1 and 4.5 times higher than the cumulative excretion of DnBP metabolites over the 54 h of the simulation. A similar overprediction was not seen for the "inhalation-only" simulations. Possible explanations and model refinements for these overpredictions are discussed. In a demonstration of the linked model designed to characterize general population exposures to typical airborne indoor concentrations of DnBP in the United States, it was estimated that up to one-quarter of total exposures could be due to inhalation and dermal uptake.

  16. Regular and Chaotic Quantum Dynamics of Two-Level Atoms in a Selfconsistent Radiation Field

    NASA Technical Reports Server (NTRS)

    Konkov, L. E.; Prants, S. V.

    1996-01-01

    Dynamics of two-level atoms interacting with their own radiation field in a single-mode high-quality resonator is considered. The dynamical system consists of two second-order differential equations, one for the atomic SU(2) dynamical-group parameter and another for the field strength. With the help of the maximal Lyapunov exponent for this set, we numerically investigate transitions from regularity to deterministic quantum chaos in such a simple model. Increasing the collective coupling constant b ≡ 8πN₀d²/(ℏω), we observed for initially unexcited atoms a usual sharp transition to chaos at b_c ≈ 1. If we take the dimensionless individual Rabi frequency a = Ω/(2ω) as a control parameter, then a sequence of order-to-chaos transitions has been observed, starting with the critical value a_c ≈ 0.25 at the same initial conditions.

  17. Using a crowdsourced approach for monitoring water level in a remote Kenyan catchment

    NASA Astrophysics Data System (ADS)

    Weeser, Björn; Jacobs, Suzanne; Rufino, Mariana; Breuer, Lutz

    2017-04-01

    Hydrological models and effective water management strategies only succeed if they are based on reliable data. Decreasing costs of technical equipment lower the barrier to creating comprehensive monitoring networks and allow measurements at high spatial and temporal resolution. However, these networks depend on specialised equipment, supervision, and maintenance, resulting in high running costs. This becomes particularly challenging for remote areas, and low-income countries often do not have the capacity to run such networks. Delegating simple measurements to citizens living close to relevant monitoring points may reduce costs and increase public awareness. Here we present our experiences with a crowdsourced approach for monitoring water levels in remote catchments in Kenya. We established a low-cost system consisting of thirteen simple water level gauges and a Raspberry Pi based SMS server for data handling. Volunteers read the water level and transmit their records using a simple text message. These messages are automatically processed, and real-time feedback on the data quality is given. During the first year, more than 1200 valid, high-quality records were collected. In summary, the simple techniques for collecting, transmitting, and processing data created an open platform that has the potential to reach volunteers without the need for special equipment. Even though the temporal resolution of measurements cannot be controlled and peak flows might be missed, these data can still be considered a valuable enhancement for developing management strategies or for hydrological modelling.
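    The reported data path (one text message per reading, parsed and sanity-checked by a small SMS server, with feedback to the sender) is easy to prototype. A minimal sketch, assuming a hypothetical message format of station ID followed by a water level in centimetres; the station list and plausibility bounds are invented:

```python
import re

# Hypothetical message format: "<station id> <water level in cm>",
# e.g. "K07 134". The station list and level bounds are invented.
STATIONS = {"K01", "K07", "K13"}
LEVEL_CM = (0, 500)
PATTERN = re.compile(r"^\s*([A-Z]\d{2})\s+(\d{1,3}(?:\.\d)?)\s*$")

def parse_sms(text: str):
    """Return (station, level_cm) or an error string for volunteer feedback."""
    m = PATTERN.match(text.upper())
    if not m:
        return None, "Format: STATION LEVEL, e.g. 'K07 134'"
    station, level = m.group(1), float(m.group(2))
    if station not in STATIONS:
        return None, f"Unknown station {station}"
    if not LEVEL_CM[0] <= level <= LEVEL_CM[1]:
        return None, f"Level {level} cm outside plausible range {LEVEL_CM}"
    return (station, level), "Thank you, reading recorded"

print(parse_sms("k07 134"))   # (('K07', 134.0), 'Thank you, reading recorded')
print(parse_sms("K99 700"))   # (None, 'Unknown station K99')
```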

  18. Development of the Concept of Energy Conservation using Simple Experiments for Grade 10 Students

    NASA Astrophysics Data System (ADS)

    Rachniyom, S.; Toedtanya, K.; Wuttiprom, S.

    2017-09-01

    The purpose of this research was to develop students’ concept of and retention rate in relation to energy conservation. Activities included simple and easy experiments that considered energy transformation from potential to kinetic energy. The participants were 30 purposively selected grade 10 students in the second semester of the 2016 academic year. The research tools consisted of learning lesson plans and a learning achievement test. Results showed that the experiments worked well and were appropriate as learning activities. The students’ achievement scores increased significantly at the .05 level of statistical significance, the students’ retention rates were at a high level, and learning behaviour was at a good level. These simple experiments allowed students to learn to demonstrate to their peers and encouraged them to use familiar models to explain phenomena in daily life.

  19. Direct power comparisons between simple LOD scores and NPL scores for linkage analysis in complex diseases.

    PubMed

    Abreu, P C; Greenberg, D A; Hodge, S E

    1999-09-01

    Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, in a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of the heterozygote between that of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture α = 1.0, 0.7, 0.5, and 0.3; α = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power using MMLS-C and NPL, as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. The MMLS-C was uniformly more powerful than NPL for most cases we examined, except when linkage information was low, and was close to the results for the true model under locus heterogeneity. We still found better power for MMLS-C compared with NPL in the affecteds-only analysis. The results show that the use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.

  20. Fault diagnosis and fault-tolerant finite control set-model predictive control of a multiphase voltage-source inverter supplying BLDC motor.

    PubMed

    Salehifar, Mehdi; Moreno-Equilaz, Manuel

    2016-01-01

    Due to its fault tolerance, a multiphase brushless direct current (BLDC) motor can meet the high reliability demands of electric vehicle applications. The voltage-source inverter (VSI) supplying the motor is subject to open-circuit faults. Therefore, it is necessary to design a fault-tolerant (FT) control algorithm with an embedded fault diagnosis (FD) block. In this paper, finite control set-model predictive control (FCS-MPC) is developed to implement the fault-tolerant control algorithm of a five-phase BLDC motor. The developed control method is fast, simple, and flexible. An FD method based on information already available from the control block is proposed; this method is simple, robust to common transients in the motor, and able to localize multiple open-circuit faults. The proposed FD and FT control algorithms are embedded in a five-phase BLDC motor drive. In order to validate the theory presented, simulations and experiments are conducted on a five-phase two-level VSI supplying a five-phase BLDC motor. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
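    The core FCS-MPC loop is small enough to sketch. The following illustrates one control period for a generic single RL phase: enumerate the inverter's discrete voltage levels, predict the next-sample current with a forward-Euler model, and apply the level with the lowest tracking cost. All parameter values are invented, and the paper's actual controller addresses a five-phase machine with embedded fault diagnosis:

```python
# Generic finite control set-model predictive control (FCS-MPC) step for a
# single RL phase. Parameters are illustrative, not from the paper.
R, L, Ts, Vdc = 0.5, 1e-3, 1e-4, 48.0    # ohms, henry, seconds, volts
LEVELS = (-Vdc, 0.0, +Vdc)               # candidate discrete phase voltages

def fcs_mpc_step(i_now: float, i_ref: float) -> float:
    """Pick the voltage level minimizing the one-step current tracking cost."""
    def predict(v: float) -> float:      # i(k+1) = i + Ts/L * (v - R*i)
        return i_now + (Ts / L) * (v - R * i_now)
    return min(LEVELS, key=lambda v: (i_ref - predict(v)) ** 2)

# One simulated control period
i, i_ref = 2.0, 5.0
v_applied = fcs_mpc_step(i, i_ref)
i_next = i + (Ts / L) * (v_applied - R * i)
print(f"apply {v_applied:+.0f} V -> i(k+1) = {i_next:.2f} A")
```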

  1. A simple method for assessment of muscle force, velocity, and power producing capacities from functional movement tasks.

    PubMed

    Zivkovic, Milena Z; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan

    2017-07-01

    A range of force (F) and velocity (V) data obtained from functional movement tasks (e.g., running, jumping, throwing, lifting, cycling) performed under a variety of external loads have typically revealed strong and approximately linear F-V relationships. The regression model parameters reveal the maximum F (F-intercept), V (V-intercept), and power (P) producing capacities of the tested muscles. The aim of the present study was to evaluate the level of agreement between the routinely used "multiple-load model" and a simple "two-load model" based on direct assessment of the F-V relationship from only two external loads. Twelve participants were tested on maximum-performance vertical jumps, cycling, bench press throws, and bench pulls performed against a variety of loads. All four tested tasks revealed both exceptionally strong relationships between the parameters of the two models (median R = 0.98) and a lack of meaningful differences between their magnitudes (fixed bias below 3.4%). Therefore, the addition of another load to standard tests of various functional tasks, typically conducted under a single set of mechanical conditions, could allow for the assessment of muscle mechanical properties such as the muscle F, V, and P producing capacities.
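    The two-load model amounts to passing a straight line through two measured (F, V) points and reading off its intercepts; for a linear F-V relationship the maximum power is then F0·V0/4. A quick sketch with invented jump measurements:

```python
def two_load_fv(f1, v1, f2, v2):
    """Fit F = F0 * (1 - V/V0) through two (force, velocity) points.

    Returns (F0, V0, Pmax), where Pmax = F0 * V0 / 4 for a linear
    F-V relationship.
    """
    slope = (f2 - f1) / (v2 - v1)        # negative for a valid F-V line
    f0 = f1 - slope * v1                 # force at zero velocity
    v0 = -f0 / slope                     # velocity at zero force
    return f0, v0, f0 * v0 / 4.0

# Invented jump data: (mean force N, mean velocity m/s) under two loads
F0, V0, Pmax = two_load_fv(1800.0, 1.2, 2300.0, 0.7)
print(f"F0 = {F0:.0f} N, V0 = {V0:.2f} m/s, Pmax = {Pmax:.0f} W")
```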

  2. Emulation of the MBM-MEDUSA model: exploring the sea level and the basin-to-shelf transfer influence on the system dynamics

    NASA Astrophysics Data System (ADS)

    Ermakov, Ilya; Crucifix, Michel; Munhoven, Guy

    2013-04-01

    Complex climate models impose a high computational burden; however, computational limitations may be avoided by using emulators. In this work we present several approaches for the dynamical emulation (also called metamodelling) of the Multi-Box Model (MBM) coupled to the Model of Early Diagenesis in the Upper Sediment A (MEDUSA), which simulates the carbon cycle of the ocean and atmosphere [1]. We consider two experiments performed with MBM-MEDUSA that explore the Basin-to-Shelf Transfer (BST) dynamics. In both experiments the sea level is varied according to a paleo sea-level reconstruction. Such experiments are interesting because the BST is an important cause of CO2 variation and the dynamics is potentially nonlinear. The output that we are interested in is the variation of the carbon dioxide partial pressure in the atmosphere over the Pleistocene. The first experiment keeps the BST constant during the simulation. In the second experiment the BST is interactively adjusted according to the sea level, since the sea level is the primary control of the growth and decay of coral reefs and other shelf carbon reservoirs. The main aim of the present contribution is to create a metamodel of MBM-MEDUSA using the Dynamic Emulation Modelling methodology [2] and to compare the results obtained using linear and nonlinear methods. The first step in the emulation methodology used in this work is to identify the structure of the metamodel. In order to select an optimal approach for emulation, we compare the identification results obtained with simple linear and more complex nonlinear models. For the first experiment, simple linear regression with the least-squares method is sufficient to obtain a 99.9% fit between the temporal outputs of the model and the metamodel. For the second experiment the MBM's output is highly nonlinear. In this case we apply nonlinear models such as NARX, the Hammerstein model, and an 'ad hoc' switching model. After the identification we perform the parameter mapping using spline interpolation and validate the emulator on a new set of parameters. References: [1] G. Munhoven, "Glacial-interglacial rain ratio changes: Implications for atmospheric CO2 and ocean-sediment interaction," Deep-Sea Res Pt II, vol. 54, pp. 722-746, 2007. [2] A. Castelletti et al., "A general framework for Dynamic Emulation Modelling in environmental problems," Environ Modell Softw, vol. 34, pp. 5-18, 2012.

  3. Toward a molecular programming language for algorithmic self-assembly

    NASA Astrophysics Data System (ADS)

    Patitz, Matthew John

    Self-assembly is the process whereby relatively simple components autonomously combine to form more complex objects. Nature exhibits self-assembly to form everything from microscopic crystals to living cells to galaxies. With a desire to both form increasingly sophisticated products and to understand the basic components of living systems, scientists have developed and studied artificial self-assembling systems. One such framework is the Tile Assembly Model introduced by Erik Winfree in 1998. In this model, simple two-dimensional square 'tiles' are designed so that they self-assemble into desired shapes. The work in this thesis consists of a series of results which build toward the future goal of designing an abstracted, high-level programming language for designing the molecular components of self-assembling systems which can perform powerful computations and form into intricate structures. The first two sets of results demonstrate self-assembling systems which perform infinite series of computations that characterize computably enumerable and decidable languages, and exhibit tools for algorithmically generating the necessary sets of tiles. In the next chapter, methods for generating tile sets which self-assemble into complicated shapes, namely a class of discrete self-similar fractal structures, are presented. Next, a software package for graphically designing tile sets, simulating their self-assembly, and debugging designed systems is discussed. Finally, a high-level programming language which abstracts much of the complexity and tedium of designing such systems, while preventing many of the common errors, is presented. The summation of this body of work presents a broad coverage of the spectrum of desired outputs from artificial self-assembling systems and a progression in the sophistication of tools used to design them. By creating a broader and deeper set of modular tools for designing self-assembling systems, we hope to increase the complexity which is attainable. These tools provide a solid foundation for future work in both the Tile Assembly Model and explorations into more advanced models.

  4. Glassy behaviour in simple kinetically constrained models: topological networks, lattice analogues and annihilation-diffusion

    NASA Astrophysics Data System (ADS)

    Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.

    2002-02-01

    We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the dynamics and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory. The comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation. We find no FDT breakdown when the correct normalization is chosen.

  5. Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.

    PubMed

    Lv, Jie; Havlak, Paul; Putnam, Nicholas H

    2011-10-05

    Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]") and is available as open-source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to restrict genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
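    The constrained-rearrangement idea can be sketched in a few lines: propose random rearrangements on signed gene orders and reject any move that separates the constrained genes onto different chromosomes. The toy below (random inversions and tail-swap translocations over a two-chromosome genome) is written for this summary and is not the authors' DCJ-[C] implementation, which operates on the full DCJ move set:

```python
import random

rng = random.Random(0)

# Toy genome: chromosomes as lists of signed genes (sign = orientation)
genome = [list(range(1, 11)), list(range(11, 21))]
constrained = {2, 5, 8}   # these genes must stay on a single chromosome

def chromosome_of(g, gene):
    for idx, chrom in enumerate(g):
        if gene in (abs(x) for x in chrom):
            return idx
    raise ValueError(gene)

def linked(g):
    """True if all constrained genes still share one chromosome."""
    return len({chromosome_of(g, gene) for gene in constrained}) == 1

def random_move(genome):
    """Propose an inversion or a reciprocal translocation on a copy."""
    g = [chrom[:] for chrom in genome]
    if rng.random() < 0.9:                        # inversion within a chromosome
        chrom = rng.choice(g)
        i, j = sorted(rng.sample(range(len(chrom) + 1), 2))
        chrom[i:j] = [-x for x in reversed(chrom[i:j])]
    else:                                         # translocation: swap tails
        c1, c2 = rng.sample(range(len(g)), 2)
        i, j = rng.randrange(len(g[c1])), rng.randrange(len(g[c2]))
        g[c1], g[c2] = g[c1][:i] + g[c2][j:], g[c2][:j] + g[c1][i:]
    return g

accepted = 0
for _ in range(1000):
    proposal = random_move(genome)
    if linked(proposal):                          # reject constraint-breaking moves
        genome, accepted = proposal, accepted + 1
print(f"accepted {accepted}/1000 moves; chromosome sizes now "
      f"{[len(c) for c in genome]}")
```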

  6. Manipulators with flexible links: A simple model and experiments

    NASA Technical Reports Server (NTRS)

    Shimoyama, Isao; Oppenheim, Irving J.

    1989-01-01

    A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real-time computation as might be applied in model-based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.

  7. Study of the model of hole superconductivity in multiple band cases and its application to transition metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X.Q.

    1992-01-01

    The authors have studied a simple model consisting of a chain of atoms with two atoms per unit cell. This model develops two bands when the inter-cell and intra-cell hopping amplitudes are different. They have found that superconductivity predominantly occurs when the Fermi level is close to the top of the upper band, where the wavefunction has antibonding character both inside the unit cell and between unit cells. Superconductivity occurs only in a restricted parameter range when the Fermi level is close to the top of the lower band, because of the repulsive interaction within the unit cell. They find that pair expectation values that 'mix' carriers of both bands can exist when interband interactions other than the V12 of Suhl et al. are present, but the magnitude of the 'mixed-pair' order parameters is much smaller than that of the intra-band pairs. The V12 of Suhl et al. is the most important interband interaction and gives rise to the main features of a two-band model: a single transition temperature and two different gaps. They have used the model of hole superconductivity to study the variation of T_c across the transition metal series (the Matthias rules). They have found that the observed T_c's are consistent with superconductivity of a metal with multiple bands at the Fermi level being caused by the single band with the strongest antibonding character at the Fermi level. When the Fermi level is in the lower part of a band, there is no superconductivity. As the band is gradually filled, T_c rises, passes through a maximum, then drops to zero when the band is full. This characteristic feature is independent of any fine structure of the band. The position and width of the peak are correlated. Quantitative agreement with the experimental results is obtained by choosing the parameters of the on-site Coulomb interaction U, the modulated hopping term Δt, and the nearest-neighbor repulsion V to fit the magnitude of T_c and the positions of the experimental peaks.

  8. SIMPLE MODEL OF ICE SEGREGATION USING AN ANALYTIC FUNCTION TO MODEL HEAT AND SOIL-WATER FLOW.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1984-01-01

    This paper reports on the development of a simple two-dimensional model of coupled heat and soil-water flow in freezing or thawing soil. The model also estimates ice-segregation (frost-heave) evolution. Ice segregation in soil results from water drawn into a freezing zone by hydraulic gradients created by the freezing of soil-water. Thus, with a favorable balance between the rate of heat extraction and the rate of water transport to a freezing zone, segregated ice lenses may form.

  9. Effects of chirp on two-dimensional Fourier transform electronic spectra.

    PubMed

    Tekavec, Patrick F; Myers, Jeffrey A; Lewis, Kristin L M; Fuller, Franklin D; Ogilvie, Jennifer P

    2010-05-24

    We examine the effect that pulse chirp has on the shape of two-dimensional electronic spectra through calculations and experiments. For the calculations we use a model system with two electronic levels and a solvent interaction represented by a simple Gaussian correlation function, and compare the resulting spectra to experiments carried out on an organic dye molecule (Rhodamine 800). Both calculations and experiments show that distortions due to chirp are most significant when the pulses used in the experiment have different amounts of chirp, introducing peak-shape asymmetry that could be interpreted as spectrally dependent relaxation. When all pulses have similar chirp the distortions are reduced, but they still affect the anti-diagonal symmetry of the peak shapes and introduce negative features that could be interpreted as excited-state absorption.

  10. Challenges for modeling global gene regulatory networks during development: insights from Drosophila.

    PubMed

    Wilczynski, Bartek; Furlong, Eileen E M

    2010-04-15

    Development is regulated by dynamic patterns of gene expression, which are orchestrated through the action of complex gene regulatory networks (GRNs). Substantial progress has been made in modeling transcriptional regulation in recent years, ranging from qualitative "coarse-grain" models operating at the gene level to very "fine-grain" quantitative models operating at the biophysical transcription factor-DNA level. Recent advances in genome-wide studies have revealed an enormous increase in the size and complexity of GRNs. Even relatively simple developmental processes can involve hundreds of regulatory molecules, with extensive interconnectivity and cooperative regulation. This leads to an explosion in the number of regulatory functions, effectively impeding Boolean-based qualitative modeling approaches. At the same time, the lack of information on the biophysical properties of the majority of transcription factors within a global network restricts quantitative approaches. In this review, we explore the current challenges in moving from modeling medium-scale, well-characterized networks to more poorly characterized global networks. We suggest integrating coarse- and fine-grain approaches to model gene regulatory networks in cis. We focus on two very well-studied examples from Drosophila, which likely represent typical developmental regulatory modules across metazoans. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  11. Nonlinear and threshold-dominated runoff generation controls DOC export in a small peat catchment

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Broder, T.; Biester, H.

    2017-03-01

    We used a relatively simple two-layer, coupled hydrology-biogeochemistry model to simultaneously simulate streamflow and stream dissolved organic carbon (DOC) concentrations in a small lead- and arsenic-contaminated upland peat catchment in northwestern Germany. The model procedure was informed by an initial data mining analysis, in combination with regression relationships of discharge, DOC, and element export. We assessed the internal model DOC processing based on stream DOC hysteresis patterns and 3-hourly time step groundwater level and soil DOC data for two consecutive summer periods in 2013 and 2014. The parsimonious model (i.e., few calibrated parameters) showed the importance of nonlinear and rapid near-surface runoff generation mechanisms that caused around 60% of the simulated DOC load. The total load was high even though these pathways were only activated during storm events, on average 30% of the monitoring time, as also shown by the experimental data. Overall, the drier period 2013 resulted in increased nonlinearity but exported less DOC (115 ± 11 kg C ha⁻¹ yr⁻¹) compared to the equivalent but wetter period in 2014 (189 ± 38 kg C ha⁻¹ yr⁻¹). The exceedance of a critical water table threshold (-10 cm) triggered a rapid near-surface runoff response with associated higher DOC transport connecting all available DOC pools and subsequent dilution. We conclude that the combination of detailed experimental work with relatively simple, coupled hydrology-biogeochemistry models not only allowed the model to be internally constrained but also provided important insight into how DOC and tightly coupled pollutants or trace elements are mobilized.
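    The threshold behaviour described above can be caricatured with a two-pathway bucket: a slow, DOC-poor outflow that is always active, plus a fast, DOC-rich near-surface pathway that switches on once storage exceeds the level corresponding to the -10 cm water-table threshold. All numbers in the sketch are invented; the published model is calibrated against the catchment data:

```python
# Schematic two-layer runoff-DOC toy: storage S (mm) maps to a water-table
# depth; above the threshold a fast near-surface pathway activates and
# exports DOC-rich water. All parameter values are illustrative.
S_MAX = 300.0                      # storage at surface saturation (mm)
THRESHOLD = 0.90 * S_MAX           # storage at the -10 cm water-table depth
K_SLOW, K_FAST = 0.02, 0.5         # linear outflow coefficients (/day)
DOC_SLOW, DOC_FAST = 3.0, 25.0     # DOC concentration of each pathway (mg/L)

def step(S, rain_mm):
    S = min(S + rain_mm, S_MAX)
    q_slow = K_SLOW * S                            # always active
    q_fast = K_FAST * max(S - THRESHOLD, 0.0)      # threshold-triggered
    S -= q_slow + q_fast
    load = q_slow * DOC_SLOW + q_fast * DOC_FAST   # mm * mg/L = mg/m^2
    return S, q_slow + q_fast, load

S, total = 200.0, 0.0
for rain in [0, 0, 35, 60, 5, 0, 0, 20, 0, 0]:     # invented daily rain (mm)
    S, q, load = step(S, rain)
    total += load
print(f"10-day DOC export ~ {total:.0f} mg/m^2")
```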

  12. Modeling human diseases with induced pluripotent stem cells: from 2D to 3D and beyond.

    PubMed

    Liu, Chun; Oikonomopoulos, Angelos; Sayed, Nazish; Wu, Joseph C

    2018-03-08

    The advent of human induced pluripotent stem cells (iPSCs) presents unprecedented opportunities to model human diseases. Differentiated cells derived from iPSCs in two-dimensional (2D) monolayers have proven to be a relatively simple tool for exploring disease pathogenesis and underlying mechanisms. In this Spotlight article, we discuss the progress and limitations of the current 2D iPSC disease-modeling platform, as well as recent advancements in the development of human iPSC models that mimic in vivo tissues and organs at the three-dimensional (3D) level. Recent bioengineering approaches have begun to combine different 3D organoid types into a single '4D multi-organ system'. We summarize the advantages of this approach and speculate on the future role of 4D multi-organ systems in human disease modeling. © 2018. Published by The Company of Biologists Ltd.

  13. Interaction of a sodium ion with the water liquid-vapor interface

    NASA Technical Reports Server (NTRS)

    Wilson, M. A.; Pohorille, A.; Pratt, L. R.; MacElroy, R. D. (Principal Investigator)

    1989-01-01

    Molecular dynamics results are presented for the density profile of a sodium ion near the water liquid-vapor interface at 320 K. These results are compared with the predictions of a simple dielectric model for the interaction of a monovalent ion with this interface. The interfacial region described by the model profile is too narrow and the profile decreases too abruptly near the solution interface. Thus, the simple model does not provide a satisfactory description of the molecular dynamics results for ion positions within two molecular diameters from the solution interface where appreciable ion concentrations are observed. These results suggest that surfaces associated with dielectric models of ionic processes at aqueous solution interfaces should be located at least two molecular diameters inside the liquid phase. A free energy expense of about 2 kcal/mol is required to move the ion within two molecular layers of the free water liquid-vapor interface.

  14. Using Four Downscaling Techniques to Characterize Uncertainty in Updating Intensity-Duration-Frequency Curves Under Climate Change

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; McGinnis, S. A.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are a common input to urban drainage design and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as to the future time period used for updating. The first goal is to determine whether uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations at two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine whether complex downscaling methods and high-resolution climate models are necessary for updating, or whether simpler methods and lower-resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolution to use when updating IDF curves for urban drainage design.
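    A short sketch of method (1), the simple change factor: fit a GEV to each annual-maximum series and scale the observed return level by the ratio of the future to historical model return levels. The data are synthetic and the helper name is mine; note that scipy's genextreme shape parameter follows the opposite sign convention to the usual GEV ξ.

```python
import numpy as np
from scipy.stats import genextreme

def return_level(annual_max, T):
    """GEV return level for return period T (years) from an annual-maximum series."""
    c, loc, scale = genextreme.fit(annual_max)
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

rng = np.random.default_rng(0)
obs      = rng.gumbel(30.0, 8.0, 60)   # observed 6-h annual maxima (mm), synthetic
rcm_hist = rng.gumbel(28.0, 8.0, 60)   # RCM historical run, synthetic
rcm_fut  = rng.gumbel(33.0, 9.0, 60)   # RCM future run, synthetic

T = 10
cf = return_level(rcm_fut, T) / return_level(rcm_hist, T)   # change factor
print("updated 10-yr, 6-h return level:", round(cf * return_level(obs, T), 1), "mm")
```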

  15. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson regression and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum-likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g., log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher-order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
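    Both estimators are available in standard GLM software. A minimal sketch on synthetic data, assuming statsmodels: the robust (modified) Poisson is a Poisson-family GLM with sandwich (heteroskedasticity-consistent) standard errors, and the log-binomial is a binomial-family GLM with a log link, which can fail to converge when fitted probabilities approach 1.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.binomial(1, 0.5, 2000)                  # binary exposure
y = rng.binomial(1, np.exp(-1.6 + 0.4 * x))     # true risk ratio = exp(0.4)
X = sm.add_constant(x)

# robust Poisson: Poisson family plus HC0 "sandwich" covariance
rp = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

# log-binomial: binomial family with a log link
lb = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print("RR, robust Poisson:", round(float(np.exp(rp.params[1])), 3))
print("RR, log-binomial:  ", round(float(np.exp(lb.params[1])), 3))
```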

  16. Automation effects in a multiloop manual control system

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1986-01-01

    An experimental and analytical study was undertaken to investigate human interaction with a simple multiloop manual control system in which the human's activity was systematically varied by changing the level of automation. The system simulated was the longitudinal dynamics of a hovering helicopter. The automation systems stabilized vehicle responses from attitude to velocity to position, and also provided display automation in the form of a flight director. The control-loop structure resulting from the task definition can be considered a simple stereotype of a hierarchical control system. The experimental study was complemented by an analytical modeling effort which utilized simple crossover models of the human operator. It was shown that such models can be extended to the description of multiloop tasks involving preview and precognitive human operator behavior. The existence of time-optimal manual control behavior was established for these tasks, and the role which internal models may play in establishing human-machine performance was discussed.
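    For reference, the crossover model invoked here is the standard McRuer form: near the gain-crossover frequency the trained operator adapts so that the combined operator-vehicle open-loop transfer function behaves like an integrator with a delay (conventional notation, not taken from this report):

```latex
Y_p(s)\,Y_c(s) \;\approx\; \frac{\omega_c\, e^{-\tau_e s}}{s},
```

    where Y_p is the operator describing function, Y_c the controlled element, ω_c the crossover frequency, and τ_e the effective time delay.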

  17. Single Canonical Model of Reflexive Memory and Spatial Attention

    PubMed Central

    Patel, Saumil S.; Red, Stuart; Lin, Eric; Sereno, Anne B.

    2015-01-01

    Many neurons in the dorsal and ventral visual stream have the property that after a brief visual stimulus presentation in their receptive field, the spiking activity in these neurons persists above their baseline levels for several seconds. This maintained activity is not always correlated with the monkey’s task and its origin is unknown. We have previously proposed a simple neural network model, based on shape selective neurons in monkey lateral intraparietal cortex, which predicts the valence and time course of reflexive (bottom-up) spatial attention. In the same simple model, we demonstrate here that passive maintained activity or short-term memory of specific visual events can result without need for an external or top-down modulatory signal. Mutual inhibition and neuronal adaptation play distinct roles in reflexive attention and memory. This modest 4-cell model provides the first simple and unified physiologically plausible mechanism of reflexive spatial attention and passive short-term memory processes. PMID:26493949

  18. Energy pumping analysis of skating motion in a half pipe and on a level surface

    NASA Astrophysics Data System (ADS)

    Feng, Z. C.; Xin, Ming

    2015-01-01

    In this paper, an energy-pumping mechanism for locomotion is analysed. The pumping is accomplished by exerting forces perpendicular to the direction of motion. The paper attempts to demonstrate an interesting application of classical mechanics to two sporting events: a person skating in a half pipe and a person travelling on a level surface on a skateboard. The equations of motion, based on simplified mechanical models, are derived using Lagrangian mechanics. The energy-pumping phenomenon is revealed through numerical simulations with simple pumping actions. The result presented in this paper can be used as an interesting class project in undergraduate mechanics or physics courses. It also motivates potential new applications of energy pumping in many engineering fields.

  19. Multi-level characterization of balanced inhibitory-excitatory cortical neuron network derived from human pluripotent stem cells.

    PubMed

    Nadadhur, Aishwarya G; Emperador Melero, Javier; Meijer, Marieke; Schut, Desiree; Jacobs, Gerbren; Li, Ka Wan; Hjorth, J J Johannes; Meredith, Rhiannon M; Toonen, Ruud F; Van Kesteren, Ronald E; Smit, August B; Verhage, Matthijs; Heine, Vivi M

    2017-01-01

    The generation of neuronal cultures from human induced pluripotent stem cells (hiPSCs) serves the study of human brain disorders. However, we lack neuronal networks with balanced excitatory-inhibitory activity that are suitable for single-cell analysis. We generated low-density networks of hiPSC-derived GABAergic and glutamatergic cortical neurons, using two different co-culture models with astrocytes. We show that these cultures have balanced excitatory-inhibitory synaptic identities using confocal microscopy, electrophysiological recordings, calcium imaging and mRNA analysis. These simple and robust protocols offer the opportunity for single-cell to multi-level analysis of patient hiPSC-derived cortical excitatory-inhibitory networks, thereby creating advanced tools to study disease mechanisms underlying neurodevelopmental disorders.

  20. Fault-Mechanism Simulator

    ERIC Educational Resources Information Center

    Guyton, J. W.

    1972-01-01

    An inexpensive, simple mechanical model of a fault can be produced to simulate the effects leading to an earthquake. This model has been used successfully with students from elementary to college levels and can be demonstrated to classes as large as thirty students. (DF)

  1. A detailed comparison of optimality and simplicity in perceptual decision-making

    PubMed Central

    Shen, Shan; Ma, Wei Ji

    2017-01-01

    Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259

  2. Tier 1 Rice Model for Estimating Pesticide Concentrations in Rice Paddies

    EPA Science Inventory

    The Tier 1 Rice Model estimates screening level aquatic concentrations of pesticides in rice paddies. It is a simple pesticide soil:water partitioning model with default values for water volume, soil mass, and organic carbon. Pesticide degradation is not considered in the mode...
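    The record is truncated, but the partitioning calculation behind a screening model of this type is a one-liner: the applied mass is split between the water column and the sorbed phase through Kd = Koc × foc. A sketch with invented inputs (these are not the model's regulatory defaults):

```python
def paddy_water_conc(mass_applied_g, water_vol_L, soil_mass_kg, koc_L_per_kg, foc):
    """Screening-level soil:water partitioning with no degradation: all applied
    pesticide ends up either dissolved in the paddy water or sorbed to soil."""
    kd = koc_L_per_kg * foc                              # partition coefficient (L/kg)
    conc_g_per_L = mass_applied_g / (water_vol_L + soil_mass_kg * kd)
    return conc_g_per_L * 1e6                            # micrograms per liter

# illustrative 1-ha paddy: 10 cm of standing water, 5 cm soil interaction layer
print(paddy_water_conc(mass_applied_g=170.0, water_vol_L=1.0e6,
                       soil_mass_kg=6.5e5, koc_L_per_kg=100.0, foc=0.01))
```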

  3. The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California

    USGS Publications Warehouse

    van Mantgem, P.J.; Stephenson, N.L.

    2005-01-01

    1 We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2 We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3 Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4 Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
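    The projection machinery behind such models is a single matrix-vector recursion, n(t+1) = A·n(t). A toy sketch with a hypothetical three-size-class Lefkovitch matrix (all vital rates invented for illustration, not taken from the study):

```python
import numpy as np

# columns: current size class; rows: size class one time step later.
# diagonal = stasis (survival without growth); sub-diagonal = growth into
# the next class; top-right entry = recruitment contributed by large trees.
A = np.array([[0.95, 0.00, 0.02],
              [0.03, 0.97, 0.00],
              [0.00, 0.02, 0.98]])

n = np.array([500.0, 300.0, 200.0])   # initial counts per size class
for _ in range(2):                    # two 5-year time steps
    n = A @ n
print("projected stage counts:", n.round(1), "; total:", round(float(n.sum()), 1))
```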

  4. Anticipatory Cognitive Systems: a Theoretical Model

    NASA Astrophysics Data System (ADS)

    Terenzi, Graziano

    This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive-level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.

  5. Hall effect analysis in irradiated silicon samples with different resistivities

    NASA Astrophysics Data System (ADS)

    Borchi, E.; Bruzzi, M.; Dezillie, B.; Lazanu, S.; Li, Z.; Pirollo, S.

    1999-08-01

    The changes induced by neutron irradiation in n- and p-type silicon samples with starting resistivities from 10 Ω·cm up to 30 kΩ·cm, grown using different techniques such as float-zone (FZ), Czochralski (CZ) and epitaxial, have been analyzed by Van der Pauw and Hall effect measurements. With increasing fluence, each set of samples evolves toward a quasi-intrinsic p-type material. This behavior has been explained in the frame of a two-level model that considers the introduction during irradiation of mainly two defects: a deep acceptor and a deep donor, probably related to the divacancy and to the CiOi complex, placed in the upper and lower half of the forbidden gap, respectively. This simple model explains quantitatively the data on resistivity and Hall coefficient of each set of samples up to a fluence of ≈10¹⁴ n/cm².

  6. On Two-Scale Modelling of Heat and Mass Transfer

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.

    2008-09-01

    Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.

  7. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an autoregressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program, specifically for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
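    A compact illustration of the two simplest estimators compared above, run on a synthetic aggregated series in which process and sampling noise are deliberately confounded (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(20.0)
# site-averaged index: true trend 0.5/yr plus process and sampling noise
y = 10.0 + 0.5 * t + rng.normal(0.0, 1.0, t.size) + rng.normal(0.0, 0.8, t.size)

# simple linear regression trend (the estimator that performed best above)
slope, intercept = np.polyfit(t, y, 1)

# lag-1 autoregressive alternative: y_t = a + b*t + phi*y_{t-1} + e_t
X = np.column_stack([np.ones(t.size - 1), t[1:], y[:-1]])
a, b, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]

print("OLS trend:", round(float(slope), 3), "; AR(1) trend term:", round(float(b), 3))
```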

  8. Laminar and turbulent heating predictions for mars entry vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyong; Yan, Chao; Zheng, Weilin; Zhong, Kang; Geng, Yunfei

    2016-11-01

    Laminar and turbulent heating rates play an important role in the design of Mars entry vehicles. Two distinct gas models, a thermochemical non-equilibrium (real gas) model and a perfect gas model with a specified effective specific heat ratio, are utilized to investigate the aerothermodynamics of the Mars entry vehicle named Mars Science Laboratory (MSL). The Menter shear stress transport (SST) turbulence model with compressibility correction is implemented to account for turbulence effects. The laminar and turbulent heating rates of the two gas models are compared and analyzed in detail. The laminar heating rates predicted by the two gas models are nearly the same on the forebody of the vehicle, while the turbulent heating environments predicted by the real gas model are more severe than those predicted by the perfect gas model. The difference in specific heat ratio between the two gas models not only changes the flow structure but also noticeably increases the heating rates on the afterbody of the vehicle. Simple correlations for turbulent heating augmentation in terms of laminar momentum-thickness Reynolds number, which can be employed as engineering-level design and analysis tools, are also developed from the numerical results. At the time of peak heat flux on the +3σ heat load trajectory, the maximum value of the momentum-thickness Reynolds number on the MSL's forebody is about 500, and the maximum value of the turbulent augmentation factor (turbulent heating rate divided by laminar heating rate) is 5 for the perfect gas model and 8 for the real gas model.

  9. A Comparison of Natural Language Processing Methods for Automated Coding of Motivational Interviewing.

    PubMed

    Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek

    2016-06-01

    Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions using machine learning techniques, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either 1) simple discrete sentence features (DSF model) and 2) more complex recursive neural networks (RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance compared to the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa>0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human derived behavioral codes and could offer substantial improvements to the efficiency and scale in which MI mechanisms of change research and fidelity monitoring are conducted. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Photon-Z mixing in the Weinberg-Salam model: Effective charges and the a = -3 gauge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baulieu, L.; Coquereaux, R.

    1982-04-15

    We study some properties of the Weinberg-Salam model connected with photon-Z mixing. We solve the linear Dyson-Schwinger equations relating full and 1PI boson propagators. The task is made easier by the two-point-function Ward identities that we derive to all orders and in any gauge. Some aspects of the renormalization of the model are also discussed. We display the exact mass-dependent one-loop two-point functions involving the photon and Z field in any linear ξ-gauge. The special gauge a = ξ⁻¹ = −3 is shown to play a peculiar role. In this gauge, the Z field is multiplicatively renormalizable (at the one-loop level), and one can construct both electric and weak effective charges of the theory from the photon and Z propagators, with a very simple expression similar to that of the QED Petermann, Stueckelberg, Gell-Mann and Low charge.

  11. Calculation of tip clearance effects in a transonic compressor rotor

    NASA Technical Reports Server (NTRS)

    Chima, R. V.

    1996-01-01

    The flow through the tip clearance region of a transonic compressor rotor (NASA rotor 37) was computed and compared to aerodynamic probe and laser anemometer data. Tip clearance effects were modeled both by gridding the clearance gap and by using a simple periodicity model across the ungridded gap. The simple model was run with both the full gap height and with half the gap height to simulate a vena contracta effect. Comparisons between computed and measured performance maps and downstream profiles were used to validate the models and to assess the effects of gap height on the simple clearance model. Recommendations were made concerning the use of the simple clearance model. Detailed comparisons were made between the gridded clearance gap solution and the laser anemometer data near the tip at two operating points. The computed results agreed fairly well with the data but overpredicted the extent of the casing separation and underpredicted the wake decay rate. The computations were then used to describe the interaction of the tip vortex, the passage shock, and the casing boundary layer.

  12. Checking the validity of Busquet's ionization temperature with detailed collisional radiative models.

    NASA Astrophysics Data System (ADS)

    Klapisch, M.; Bar-Shalom, A.

    1997-12-01

    Busquet's RADIOM model for an effective ionization temperature Tz is an appealing and simple way to introduce non-LTE effects in hydrocodes. The authors report checking the validity of RADIOM in the optically thin case by comparison with two collisional-radiative models, MICCRON (level-by-level) for C and Al and SCROLL (superconfiguration-by-superconfiguration) for Lu and Au. MICCRON is described in detail. The agreement between the average ion charge ⟨Z⟩ and the corresponding Tz obtained from RADIOM on the one hand, and from MICCRON on the other hand, is excellent for C and Al. The absorption spectra at Tz agree very well with those generated by SCROLL near LTE conditions (small β). Farther from LTE (large β) the agreement is still good, but another effective temperature gives an excellent agreement. It is concluded that the model of Busquet is very good in most cases. There is, however, room for improvement when the departure from LTE is more pronounced, for heavy atoms and for emissivity.

  13. Exploring the Scope of Controlling Quantum Phenomena

    DTIC Science & Technology

    2012-12-12

    them as one level. Two cases of the systems are shown to be equivalent to effective two-level systems. When the pulse is weak, simple relations...along the optical path, and management of this effect is used to achieve spatial localization of TPA. Other control objectives were successfully...the energy levels of the system. The alignment of a rigid diatomic rotor is studied as a model system. The theoretical estimates of PQC behavior are

  14. Teaching New Keynesian Open Economy Macroeconomics at the Intermediate Level

    ERIC Educational Resources Information Center

    Bofinger, Peter; Mayer, Eric; Wollmershauser, Timo

    2009-01-01

    For the open economy, the workhorse model in intermediate textbooks still is the Mundell-Fleming model, which basically extends the investment and savings, liquidity preference and money supply (IS-LM) model to open economy problems. The authors present a simple New Keynesian model of the open economy that introduces open economy considerations…

  15. CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography

    PubMed Central

    Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco

    2014-01-01

    Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261

  16. A simple model for estimating a magnetic field in laser-driven coils

    DOE PAGES

    Fiksel, Gennady; Fox, William; Gao, Lan; ...

    2016-09-26

    Magnetic field generation by laser-driven coils is a promising way of magnetizing plasma in laboratory high-energy-density plasma experiments. A typical configuration consists of two electrodes: one electrode is irradiated with a high-intensity laser beam and the other collects charged particles from the expanding plasma. The two electrodes are separated by a narrow gap, forming a capacitor-like configuration, and are connected with a conducting wire coil. The charge separation in the expanding plasma builds up a potential difference between the electrodes that drives the electrical current in the coil. Magnetic fields of tens to hundreds of teslas generated inside the coil have been reported. This paper presents a simple model that estimates the magnetic field using simple assumptions. The results are compared with the published experimental data.
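    For scale, the field at the centre of a single-turn coil follows from elementary magnetostatics; this back-of-envelope estimate (and the numbers plugged into it) is mine, not the paper's model:

```latex
B \;=\; \frac{\mu_{0} I}{2a} \;\approx\; 63~\mathrm{T}
\qquad \text{for } I \sim 50~\mathrm{kA},\ a \sim 500~\mu\mathrm{m},
```

    which lands in the tens-to-hundreds-of-teslas range quoted above.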

  17. Acoustic Shielding for a Model Scale Counter-rotation Open Rotor

    NASA Technical Reports Server (NTRS)

    Stephens, David B.; Edmane, Envia

    2012-01-01

    The noise shielding benefit of installing an open rotor above a simplified wing or tail is explored experimentally. The test results provide both a benchmark data set for validating shielding prediction tools and an opportunity for a system level evaluation of the noise reduction potential of propulsion noise shielding by an airframe component. A short barrier near the open rotor was found to provide up to 8.5 dB of attenuation at some directivity angles, with tonal sound particularly well shielded. Predictions from two simple shielding theories were found to overestimate the shielding benefit.

  18. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach performs better than other models in predicting the biological activity of chemical compounds, indicating the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
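    A compact sketch of the two-phase idea, assuming nothing beyond numpy: phase 1 runs winner-take-all (simple competitive learning) updates to place the centres, and phase 2 solves the RBF output weights by linear least squares. The data, kernel width, and learning rate are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))                    # molecular descriptors (synthetic)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2         # surrogate "activity"

# phase 1 -- simple competitive learning: move the winning centre toward each sample
k, lr = 10, 0.1
centres = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(20):                              # training epochs
    for x in X:
        w = np.argmin(((centres - x) ** 2).sum(axis=1))
        centres[w] += lr * (x - centres[w])

# phase 2 -- RBF network: Gaussian design matrix, least-squares output weights
sigma = 1.0
d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2 / (2.0 * sigma ** 2))
weights = np.linalg.lstsq(Phi, y, rcond=None)[0]
print("training RMSE:", round(float(np.sqrt(np.mean((Phi @ weights - y) ** 2))), 3))
```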

  19. Examining the Simple View of Reading among Subgroups of Spanish-Speaking English Language Learners

    ERIC Educational Resources Information Center

    Grimm, Ryan Ponce

    2015-01-01

    The Simple View of Reading (SVR; Gough & Tunmer, 1986; Hoover & Gough, 1990) has a longstanding history as a model of reading comprehension, but it has mostly been applied to native English speakers. The SVR posits reading comprehension is a function of the interaction between word-level reading skills and oral language skills. It has been…

  20. A Global Climate Model for Instruction.

    ERIC Educational Resources Information Center

    Burt, James E.

    This paper describes a simple global climate model useful in a freshman or sophomore level course in climatology. There are three parts to the paper. The first part describes the model, which is a global model of surface air temperature averaged over latitude and longitude. Samples of the types of calculations performed in the model are provided.…
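    At its simplest, a model of this kind reduces to a zero-dimensional energy balance: absorbed solar radiation equals emitted longwave radiation. A sketch under that assumption; the effective emissivity standing in for the greenhouse effect is a common textbook device and not necessarily this paper's formulation.

```python
# zero-dimensional energy balance: S0*(1 - albedo)/4 = eps*sigma*T^4
SIGMA = 5.67e-8                        # Stefan-Boltzmann constant (W m^-2 K^-4)
S0, ALBEDO, EPS = 1361.0, 0.30, 0.61   # eps ~ 0.61 mimics the greenhouse effect

T = (S0 * (1.0 - ALBEDO) / (4.0 * SIGMA * EPS)) ** 0.25
print(f"global-mean surface temperature: {T:.1f} K ({T - 273.15:.1f} degC)")
```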

  1. Assessing the Impact of Retreat Mechanisms in a Simple Antarctic Ice Sheet Model Using Bayesian Calibration.

    PubMed

    Ruckert, Kelsey L; Shaffer, Gary; Pollard, David; Guan, Yawen; Wong, Tony E; Forest, Chris E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing climate forcings is an important driver of sea-level changes. Anthropogenic climate change may drive a sizeable AIS tipping point response with subsequent increases in coastal flooding risks. Many studies analyzing flood risks use simple models to project the future responses of the AIS and its sea-level contributions. These analyses have provided important new insights, but they are often silent on the effects of potentially important processes such as Marine Ice Sheet Instability (MISI) or Marine Ice Cliff Instability (MICI). These approximations can be well justified and result in more parsimonious and transparent model structures. This raises the question of how such approximations impact hindcasts and projections. Here, we calibrate a previously published and relatively simple AIS model, which neglects the effects of MICI and regional characteristics, using a combination of observational constraints and a Bayesian inversion method. Specifically, we approximate the effects of missing MICI by comparing our results to those from expert assessments with more realistic models, and we quantify the bias during the last interglacial, when MICI may have been triggered. Our results suggest that the model can approximate the process of MISI and reproduce the projected median melt from some previous expert assessments in the year 2100. Yet our mean hindcast is roughly 3/4 of the observed data during the last interglacial period, and our mean projection is roughly 1/6 and 1/10 of the mean from a model accounting for MICI in the year 2100. These results suggest that missing MICI and/or regional characteristics can lead to a low bias in simulated AIS melt during warming periods, and hence a potential low bias in projected sea levels and flood risks.

  2. Parameter Estimation for Simultaneous Saccharification and Fermentation of Food Waste Into Ethanol Using Matlab Simulink

    NASA Astrophysics Data System (ADS)

    Davis, Rebecca Anne

    The increase in waste disposal and energy costs has provided an incentive to convert carbohydrate-rich food waste streams into fuel. For example, dining halls and restaurants discard foods that require tipping fees for removal. An effective use of food waste may be the enzymatic hydrolysis of the waste to simple sugars and fermentation of the sugars to ethanol. As these wastes have complex compositions which may change day-to-day, experiments were carried out to test the fermentability of two different types of food waste at 27 °C using Saccharomyces cerevisiae yeast (ATCC 4124) and Genencor's STARGEN™ enzyme in batch simultaneous saccharification and fermentation (SSF) experiments. A mathematical model of SSF, based on experimentally matched rate equations for enzyme hydrolysis and yeast fermentation, was developed in Matlab Simulink®. Using Simulink® Parameter Estimation 1.1.3, parameters for hydrolysis and fermentation were estimated through modified Michaelis-Menten and Monod-type equations, with the aim of predicting changes in the levels of ethanol and glycerol from different initial concentrations of glucose, fructose, maltose, and starch. The model predictions and experimental observations agree reasonably well for the two food waste streams and a third validation dataset. The approach of using Simulink® as a dynamic visual model for SSF represents a simple method which can be applied to a variety of biological pathways and may be very useful for systems approaches in metabolic engineering in the future.
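    The core structure, Michaelis-Menten hydrolysis feeding Monod growth and product formation, can be written as four coupled ODEs. A minimal sketch in Python rather than Simulink; every rate constant and yield below is an illustrative placeholder, not one of the fitted values.

```python
from scipy.integrate import solve_ivp

VMAX, KM = 2.0, 10.0     # starch hydrolysis: max rate (g/L/h), half-saturation (g/L)
MUMAX, KS = 0.3, 1.0     # yeast growth: max specific rate (1/h), half-saturation (g/L)
YXS, YPS = 0.10, 0.45    # biomass and ethanol yields on glucose (g/g)

def ssf(t, z):
    starch, glucose, biomass, ethanol = z
    hydrolysis = VMAX * starch / (KM + starch)       # Michaelis-Menten
    mu = MUMAX * glucose / (KS + glucose)            # Monod
    return [-hydrolysis,
            hydrolysis - (mu / YXS) * biomass,       # glucose produced minus consumed
            mu * biomass,
            (YPS / YXS) * mu * biomass]              # ethanol tied to glucose uptake

sol = solve_ivp(ssf, (0.0, 48.0), [100.0, 5.0, 0.5, 0.0])
print("ethanol after 48 h (g/L):", round(float(sol.y[3, -1]), 1))
```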

  3. An Open Learner Model for Trainee Pilots

    ERIC Educational Resources Information Center

    Gakhal, Inderdip; Bull, Susan

    2008-01-01

    This paper investigates the potential for simple open learner models for highly motivated, independent learners, using the example of trainee pilots. In particular we consider whether such users access their learner model to help them identify their current knowledge level, areas of difficulty and specific misconceptions, to help them plan their…

  4. Here We Go Round the M25

    ERIC Educational Resources Information Center

    McCartney, Mark; Walsh, Ian

    2006-01-01

    A simple model for how traffic moves around a closed loop of road is introduced. The consequent analysis of the model can be used as an application of techniques taught at first year undergraduate level, and as a motivator to encourage students to think critically about model formulation and interpretation.

  5. Basinwide response of the Atlantic Meridional Overturning Circulation to interannual wind forcing

    NASA Astrophysics Data System (ADS)

    Zhao, Jian

    2017-12-01

    An eddy-resolving Ocean general circulation model For the Earth Simulator (OFES) and a simple wind-driven two-layer model are used to investigate the role of momentum fluxes in driving the Atlantic Meridional Overturning Circulation (AMOC) variability throughout the Atlantic basin from 1950 to 2010. Diagnostic analysis using the OFES results suggests that interior baroclinic Rossby waves and coastal topographic waves play essential roles in modulating the AMOC interannual variability. The proposed mechanisms are verified in the context of a simple two-layer model with realistic topography and only forced by surface wind. The topographic waves communicate high-latitude anomalies into lower latitudes and account for about 50% of the AMOC interannual variability in the subtropics. In addition, the large scale Rossby waves excited by wind forcing together with topographic waves set up coherent AMOC interannual variability patterns across the tropics and subtropics. The comparisons between the simple model and OFES results suggest that a large fraction of the AMOC interannual variability in the Atlantic basin can be explained by wind-driven dynamics.

  6. Phenomenology of wall-bounded Newtonian turbulence.

    PubMed

    L'vov, Victor S; Pomyalov, Anna; Procaccia, Itamar; Zilitinkevich, Sergej S

    2006-01-01

    We construct a simple analytic model for wall-bounded turbulence, containing only four adjustable parameters. Two of these parameters are responsible for the viscous dissipation of the components of the Reynolds stress tensor. The other two parameters control the nonlinear relaxation of these objects. The model offers an analytic description of the profiles of the mean velocity and the correlation functions of velocity fluctuations in the entire boundary region, from the viscous sublayer, through the buffer layer, and further into the log-law turbulent region. In particular, the model predicts a very simple distribution of the turbulent kinetic energy in the log-law region between the velocity components: the streamwise component contains a half of the total energy whereas the wall-normal and cross-stream components contain a quarter each. In addition, the model predicts a very simple relation between the von Kármán slope k and the turbulent velocity v⁺ in the log-law region (in wall units): v⁺ = 6k. These predictions are in excellent agreement with direct numerical simulation data and with recent laboratory experiments.

  7. A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.

    2007-01-01

    Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work attempts to address two primary objectives. The first objective is to develop a systematic methodology for approximating realistic 3-D fluid actuators, using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.

  8. Acoustic propagation in a thermally stratified atmosphere

    NASA Technical Reports Server (NTRS)

    Vanmoorhem, W. K.

    1985-01-01

    This report describes the activities during the fifth six-month period of the investigation of acoustic propagation in the atmosphere with a realistic temperature profile. Progress has been achieved in two major directions: comparisons between the lapse model and experimental data taken by NASA during the second tower experiment, and development of a model of propagation in an inversion. Data from the second tower experiment became available near the end of 1984 and some comparisons have been carried out, but this work is not complete. Problems with the temperature profiler during the experiment produced temperature profiles that are difficult to fit with the assumed variation of temperature with height, but in cases where reasonable fits have been obtained, agreement between the model and the experiments is close. The major weaknesses in the model appear to be the presence of discontinuities in some regions, the low sound levels predicted near the source height, and difficulties with the argument of the Hankel function being outside the allowable range. Work on the inversion model has progressed slowly; the rays for that case are discussed, along with a simple energy-conservation model of sound-level enhancement in the inversion case.

  9. A method for modelling GP practice level deprivation scores using GIS

    PubMed Central

    Strong, Mark; Maheswaran, Ravi; Pearson, Tim; Fryers, Paul

    2007-01-01

    Background: A measure of general practice level socioeconomic deprivation can be used to explore the association between deprivation and other practice characteristics. An area-based categorisation is commonly chosen as the basis for such a deprivation measure. Ideally a practice population-weighted, area-based deprivation score would be calculated using individual-level spatially referenced data. However, these data are often unavailable. One approach is to link the practice postcode to an area-based deprivation score, but this method has limitations. This study aimed to develop a Geographical Information Systems (GIS) based model that could better predict a practice population-weighted deprivation score in the absence of patient-level data than simple practice postcode linkage. Results: We calculated predicted practice-level Index of Multiple Deprivation (IMD) 2004 deprivation scores using two methods that did not require patient-level data. Firstly we linked the practice postcode to an IMD 2004 score, and secondly we used a GIS model derived using data from Rotherham, UK. We compared our two sets of predicted scores to "gold standard" practice population-weighted scores for practices in Doncaster, Havering and Warrington. Overall, the practice postcode linkage method overestimated "gold standard" IMD scores by 2.54 points (95% CI 0.94, 4.14), whereas our modelling method showed no such bias (mean difference 0.36, 95% CI -0.30, 1.02). The postcode-linked method systematically underestimated the gold standard score in less deprived areas, and overestimated it in more deprived areas. Our modelling method showed a small underestimation in scores at higher levels of deprivation in Havering, but showed no bias in Doncaster or Warrington. The postcode-linked method showed more variability when predicting scores than did the GIS modelling method. Conclusion: A GIS-based model can be used to predict a practice population-weighted, area-based deprivation measure in the absence of patient-level data. Our modelled measure generally had better agreement with the population-weighted measure than did a postcode-linked measure. Our model may also avoid an underestimation of IMD scores in less deprived areas, and overestimation of scores in more deprived areas, seen when using postcode-linked scores. The proposed method may be of use to researchers who do not have access to patient-level spatially referenced data. PMID:17822545
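    The "gold standard" measure is simply a patient-count-weighted average of area-level scores, which makes the contrast with postcode linkage easy to show in a few lines (all numbers hypothetical):

```python
import numpy as np

def practice_imd(patients_by_area, imd_by_area):
    """Population-weighted practice score: area IMD scores weighted by the
    number of the practice's registered patients resident in each area."""
    return float(np.average(imd_by_area, weights=patients_by_area))

# hypothetical practice located in a deprived area (IMD 45.0) but drawing
# most of its patients from less deprived surrounding areas
imd_scores = [45.0, 22.0, 18.0, 30.0]
patients   = [400, 1500, 1200, 600]
print("postcode-linked score:     ", imd_scores[0])
print("population-weighted score: ", round(practice_imd(patients, imd_scores), 1))
```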

  10. Simple inflationary quintessential model. II. Power law potentials

    NASA Astrophysics Data System (ADS)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-09-01

    The present work is a sequel to our previous work [Phys. Rev. D 93, 084018 (2016)], which presented a simple version of an inflationary quintessential model in which the inflationary stage was described by a Higgs-type potential and the quintessential phase by an exponential potential. Additionally, the model predicted a nonsingular universe in the past which was geodesically past-incomplete. It was also found that the model agrees with the Planck 2013 data when running is allowed. However, the model provides a theoretical value of the running which is far smaller than the central value of the best fit in the ns, r, αs ≡ dns/d ln k parameter space, where ns, r, and αs respectively denote the spectral index, the tensor-to-scalar ratio and the running of the spectral index associated with any inflationary model; consequently, to analyze the viability of the model one has to focus on the two-dimensional marginalized confidence level in the allowed domain of the (ns, r) plane, without taking the running into account. Unfortunately, such an analysis shows that this model does not pass this test. In this sequel we therefore propose a family of models governed by a single parameter α ∈ [0, 1], yielding another "inflationary quintessential model" in which the inflation and quintessence regimes are described by a power-law potential and a cosmological constant, respectively. The model is also nonsingular although geodesically past-incomplete, as in the cited model. Moreover, the present one is simpler than the previous model and is in excellent agreement with the observational data. In fact, unlike the previous model, a large number of the models of this family with α ∈ [0, 1/2) match both the Planck 2013 and Planck 2015 data without allowing the running. Thus, the properties of the current family of models, compared to its predecessor, justify its place as a better cosmological model in light of the successive improvement of the observational data.

  11. Bayesian analysis of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Ho, Chih-Hsiang

    1990-10-01

    The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time periods tends to be more variable than a simple Poisson process with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value, as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of the volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
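    The mixing step is a standard identity: integrating the Poisson likelihood against a Gamma(α, β) prior on λ gives the negative binomial marginal for the count N in a window of length t (notation mine):

```latex
P(N = n) \;=\; \int_{0}^{\infty}
  \frac{(\lambda t)^{n} e^{-\lambda t}}{n!}\,
  \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\beta\lambda}}{\Gamma(\alpha)}\, d\lambda
\;=\; \binom{n+\alpha-1}{n}
  \left(\frac{\beta}{\beta+t}\right)^{\!\alpha}
  \left(\frac{t}{\beta+t}\right)^{\!n},
\qquad n = 0, 1, 2, \ldots
```

    The extra variance relative to a constant-rate Poisson enters through the shape parameter α; the simple Poisson model is recovered in the limit α → ∞ with α/β held fixed.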

  12. Membrane Interaction of Antimicrobial Peptides Using E. coli Lipid Extract as Model Bacterial Cell Membranes and SFG Spectroscopy

    PubMed Central

    Soblosky, Lauren; Ramamoorthy, Ayyalusamy; Chen, Zhan

    2015-01-01

    Supported lipid bilayers are used as a convenient model cell membrane system to study biologically important molecule-lipid interactions in situ. However, the lipid bilayer models are often simple, and the results acquired with these models may not provide all pertinent information related to a real cell membrane. In this work, we use sum frequency generation (SFG) vibrational spectroscopy to study molecular-level interactions between the antimicrobial peptides (AMPs) MSI-594, ovispirin-1 G18, magainin 2 and a simple 1,2-dipalmitoyl-d62-sn-glycero-3-phosphoglycerol (dDPPG)-1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoglycerol (POPG) bilayer. We compared such interactions to those between the AMPs and a more complex dDPPG/E. coli polar lipid extract bilayer. We show that to fully understand more complex aspects of peptide-bilayer interaction, such as interaction kinetics, a heterogeneous lipid composition is required, such as the E. coli polar lipid extract. The discrepancy in peptide-bilayer interaction is likely due in part to the difference in bilayer charge between the two systems, since highly negatively charged lipids can promote more favorable electrostatic interactions between the peptide and lipid bilayer. Results presented in this paper indicate that more complex model bilayers are needed to accurately analyze peptide-cell membrane interactions and demonstrate the importance of using an appropriate lipid composition to study AMP interaction properties. PMID:25707312

  13. Intrinsic Fluctuations and Driven Response of Insect Swarms

    NASA Astrophysics Data System (ADS)

    Ni, Rui; Puckett, James G.; Dufresne, Eric R.; Ouellette, Nicholas T.

    2015-09-01

    Animals of all sizes form groups, as acting together can convey advantages over acting alone; thus, collective animal behavior has been identified as a promising template for designing engineered systems. However, models and observations have focused predominantly on characterizing the overall group morphology, and often focus on highly ordered groups such as bird flocks. We instead study a disorganized aggregation (an insect mating swarm), and compare its natural fluctuations with the group-level response to an external stimulus. We quantify the swarm's frequency-dependent linear response and its spectrum of intrinsic fluctuations, and show that the ratio of these two quantities has a simple scaling with frequency. Our results provide a new way of comparing models of collective behavior with experimental data.

  14. A Pole-Zero Filter Cascade Provides Good Fits to Human Masking Data and to Basilar Membrane and Neural Data

    NASA Astrophysics Data System (ADS)

    Lyon, Richard F.

    2011-11-01

    A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
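    A minimal numerical sketch of such a cascade, assuming only numpy: each stage is a two-pole-two-zero (biquad) section evaluated on the imaginary axis, with its zeros placed slightly above its poles. In the actual model the pole and zero dampings vary with stimulus level; here they are fixed for brevity, and all stage parameters are invented.

```python
import numpy as np

def stage(w, fp, zeta_p, fz, zeta_z):
    """One two-pole-two-zero section, evaluated at s = j*w (unity DC gain)."""
    wp, wz = 2 * np.pi * fp, 2 * np.pi * fz
    s = 1j * w
    num = (s**2 + 2 * zeta_z * wz * s + wz**2) / wz**2
    den = (s**2 + 2 * zeta_p * wp * s + wp**2) / wp**2
    return num / den

w = 2 * np.pi * np.logspace(2, 4, 400)    # 100 Hz to 10 kHz
H = np.ones_like(w, dtype=complex)
fp = 8000.0
for _ in range(40):                       # forty cascaded stages
    H *= stage(w, fp, zeta_p=0.12, fz=1.1 * fp, zeta_z=0.20)
    fp *= 0.94                            # pole frequencies step down the cascade

gain_db = 20 * np.log10(np.abs(H))
f_peak = w[gain_db.argmax()] / (2 * np.pi)
print(f"peak gain {gain_db.max():.1f} dB near {f_peak:.0f} Hz")
```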

  15. No way out? The double-bind in seeking global prosperity alongside mitigated climate change

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2012-01-01

    In a prior study (Garrett, 2011), I introduced a simple economic growth model designed to be consistent with general thermodynamic laws. Unlike traditional economic models, civilization is viewed only as a well-mixed global whole with no distinction made between individual nations, economic sectors, labor, or capital investments. At the model core is a hypothesis that the global economy's current rate of primary energy consumption is tied through a constant to a very general representation of its historically accumulated wealth. Observations support this hypothesis, and indicate that the constant's value is λ = 9.7 ± 0.3 milliwatts per 1990 US dollar. It is this link that allows for treatment of seemingly complex economic systems as simple physical systems. Here, this growth model is coupled to a linear formulation for the evolution of globally well-mixed atmospheric CO2 concentrations. While very simple, the coupled model provides faithful multi-decadal hindcasts of trajectories in gross world product (GWP) and CO2. Extending the model to the future, the model suggests that the well-known IPCC SRES scenarios substantially underestimate how much CO2 levels will rise for a given level of future economic prosperity. For one, global CO2 emission rates cannot be decoupled from wealth through efficiency gains. For another, like a long-term natural disaster, future greenhouse warming can be expected to act as an inflationary drag on the real growth of global wealth. For atmospheric CO2 concentrations to remain below a "dangerous" level of 450 ppmv (Hansen et al., 2007), model forecasts suggest that there will have to be some combination of an unrealistically rapid rate of energy decarbonization and nearly immediate reductions in global civilization wealth. Effectively, it appears that civilization may be in a double-bind. If civilization does not collapse quickly this century, then CO2 levels will likely end up exceeding 1000 ppmv; but, if CO2 levels rise by this much, then the risk is that civilization will gradually tend towards collapse.
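    The model's core hypothesis fits in a few lines: primary energy consumption equals λ times accumulated wealth, so unabated emissions compound with economic growth. In the sketch below only λ is taken from the paper; the growth rate, the effective airborne-emissions coefficient, and the initial conditions are rough, illustrative assumptions.

```python
LAMBDA = 9.7e-3    # watts per 1990 US dollar (the paper's fitted constant)
GROWTH = 0.022     # assumed net growth of wealth per year
EMIT = 4.0e-21     # assumed effective airborne CO2 rise (ppmv) per joule consumed

wealth, co2 = 2.0e15, 390.0          # ~19 TW implied; 1990 USD and ppmv
for decade in range(9):              # 2010 -> 2100, no decarbonization
    energy_w = LAMBDA * wealth       # hypothesis: consumption tied to wealth
    co2 += EMIT * energy_w * 3.156e7 * 10.0    # a decade of emissions, no sinks
    wealth *= (1.0 + GROWTH) ** 10
print(f"CO2 by 2100 with no decarbonization: ~{co2:.0f} ppmv")
```

    With these placeholder numbers the recursion lands near 1000 ppmv by 2100, in line with the no-collapse scenario sketched in the abstract.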

  16. Objective evaluation of surgical competency for minimally invasive surgery with a collection of simple tests

    PubMed Central

    Gonzalez-Neira, Eliana Maria; Jimenez-Mendoza, Claudia Patricia; Rugeles-Quintero, Saul

    2016-01-01

    Objective: This study aims at determining whether a collection of 16 motor tests on a physical simulator can objectively discriminate and evaluate practitioners' competency level, i.e. novice, resident, and expert. Methods: An experimental design with three study groups (novice, resident, and expert) was developed to test the evaluation power of each of the 16 simple tests. An ANOVA and a Student-Newman-Keuls (SNK) test were used to analyze the results of each test to determine which of them can discriminate participants' competency level. Results: Four of the 16 tests discriminated all three competency levels and 15 discriminated at least two of the three groups (α = 0.05). Moreover, two other tests differentiated the beginner level from the intermediate level, and seven other tests differentiated the intermediate level from the expert level. Conclusion: The competency level of a practitioner of minimally invasive surgery can be evaluated by a specific collection of basic tests in a physical surgical simulator. Reducing the number of tests needed to discriminate the competency level of surgeons can be the aim of future research. PMID:27226664

  17. Objective evaluation of surgical competency for minimally invasive surgery with a collection of simple tests.

    PubMed

    Gonzalez-Neira, Eliana Maria; Jimenez-Mendoza, Claudia Patricia; Suarez, Daniel R; Rugeles-Quintero, Saul

    2016-03-30

This study aims to determine whether a collection of 16 motor tests on a physical simulator can objectively discriminate and evaluate practitioners' competency level, i.e., novice, resident, and expert. An experimental design with three study groups (novice, resident, and expert) was developed to test the evaluation power of each of the 16 simple tests. An ANOVA and a Student-Newman-Keuls (SNK) test were used to analyze the results of each test and determine which of them can discriminate participants' competency level. Four of the 16 tests discriminated all three competency levels, and 15 discriminated at least two of the three groups (α = 0.05). Moreover, two other tests differentiated the novice level from the resident level, and seven others differentiated the resident level from the expert level. The competency level of a practitioner of minimally invasive surgery can be evaluated with a specific collection of basic tests on a physical surgical simulator. Reducing the number of tests needed to discriminate surgeons' competency level can be the aim of future research.

  18. Evaluation of Savannah River Plant emergency response models using standard and nonstandard meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoel, D.D.

    1984-01-01

Two computer codes have been developed for operational use in performing real time evaluations of atmospheric releases from the Savannah River Plant (SRP) in South Carolina. These codes, based on mathematical models, are part of the SRP WIND (Weather Information and Display) automated emergency response system. The accuracy of ground level concentrations from a Gaussian puff-plume model and a two-dimensional sequential puff model is being evaluated with data from a series of short range diffusion experiments using sulfur hexafluoride as a tracer. The models use meteorological data collected from 7 towers on SRP and at the 300 m WJBF-TV tower about 15 km northwest of SRP. The winds and the stability, which is based on turbulence measurements, are measured at the 60 m stack heights. These results are compared to downwind concentrations using only standard meteorological data, i.e., adjusted 10 m winds and stability determined by the Pasquill-Turner stability classification method. Scattergrams and simple statistics were used for model evaluations. Results indicate predictions within accepted limits for the puff-plume code and a bias in the sequential puff model predictions using the meteorologist-adjusted nonstandard data. 5 references, 4 figures, 2 tables.

  19. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high-resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver in a one-way fashion, feeding waves into the latter via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  20. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    PubMed Central

    Ratcliff, Roger; Strayer, David

    2014-01-01

A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks, which suggests that common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
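
    A minimal simulation of the model's central idea, under illustrative parameters and omitting the nondecision time and across-trial variability of the full Ratcliff framework: evidence accumulates noisily toward a single absorbing boundary, and the first-passage times form the predicted RT distribution.

      import numpy as np

      rng = np.random.default_rng(1)

      def one_boundary_rts(drift, boundary, n_trials=2000, dt=1e-3, t_max=3.0):
          """First-passage times of a drift-diffusion process with one boundary."""
          steps = int(t_max / dt)
          increments = drift * dt + np.sqrt(dt) * rng.standard_normal((n_trials, steps))
          paths = np.cumsum(increments, axis=1)
          crossed = paths >= boundary
          hit = crossed.any(axis=1)              # trials that responded within t_max
          first = crossed[hit].argmax(axis=1)    # index of first boundary crossing
          return (first + 1) * dt

      rts = one_boundary_rts(drift=1.5, boundary=1.0)
      print(f"{rts.size} responses, mean RT {rts.mean():.3f} s, "
            f"right-skewed as in empirical RT data")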

  1. Simulating initial attack with two fire containment models

    Treesearch

    Romain M. Mees

    1985-01-01

    Given a variable rate of fireline construction and an elliptical fire growth model, two methods for estimating the required number of resources, time to containment, and the resulting fire area were compared. Five examples illustrate some of the computational differences between the simple and the complex methods. The equations for the two methods can be used and...

  2. Modelling Students' Visualisation of Chemical Reaction

    ERIC Educational Resources Information Center

    Cheng, Maurice M. W.; Gilbert, John K.

    2017-01-01

    This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…

  3. Tested Demonstrations.

    ERIC Educational Resources Information Center

    Gilbert, George L., Ed.

    1988-01-01

    Describes two demonstrations for college level chemistry courses including: "Electrochemical Cells Using Sodium Silicate" and "A Simple, Vivid Demonstration of Selective Precipitation." Lists materials, preparation, procedures, and precautions. (CW)

  4. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
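
    The data-generating side of such a simulation is simple to reproduce. The sketch below draws a two-level (random-intercept) logistic dataset with assumed parameter values; the fitting step, which the article performs with packages such as SAS GLIMMIX and SuperMix, is omitted here.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical two-level design: observations nested in clusters
      n_clusters, n_per = 50, 20
      sigma_u = 1.0                    # SD of the cluster-level random intercept
      beta0, beta1 = -0.5, 0.8         # fixed intercept and slope (illustrative)

      u = rng.normal(0.0, sigma_u, n_clusters)     # random intercepts
      x = rng.normal(size=(n_clusters, n_per))     # observation-level covariate
      eta = beta0 + beta1 * x + u[:, None]         # linear predictor
      p = 1.0 / (1.0 + np.exp(-eta))               # inverse-logit link
      y = rng.binomial(1, p)                       # binary outcomes

      print(f"overall event rate: {y.mean():.3f}")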

  5. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    PubMed Central

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  6. A simple kinetic model of a Ne-H2 Penning-plasma laser

    NASA Astrophysics Data System (ADS)

    Petrov, G. M.; Stefanova, M. S.; Pramatarov, P. M.

    1995-09-01

A simple kinetic model of the Ne-H2 Penning-Plasma Laser (PPL) (NeI 585.3 nm) is proposed. The negative glow of a hollow cathode discharge at intermediate pressures is considered as the active medium. The balance equations for the upper and lower laser levels, electrons, ions and electron energy are solved. The dependences of the laser gain on the discharge conditions (Ne and H2 partial pressures, discharge current) are calculated and measured. The calculated values are in good agreement with the experimental data.

  7. Polycrystalline ZrTe5 Parameterized as a Narrow Band Gap Semiconductor for Thermoelectric Performance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Samuel A.; Witting, Ian; Aydemir, Umut

The transition-metal pentatellurides HfTe5 and ZrTe5 have been studied for their exotic transport properties with much debate over the transport mechanism, band gap, and cause of the resistivity behavior, including a large low-temperature resistivity peak. Single crystals grown by the chemical-vapor-transport method have shown an n-p transition of the Seebeck coefficient at the same temperature as a peak in the resistivity. We show that behavior similar to that of single crystals can be observed in iodine-doped polycrystalline samples but that undoped polycrystalline samples exhibit drastically different properties: they are p type over the entire temperature range. Additionally, the thermal conductivity for polycrystalline samples is much lower, 1.5 W m⁻¹ K⁻¹, than previously reported for single crystals. It is found that the polycrystalline ZrTe5 system can be modeled as a simple semiconductor with conduction and valence bands both contributing to transport, separated by a band gap of 20 meV. This model demonstrates to first order that a simple two-band model can explain the transition from n- to p-type behavior and the cause of the anomalous resistivity peak. Combined with the experimental data, the two-band model shows that carrier concentration variation is responsible for differences in behavior between samples. Using the two-band model, the thermoelectric performance at different doping levels is predicted, finding zT = 0.2 and 0.1 for p and n type, respectively, at 300 K, and zT = 0.23 and 0.32 for p and n type at 600 K. Given the reasonably high zT that is comparable in magnitude for both n and p type, a thermoelectric device with a single compound used for both legs is feasible.
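
    The mixing arithmetic behind a two-band interpretation is compact: partial conductivities add, the net Seebeck coefficient is their conductivity-weighted average, and a sign change follows whichever band dominates. The numbers below are assumptions for illustration, not the paper's fitted parameters.

      # Illustrative two-band mixing; parameter values are assumed, not fitted.
      sigma_n, S_n = 4.0e4, -60e-6    # electron band: conductivity (S/m), Seebeck (V/K)
      sigma_p, S_p = 6.0e4, +90e-6    # hole band
      kappa, T = 1.5, 300.0           # W/m/K (value quoted above), K

      sigma = sigma_n + sigma_p
      S = (sigma_n * S_n + sigma_p * S_p) / sigma    # conductivity-weighted average
      zT = S**2 * sigma * T / kappa
      print(f"net Seebeck {S * 1e6:+.0f} uV/K, zT = {zT:.3f}")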

  8. [Effect of CPAP therapy on dynamic glucose level in OSAHS patients with newly diagnosed T2DM].

    PubMed

    Zhao, Lijun; Hui, Peilin; Xie, Yuping; Hou, Yiping; Wei, Xiaoquan; Ma, Wei; Wang, Jinfeng; Zhou, Liya; Zhang, Wenjuan

    2015-11-24

To investigate the characteristics of dynamic glucose levels in obstructive sleep apnea-hypopnea syndrome (OSAHS) patients with newly diagnosed type 2 diabetes mellitus (T2DM) and to evaluate the effect of continuous positive airway pressure (CPAP) treatment on glucose levels. A total of 65 patients with T2DM newly diagnosed by oral glucose tolerance test (OGTT) were enrolled from April 2014 to April 2015 in Gansu Provincial Hospital and divided into a simple T2DM group (n=30) and an OSAHS with T2DM group (n=35) according to the apnea-hypopnea index (AHI) monitored by polysomnography (PSG). General clinical data were collected, and glucose levels in different periods were monitored by a continuous glucose monitoring system (CGMS). Changes in glucose levels were compared between the two groups before and after CPAP treatment. Age, gender proportion, BMI, smoking and drinking history, glycosylated hemoglobin (HbA1c) and blood lipid profile showed no significant differences between the two groups. Larger neck circumference, higher waist-hip ratio (WHR), higher systolic and diastolic blood pressure, higher fasting plasma glucose (FPG) [(9.4 ± 3.2) vs (7.3 ± 2.1) mmol/L, P=0.028] and fasting insulin (FINS) [(19.2 ± 8.7) vs (11.1 ± 4.7) mU/L, P=0.044] levels, and more severe insulin resistance by homeostasis model assessment (HOMA-IR) were found in OSAHS patients with T2DM compared with the simple T2DM group. The average dynamic glucose levels over 24 hours, daytime, nocturnal and sleep time in the OSAHS with T2DM group were higher than those in the simple T2DM group (all P<0.05). The number of alarms in which the nocturnal rate of glucose change exceeded 0.1 mmol·L⁻¹·min⁻¹ was greater in the T2DM with OSAHS group than in the control group (P=0.001). After CPAP treatment, the AHI [(5.9 ± 3.6) vs (56.7 ± 11.4) events/h, P<0.001] and the average dynamic glucose levels over 24 hours, daytime, nocturnal and sleep time decreased markedly (all P<0.05); the lowest oxygen saturation (LSpO₂) increased significantly [(92.3 ± 3.7)% vs (81.5 ± 20.2)%, P<0.001]; and the number of alarms and HOMA-IR decreased (P=0.019, 0.043). In multiple linear regression analysis, the AHI (β=0.736, P<0.001) in the OSAHS with T2DM group was positively related to the average dynamic glucose level during sleep time, whereas LSpO₂ (β=-0.889, P<0.001) was negatively correlated. OSAHS patients with newly diagnosed T2DM have higher glucose levels than simple T2DM patients, and CPAP therapy clearly decreases glucose levels in newly diagnosed T2DM patients with OSAHS. AHI and LSpO₂ may influence the average dynamic glucose level during sleep time.

  9. A simple nonlocal damage model for predicting failure of notched laminates

    NASA Technical Reports Server (NTRS)

    Kennedy, T. C.; Nahan, M. F.

    1995-01-01

    The ability to predict failure loads in notched composite laminates is a requirement in a variety of structural design circumstances. A complicating factor is the development of a zone of damaged material around the notch tip. The objective of this study was to develop a computational technique that simulates progressive damage growth around a notch in a manner that allows the prediction of failure over a wide range of notch sizes. This was accomplished through the use of a relatively simple, nonlocal damage model that incorporates strain-softening. This model was implemented in a two-dimensional finite element program. Calculations were performed for two different laminates with various notch sizes under tensile loading, and the calculations were found to correlate well with experimental results.

  10. Coronal loop hydrodynamics. The solar flare observed on November 12, 1980 revisited: The UV line emission

    NASA Astrophysics Data System (ADS)

    Betta, R. M.; Peres, G.; Reale, F.; Serio, S.

    2001-12-01

We revisit a well-studied solar flare whose X-ray emission originating from a simple loop structure was observed by most of the instruments on board SMM on November 12, 1980. The X-ray emission of this flare, as observed with the XRP, was successfully modeled previously. Here we include a detailed modeling of the transition region and we compare the hydrodynamic results with the UVSP observations in two EUV lines, measured in areas smaller than the XRP rasters, covering only some portions of the flaring loop (the top and the foot-points). The single loop hydrodynamic model, which fits the evolution of the coronal lines well (those observed with the XRP and the Fe XXI 1354.1 Å line observed with the UVSP), fails to model the flux level and evolution of the O V 1371.3 Å line.

  11. Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.

    PubMed

    Peteranderl, Sonja; Oberauer, Klaus

    2018-01-01

This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of extended time for the encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.

  12. Thalamic neuron models encode stimulus information by burst-size modulation

    PubMed Central

    Elijah, Daniel H.; Samengo, Inés; Montemurro, Marcelo A.

    2015-01-01

Thalamic neurons have long been assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information about the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes in instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons. PMID:26441623
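
    Reverse correlation in miniature: average the stimulus segments preceding each burst (a burst-triggered average), the first analysis step the study applies before its information-theoretic measures. Everything below, from the decaying filter to the threshold rule that stands in for a bursting neuron, is a hypothetical toy, not either of the paper's models.

      import numpy as np

      rng = np.random.default_rng(7)

      T, win = 200_000, 100                     # samples of stimulus, analysis window
      stim = rng.standard_normal(T)

      # Hypothetical "neuron": bursts when a filtered stimulus crosses a threshold
      kernel = np.exp(-np.arange(50) / 10.0)    # assumed true feature (decaying filter)
      drive = np.convolve(stim, kernel, mode="full")[:T]
      burst_times = np.nonzero(drive > 3.0)[0]
      burst_times = burst_times[burst_times >= win]

      # Burst-triggered average recovers the shape of the underlying filter
      sta = np.mean([stim[t - win:t] for t in burst_times], axis=0)
      print(f"{burst_times.size} bursts; STA peaks {win - np.argmax(sta)} samples before onset")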

  13. Isca, v1.0: a framework for the global modelling of the atmospheres of Earth and other planets at varying levels of complexity

    NASA Astrophysics Data System (ADS)

    Vallis, Geoffrey K.; Colyer, Greg; Geen, Ruth; Gerber, Edwin; Jucker, Martin; Maher, Penelope; Paterson, Alexander; Pietschnig, Marianne; Penn, James; Thomson, Stephen I.

    2018-03-01

    Isca is a framework for the idealized modelling of the global circulation of planetary atmospheres at varying levels of complexity and realism. The framework is an outgrowth of models from the Geophysical Fluid Dynamics Laboratory in Princeton, USA, designed for Earth's atmosphere, but it may readily be extended into other planetary regimes. Various forcing and radiation options are available, from dry, time invariant, Newtonian thermal relaxation to moist dynamics with radiative transfer. Options are available in the dry thermal relaxation scheme to account for the effects of obliquity and eccentricity (and so seasonality), different atmospheric optical depths and a surface mixed layer. An idealized grey radiation scheme, a two-band scheme, and a multiband scheme are also available, all with simple moist effects and astronomically based solar forcing. At the complex end of the spectrum the framework provides a direct connection to comprehensive atmospheric general circulation models. For Earth modelling, options include an aquaplanet and configurable continental outlines and topography. Continents may be defined by changing albedo, heat capacity, and evaporative parameters and/or by using a simple bucket hydrology model. Oceanic Q fluxes may be added to reproduce specified sea surface temperatures, with arbitrary continental distributions. Planetary atmospheres may be configured by changing planetary size and mass, solar forcing, atmospheric mass, radiation, and other parameters. Examples are given of various Earth configurations as well as a giant planet simulation, a slowly rotating terrestrial planet simulation, and tidally locked and other orbitally resonant exoplanet simulations. The underlying model is written in Fortran and may largely be configured with Python scripts. Python scripts are also used to run the model on different architectures, to archive the output, and for diagnostics, graphics, and post-processing. All of these features are publicly available in a Git-based repository.

  14. Thalamic neuron models encode stimulus information by burst-size modulation.

    PubMed

    Elijah, Daniel H; Samengo, Inés; Montemurro, Marcelo A

    2015-01-01

Thalamic neurons have long been assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information about the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes in instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.

  15. English Spelling: The Simple, The Fancy, The Insane, The Tricky, and The Scrunched Up. "Great Idea" Reprint Series #612.

    ERIC Educational Resources Information Center

    McCabe, Don

    The author explains the five-pronged approach to reading and spelling through classifying words into "simple,""fancy,""insane,""tricky," and "scrunched up" categories, and reports average gains of two grade levels in one semester by junior high school students with severe behavioral problems who learned the approach. Examples of the five word…

  16. ASSESSMENT OF SPATIAL AUTOCORRELATION IN EMPIRICAL MODELS IN ECOLOGY

    EPA Science Inventory

    Statistically assessing ecological models is inherently difficult because data are autocorrelated and this autocorrelation varies in an unknown fashion. At a simple level, the linking of a single species to a habitat type is a straightforward analysis. With some investigation int...

  17. Effects of Cascaded Voltage Collapse and Protection of Many Induction Machine Loads upon Load Characteristics Viewed from Bulk Transmission System

    NASA Astrophysics Data System (ADS)

    Kumano, Teruhisa

As is well known, two of the fundamental processes that give rise to voltage collapse in power systems are the on-load tap changers of transformers and the dynamic characteristics of loads such as induction machines. It is well established that, of the two, the former produces a slower collapse and the latter a faster one. In realistic situations, however, the load level of each induction machine is not uniform, and it is to be expected that only part of the loads collapses first, followed by the collapse of each load that did not become unstable during the preceding collapses. In such situations the overall equivalent collapse behavior viewed from the bulk transmission level differs somewhat from a simple collapse driven by one aggregated induction machine. This paper studies the process of cascaded voltage collapse among many induction machines by time simulation, where the load distribution on a feeder line is modeled by several hundred induction machines and static impedance loads. It is shown that in some cases voltage collapse really does cascade among induction machines, and that the macroscopic load dynamics viewed from the upper voltage level then produce a slower collapse than the aggregated load model predicts. Also shown are the effects of machine protection of the induction machines, which likewise slows the collapse.

  18. A Simple Negative Interaction in the Positive Transcriptional Feedback of a Single Gene Is Sufficient to Produce Reliable Oscillations

    PubMed Central

    Miró-Bueno, Jesús M.; Rodríguez-Patón, Alfonso

    2011-01-01

    Negative and positive transcriptional feedback loops are present in natural and synthetic genetic oscillators. A single gene with negative transcriptional feedback needs a time delay and sufficiently strong nonlinearity in the transmission of the feedback signal in order to produce biochemical rhythms. A single gene with only positive transcriptional feedback does not produce oscillations. Here, we demonstrate that this single-gene network in conjunction with a simple negative interaction can also easily produce rhythms. We examine a model comprised of two well-differentiated parts. The first is a positive feedback created by a protein that binds to the promoter of its own gene and activates the transcription. The second is a negative interaction in which a repressor molecule prevents this protein from binding to its promoter. A stochastic study shows that the system is robust to noise. A deterministic study identifies that the dynamics of the oscillator are mainly driven by two types of biomolecules: the protein, and the complex formed by the repressor and this protein. The main conclusion of this paper is that a simple and usual negative interaction, such as degradation, sequestration or inhibition, acting on the positive transcriptional feedback of a single gene is a sufficient condition to produce reliable oscillations. One gene is enough and the positive transcriptional feedback signal does not need to activate a second repressor gene. This means that at the genetic level an explicit negative feedback loop is not necessary. The model needs neither cooperative binding reactions nor the formation of protein multimers. Therefore, our findings could help to clarify the design principles of cellular clocks and constitute a new efficient tool for engineering synthetic genetic oscillators. PMID:22205920
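
    A caricature of the two driving species can be integrated directly. The equations and parameters below are a hedged sketch of a positively autoregulated protein P being sequestered into a complex C by a finite repressor pool; they are not the paper's published system, and whether sustained oscillations appear depends on the parameter choices.

      import numpy as np
      from scipy.integrate import solve_ivp

      # P: protein activating its own transcription; C: repressor-protein complex.
      # All parameter values are illustrative placeholders.
      def rhs(t, y, a=0.1, v=5.0, K=1.0, kon=2.0, Rtot=3.0, dP=1.0, dC=0.2):
          P, C = y
          free_repressor = max(Rtot - C, 0.0)
          dPdt = a + v * P**2 / (K**2 + P**2) - kon * P * free_repressor - dP * P
          dCdt = kon * P * free_repressor - dC * C
          return [dPdt, dCdt]

      sol = solve_ivp(rhs, (0.0, 200.0), [0.05, 0.0], max_step=0.05)
      P = sol.y[0]
      print(f"P range over the run: {P.min():.3f} to {P.max():.3f}")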

  19. On the validity of the amphoteric-defect model in gallium arsenide and a criterion for Fermi-level pinning by defects

    NASA Astrophysics Data System (ADS)

    Chen, C.-H.; Tan, T. Y.

    1995-10-01

Using the theoretically calculated point-defect total-energy values of Baraff and Schlüter in GaAs, an amphoteric-defect model has been proposed by Walukiewicz to explain a large number of experimental results. The suggested amphoteric-defect system consists of two point-defect species capable of transforming into each other: the doubly negatively charged Ga vacancy V_Ga^(2-) and the triply positively charged defect complex (As_Ga + V_As)^(3+), with As_Ga being the antisite defect of an As atom occupying a Ga site and V_As being an As vacancy. When present in sufficiently high concentrations, the amphoteric defect system V_Ga^(2-)/(As_Ga + V_As)^(3+) is supposed to be able to pin the GaAs Fermi level at approximately the E_v + 0.6 eV position, which requires the net free energy of the V_Ga/(As_Ga + V_As) defect system to be at a minimum at that same Fermi-level position. We have carried out a quantitative study of the net energy of this defect system in accordance with the individual point-defect total-energy results of Baraff and Schlüter, and found that the minimum net defect-system-energy position is located at about E_v + 1.2 eV instead of the needed E_v + 0.6 eV. Therefore, the validity of the amphoteric-defect model is in doubt. We have proposed a simple criterion for determining the Fermi-level pinning position in the deeper part of the GaAs band gap due to two oppositely charged point-defect species, which should be useful in the future.

  20. Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.

    1977-08-01

The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.

  1. Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.

    PubMed

    Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il

    2016-01-01

This study aimed to determine the correlation between volumetric parameters of the second, third, and fourth cervical vertebrae, derived from cone beam computed tomography images, and skeletal maturation stages, and to propose a new formula for predicting skeletal maturation by regression analysis. We obtained estimates of skeletal maturation level from hand-wrist radiographs and volume parameters of the second, third, and fourth cervical vertebral bodies from 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square used the fourth-cervical-vertebra volume as the independent variable, with a variance inflation factor of less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae obtained by cone beam computed tomography are useful in regression models. The derived regression model has potential for clinical application, as it enables a simple and quantitative analysis to evaluate skeletal maturation level.

  2. Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages

    PubMed Central

    Choi, Youn-Kyung; Kim, Jinmi; Maki, Koutaro; Ko, Ching-Chang

    2016-01-01

This study aimed to determine the correlation between volumetric parameters of the second, third, and fourth cervical vertebrae, derived from cone beam computed tomography images, and skeletal maturation stages, and to propose a new formula for predicting skeletal maturation by regression analysis. We obtained estimates of skeletal maturation level from hand-wrist radiographs and volume parameters of the second, third, and fourth cervical vertebral bodies from 102 Japanese patients (54 women and 48 men, 5–18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square used the fourth-cervical-vertebra volume as the independent variable, with a variance inflation factor of less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae obtained by cone beam computed tomography are useful in regression models. The derived regression model has potential for clinical application, as it enables a simple and quantitative analysis to evaluate skeletal maturation level. PMID:27340668

  3. The Freter model: a simple model of biofilm formation.

    PubMed

    Jones, Don; Kojouharov, Hristo V; Le, Dung; Smith, Hal

    2003-08-01

A simple, conceptual model of biofilm formation, due to R. Freter et al. (1983), is studied analytically and numerically in both a CSTR and a PFR. Two steady-state regimes are identified: complete washout of the microbes from the reactor, and successful colonization of both the wall and the bulk fluid. For any particular set of parameter values one of these is stable, and sharp, explicit conditions are given for the stability of each. The effects of adding an antimicrobial agent to the CSTR are examined.
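
    A hedged, simplified caricature of such a chemostat model (the original additionally saturates wall colonization, which is omitted here) can be integrated to see the two regimes: washout when dilution outpaces growth, colonization otherwise. All parameter values are illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp

      # S: substrate; u: suspended bacteria; w: wall-attached bacteria.
      def freter_like(t, y, D=0.3, S0=1.0, mu=1.0, K=0.2, alpha=0.6, beta=0.1, Y=0.8):
          S, u, w = y
          growth = mu * S / (K + S)                   # Monod growth rate
          dS = D * (S0 - S) - growth * (u + w) / Y    # dilution and consumption
          du = (growth - D - alpha) * u + beta * w    # growth, washout, attachment
          dw = (growth - beta) * w + alpha * u        # wall cells escape washout
          return [dS, du, dw]

      sol = solve_ivp(freter_like, (0.0, 200.0), [1.0, 0.01, 0.0], max_step=0.5)
      S, u, w = sol.y[:, -1]
      print(f"steady state: S={S:.3f}, u={u:.3f}, w={w:.3f} (colonization if u, w > 0)")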

  4. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.

  5. Cellular Automata with Anticipation: Examples and Presumable Applications

    NASA Astrophysics Data System (ADS)

    Krushinsky, Dmitry; Makarenko, Alexander

    2010-11-01

One of the most promising new methodologies for modelling is the so-called cellular automata (CA) approach. According to this paradigm, models are built from simple elements connected into regular structures with local interaction between neighbours. The patterns of connections usually have a simple geometry (lattices). A classical example of CA is the game `Life' by J. Conway. This paper presents two examples of CA with the anticipation property: a modification of the game `Life' and a cellular model of crowd movement.
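
    For readers who want the canonical starting point, one synchronous update of Conway's `Life' on a periodic lattice fits in a few lines; this is the generic rule set, not the anticipatory variants proposed in the paper.

      import numpy as np

      def life_step(grid):
          """One synchronous update of Conway's `Life' on a toroidal lattice."""
          # Count the eight neighbours of every cell via shifted copies of the grid.
          n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0))
          return (n == 3) | (grid & (n == 2))   # birth on 3, survival on 2 or 3

      rng = np.random.default_rng(3)
      grid = rng.random((32, 32)) < 0.25        # random initial boolean state
      for _ in range(10):
          grid = life_step(grid)
      print(f"live cells after 10 steps: {grid.sum()}")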

  6. Greenhouse effect: temperature of a metal sphere surrounded by a glass shell and heated by sunlight

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuc H.; Matzner, Richard A.

    2012-01-01

    We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the z-axis. This development is a generalization of the simple treatment of the greenhouse effect given by Kittel and Kroemer (1980 Thermal Physics (San Francisco: Freeman)) and can serve as a very simple model demonstrating the much more complex Earth greenhouse effect. Solution of the model problem provides an excellent pedagogical tool at the Junior/Senior undergraduate level.
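
    The textbook-style balance is easy to reproduce: with a shell transparent to sunlight but opaque to infrared, half of the re-radiated power returns to the sphere, raising its equilibrium temperature by a factor of 2^(1/4). The sketch below assumes unit emissivities and a generic solar constant, so it ignores the tungsten and glass material properties treated in the paper.

      # Idealized energy balance in the Kittel-Kroemer spirit.
      sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
      S = 1361.0         # solar flux, W m^-2 (illustrative choice, 1 AU value)

      # Bare sphere: absorbs over its cross-section, radiates over its full surface.
      T_bare = (S / (4 * sigma)) ** 0.25
      # Shell absorbs the sphere's infrared and returns half of it inward.
      T_shell = 2 ** 0.25 * T_bare
      print(f"bare sphere: {T_bare:.0f} K, with glass shell: {T_shell:.0f} K")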

  7. Simple rules for a "simple" nervous system? Molecular and biomathematical approaches to enteric nervous system formation and malformation.

    PubMed

    Newgreen, Donald F; Dufour, Sylvie; Howard, Marthe J; Landman, Kerry A

    2013-10-01

We review morphogenesis of the enteric nervous system from migratory neural crest cells, and defects of this process such as Hirschsprung disease, centering on cell motility and assembly, and cell adhesion and extracellular matrix molecules, along with cell proliferation and growth factors. We then review continuum and agent-based (cellular automata) models with rules of cell movement and logistical proliferation. Both movement and proliferation at the individual cell level are modeled with stochastic components from which stereotyped outcomes emerge at the population level. These models reproduced the wave-like colonization of the intestine by enteric neural crest cells, and several new properties emerged, such as colonization by frontal expansion, which were later confirmed biologically. These models predict a surprising level of clonal heterogeneity both in terms of number and distribution of daughter cells. Biologically, migrating cells form stable chains made up of unstable cells, but this is not seen in the initial model. We outline additional rules for cell differentiation into neurons, axon extension, cell-axon and cell-cell adhesions, chemotaxis and repulsion which can reproduce chain migration. After the migration stage, the cells re-arrange as a network of ganglia. Changes in cell adhesion molecules parallel this, and we describe additional rules based on Steinberg's Differential Adhesion Hypothesis, reflecting changing levels of adhesion in neural crest cells and neurons. This was able to reproduce enteric ganglionation in a model. Mouse mutants with disturbances of enteric nervous system morphogenesis are discussed, and these suggest future refinement of the models. The modeling suggests that a relatively simple set of cell behavioral rules could account for complex patterns of morphogenesis. The model has allowed the proposal that Hirschsprung disease is mostly an enteric neural crest cell proliferation defect, not a defect of cell migration. In addition, the model suggests explanations for the zonal and skip-segment variants of Hirschsprung disease, and gives a novel stochastic explanation for the observed discordancy of Hirschsprung disease in identical twins. © 2013 Elsevier Inc. All rights reserved.

  8. Impact of electronic coupling, symmetry, and planarization on one- and two-photon properties of triarylamines with one, two, or three diarylboryl acceptors.

    PubMed

    Makarov, Nikolay S; Mukhopadhyay, Sukrit; Yesudas, Kada; Brédas, Jean-Luc; Perry, Joseph W; Pron, Agnieszka; Kivala, Milan; Müllen, Klaus

    2012-04-19

    We have performed a study of the one- and two-photon absorption properties of a systematically varied series of triarylamino-compounds with one, two, or three attached diarylborane arms arranged in linear dipolar, bent dipolar, and octupolar geometries. Two-photon fluorescence excitation spectra were measured over a wide spectral range with femtosecond laser pulses. We found that on going from the single-arm to the two- and three-arm systems, the peak in two-photon absorption (2PA) cross-section is suppressed by factors of 3-11 for the lowest excitonic level associated with the electronic coupling of the arms, whereas it is enhanced by factors of 4-8 for the higher excitonic level. These results show that the coupling of arms redistributes the 2PA cross-section between the excitonic levels in a manner that strongly favors the higher-energy excitonic level. The experimental data on one- and two-photon cross-sections, ground- and excited-state transition dipole moments, and permanent dipole moment differences between the ground and the lowest excited states were compared to the results obtained from a simple Frenkel exciton model and from highly correlated quantum-chemical calculations. It has been found that planarization of the structure around the triarylamine moiety leads to a sizable increase in peak 2PA cross-section for the lowest excitonic level of the two-arm system, whereas for the three-arm system, the corresponding peak was weakened and shifted to lower energy. Our studies show the importance of the interarm coupling, number of arms, and structural planarity on both the enhancement and the suppression of two-photon cross-sections in multiarm molecules. © 2012 American Chemical Society

  9. A finite area scheme for shallow granular flows on three-dimensional surfaces

    NASA Astrophysics Data System (ADS)

    Rauter, Matthias

    2017-04-01

Shallow granular flow models have become a popular tool for the estimation of natural hazards, such as landslides, debris flows and avalanches. The shallowness of the flow allows the three-dimensional governing equations to be reduced to a quasi-two-dimensional system. Three-dimensional flow fields are replaced by their depth-integrated two-dimensional counterparts, which yields a robust and fast method [1]. A solution for a simple shallow granular flow model, based on the so-called finite area method [3], is presented. The finite area method is an adaption of the finite volume method [4] to two-dimensional curved surfaces in three-dimensional space. This method handles the three-dimensional basal topography in a simple way, making the model suitable for arbitrary (but mildly curved) topography, such as natural terrain. Furthermore, the implementation in the open-source software OpenFOAM [4] is shown. OpenFOAM is a popular computational fluid dynamics application, designed so that the top-level code mimics the mathematical governing equations. This makes the code easy to read and extendable to more sophisticated models. Finally, some hints on how to get started with the code and how to extend the basic model will be given. I gratefully acknowledge the financial support by the OEAW project "beyond dense flow avalanches". Savage, S. B. & Hutter, K. 1989 The motion of a finite mass of granular material down a rough incline. Journal of Fluid Mechanics 199, 177-215. Ferziger, J. & Peric, M. 2002 Computational methods for fluid dynamics, 3rd edn. Springer. Tukovic, Z. & Jasak, H. 2012 A moving mesh finite volume interface tracking method for surface tension dominated interfacial fluid flow. Computers & fluids 55, 70-84. Weller, H. G., Tabor, G., Jasak, H. & Fureby, C. 1998 A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in physics 12(6), 620-631.

  10. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
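
    A minimal worked example of the least-squares mechanics described above, on a small synthetic dataset:

      import numpy as np

      rng = np.random.default_rng(5)

      # Method of least squares on a small synthetic clinical-style dataset
      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])        # predictor variable
      y = 2.0 + 0.5 * x + rng.normal(0, 0.3, x.size)      # outcome with noise

      slope, intercept = np.polyfit(x, y, 1)              # least-squares line
      r = np.corrcoef(x, y)[0, 1]
      print(f"y = {intercept:.2f} + {slope:.2f} x,  r^2 = {r**2:.3f}")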

  11. Simple Analytic Collisional Rates for non-LTE Vibrational Populations in Astrophysical Environments: the Cases of Circumstellar SiO Masers and Shocked H2

    NASA Astrophysics Data System (ADS)

    Bieniek, Ronald

    2008-05-01

    Rates for collisionally induced transitions between molecular vibrational levels are important in modeling a variety of non-LTE processes in astrophysical environments. Two examples are SiO masering in circumstellar envelopes in certain late-type stars [1] and the vibrational populations of molecular hydrogen in shocked interstellar medium [cf 2]. A simple exponential-potential model of molecular collisions leads to a two-parameter analytic expression for state-to-state and thermally averaged rates for collisionally induced vibrational-translational (VT) transitions in diatomic molecules [3,4]. The thermally averaged rates predicted by this formula have been shown to be in excellent numerical agreement with absolute experimental and quantum mechanical rates over large temperature ranges and initial vibrational excitation levels in a variety of species, e.g., OH, O2, N2 [3] and even for the rate of H2(v=1)+H2, which changes by five orders of magnitude in the temperature range 50-2000 K [4]. Analogous analytic rates will be reported for vibrational transitions in SiO due to collisions with H2 and compared to the numerical fit of quantum-mechanical rates calculated by Bieniek and Green [5]. [1] Palov, A.P., Gray, M.D., Field, D., & Balint-Kurti, G.G. 2006, ApJ, 639, 204. [2] Flower, D. 2007, Molecular Collisions in the Interstellar Medium (Cambridge: Cambridge Univ. Press) [3] Bieniek, R.J. & Lipson, S.J. 1996, Chem. Phys. Lett. 263, 276. [4] Bieniek, R.J. 2006, Proc. NASA LAW (Lab. Astrophys. Workshop) 2006, 299; http://www.physics.unlv.edu/labastro/nasalaw2006proceedings.pdf. [5] Bieniek, R.J., & Green, S. 1983, ApJ, 265, L29 and 1983, ApJ, 270, L101.

  12. Projecting global land-use change and its effect on ecosystem service provision and biodiversity with simple models.

    PubMed

    Nelson, Erik; Sander, Heather; Hawthorne, Peter; Conte, Marc; Ennaanay, Driss; Wolny, Stacie; Manson, Steven; Polasky, Stephen

    2010-12-15

    As the global human population grows and its consumption patterns change, additional land will be needed for living space and agricultural production. A critical question facing global society is how to meet growing human demands for living space, food, fuel, and other materials while sustaining ecosystem services and biodiversity [1]. We spatially allocate two scenarios of 2000 to 2015 global areal change in urban land and cropland at the grid cell-level and measure the impact of this change on the provision of ecosystem services and biodiversity. The models and techniques used to spatially allocate land-use/land-cover (LULC) change and evaluate its impact on ecosystems are relatively simple and transparent [2]. The difference in the magnitude and pattern of cropland expansion across the two scenarios engenders different tradeoffs among crop production, provision of species habitat, and other important ecosystem services such as biomass carbon storage. For example, in one scenario, 5.2 grams of carbon stored in biomass is released for every additional calorie of crop produced across the globe; under the other scenario this tradeoff rate is 13.7. By comparing scenarios and their impacts we can begin to identify the global pattern of cropland and irrigation development that is significant enough to meet future food needs but has less of an impact on ecosystem service and habitat provision. Urban area and croplands will expand in the future to meet human needs for living space, livelihoods, and food. In order to jointly provide desired levels of urban land, food production, and ecosystem service and species habitat provision the global society will have to become much more strategic in its allocation of intensively managed land uses. Here we illustrate a method for quickly and transparently evaluating the performance of potential global futures.

  13. Numerical prediction of fire resistance of RC beams

    NASA Astrophysics Data System (ADS)

    Serega, Szymon; Wosatko, Adam

    2018-01-01

Fire resistance is an important aspect of the strength and durability of structural members. A simple but effective tool for investigating multi-span reinforced concrete beams exposed to fire is discussed in the paper. Assumptions and simplifications of the theory as well as numerical aspects are briefly reviewed. Two steps of nonlinear finite element analysis and two levels of observation are distinguished. The first step is the solution of the transient heat transfer problem in a representative two-dimensional reinforced-concrete cross-section of the beam. The second is a nonlinear mechanical analysis of the whole beam. All spans are uniformly loaded, but an additional time-dependent thermal load due to fire acts on selected ones. Global changes in the curvature and bending-moment functions induce deterioration of the stiffness. Benchmarks are shown to confirm the correctness of the model.
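
    The first analysis step can be illustrated in one dimension: explicit finite differences for transient conduction through a concrete strip whose exposed face follows the ISO 834 standard fire curve. The material data below are typical textbook values, not the paper's inputs, and real analyses use temperature-dependent properties over a full 2D cross-section.

      import numpy as np

      L, nx = 0.3, 61                     # 300 mm deep strip, 5 mm grid
      dx = L / (nx - 1)
      alpha = 8.3e-7                      # thermal diffusivity of concrete, m^2/s (typical)
      dt = 0.4 * dx**2 / alpha            # stable explicit time step
      T = np.full(nx, 20.0)               # initial temperature, deg C

      t, t_end = 0.0, 3600.0              # simulate one hour of fire exposure
      while t < t_end:
          T[0] = 20.0 + 345.0 * np.log10(8.0 * t / 60.0 + 1.0)  # ISO 834 fire curve
          T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          T[-1] = T[-2]                   # insulated far face
          t += dt

      print(f"temperature 50 mm from the exposed face: {T[10]:.0f} deg C")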

  14. A Simple Model for a SARS Epidemic

    ERIC Educational Resources Information Center

    Ang, Keng Cheng

    2004-01-01

    In this paper, we examine the use of an ordinary differential equation in modelling the SARS outbreak in Singapore. The model provides an excellent example of using mathematics in a real life situation. The mathematical concepts involved are accessible to students with A level Mathematics backgrounds. Data for the SARS epidemic in Singapore are…

  15. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.

  16. Testing the Two-Layer Model for Correcting Clear Sky Reflectance near Clouds

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Evans, Frank; Varnai, Tamas; Levy, Rob

    2015-01-01

A two-layer model (2LM) was developed in our earlier studies to estimate the clear-sky reflectance enhancement due to cloud-molecular radiative interaction for MODIS observations at 0.47 micrometers. Recently, we extended the model to include cloud-surface and cloud-aerosol radiative interactions. We use LES/SHDOM-simulated 3D radiation fields as the truth to test the 2LM reflectance enhancement at 0.47 micrometers. We find that the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting the physics of the model is correct; the cloud-molecular interaction alone accounts for 70 percent of the enhancement; the cloud-surface interaction accounts for 16 percent; and the cloud-aerosol interaction accounts for an additional 13 percent. We conclude that the 2LM is simple to apply and unbiased.

  17. Ferromagnetism in the Hubbard Model with a Gapless Nearly-Flat Band

    NASA Astrophysics Data System (ADS)

    Tanaka, Akinori

    2018-01-01

We present a version of the Hubbard model with a gapless nearly-flat lowest band which exhibits ferromagnetism in two or more dimensions. The model is defined on a lattice obtained by placing a site on each edge of the hypercubic lattice, and electron hopping is assumed to be only between nearest and next-nearest neighbor sites. The lattice, where all the sites are identical, is simple, and the corresponding single-electron band structure, where two cosine-type bands touch without an energy gap, is also simple. We prove that the ground state of the model is unique and ferromagnetic at half-filling of the lower band, if the lower band is nearly flat and the strength of on-site repulsion is larger than a certain value that is independent of the lattice size. This is the first example of ferromagnetism in three-dimensional non-singular models with a gapless band structure.

  18. Estimating the Soil Temperature Profile from a Single Depth Observation: A Simple Empirical Heatflow Solution

    NASA Technical Reports Server (NTRS)

    Holmes, Thomas; Owe, Manfred; deJeu, Richard

    2007-01-01

Two data sets of experimental field observations with a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux cannot be used to model the extreme variations in temperature that occur near the surface. This paper proposes a simple approach for modeling the surface soil temperature profile from a single depth observation. This approach consists of two parts: 1) modeling an instantaneous ground heat flux profile based on net radiation and the ground heat flux at 5 cm depth; 2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near-surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.

  19. Alternative-splicing-mediated gene expression

    NASA Astrophysics Data System (ADS)

    Wang, Qianliang; Zhou, Tianshou

    2014-01-01

    Alternative splicing (AS) is a fundamental process during gene expression and has been found to be ubiquitous in eukaryotes. However, how AS impacts gene expression levels, both quantitatively and qualitatively, remains to be fully explored. Here, we analyze two common models of gene expression, each incorporating a simple splicing mechanism in which a pre-mRNA is spliced into one of two mature mRNA isoforms in a probabilistic manner. In the constitutive expression case, we show that the steady-state molecular numbers of the two mature mRNA isoforms follow mutually independent Poisson distributions. In the bursting expression case, we demonstrate that the steady-state distributions of the two mature mRNA isoforms, which in general are not mutually independent, have tail decay characterized by the product of the mean burst size and the splicing probability. In both cases, we find that AS can efficiently modulate both the variability (measured by the variance) and the noise level of the total mature mRNA; in particular, the latter is always lower than the noise level of the pre-mRNA, implying that AS always reduces noise. These results altogether reveal that AS is a mechanism for efficiently controlling gene expression noise.
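    The constitutive case lends itself to a minimal stochastic sketch. The rate constants below (transcription rate k, splicing rate s, isoform-1 probability p, common mature-mRNA degradation rate d) are illustrative assumptions, not values from the paper; the run illustrates the independent-Poisson result, with isoform means near p·k/d and (1−p)·k/d and Fano factors near one.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    k, s, p, d = 10.0, 5.0, 0.3, 1.0   # illustrative rate constants

    def gillespie(t_end=5000.0, t_burn=100.0, dt_sample=1.0):
        """Exact simulation of: DNA -> pre-mRNA -> (isoform 1 or isoform 2) -> decay."""
        t, pre, m1, m2 = 0.0, 0, 0, 0
        t_next, samples = t_burn, []
        while t < t_end:
            rates = [k, s * pre, d * m1, d * m2]
            total = sum(rates)
            tau = rng.exponential(1.0 / total)
            while t_next <= t + tau and t_next < t_end:
                samples.append((pre, m1, m2))    # sample state on a regular grid
                t_next += dt_sample
            t += tau
            r = rng.uniform(0.0, total)
            if r < rates[0]:
                pre += 1                         # transcription
            elif r < rates[0] + rates[1]:
                pre -= 1                         # splicing of one pre-mRNA
                if rng.uniform() < p:
                    m1 += 1
                else:
                    m2 += 1
            elif r < rates[0] + rates[1] + rates[2]:
                m1 -= 1                          # isoform-1 decay
            else:
                m2 -= 1                          # isoform-2 decay
        return np.array(samples)

    x = gillespie()
    for name, v in [("pre-mRNA", x[:, 0]), ("isoform 1", x[:, 1]),
                    ("isoform 2", x[:, 2]), ("total mature", x[:, 1] + x[:, 2])]:
        print(f"{name:>12s}: mean={v.mean():6.2f}  Fano={v.var() / v.mean():5.2f}")
    ```

    With these constants the mature means are approximately p·k/d = 3 and (1−p)·k/d = 7, and the larger mean of the total mature mRNA gives it a lower Poisson noise level (CV² = 1/mean) than the pre-mRNA pool.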

  20. Some research perspectives in galloping phenomena: critical conditions and post-critical behavior

    NASA Astrophysics Data System (ADS)

    Piccardo, Giuseppe; Pagnini, Luisa Carlotta; Tubino, Federica

    2015-01-01

    This paper gives an overview of wind-induced galloping phenomena, describing their manifold features and the many advances that have taken place in this field. Starting from a quasi-steady model of the aeroelastic forces exerted by the wind on a rigid cylinder with three degrees of freedom (two translations and a rotation in the plane of the model cross-section), the fluid-structure interaction forces are described in simple terms, yet in a form suitable for the complexity of mechanical systems, both in the linear and in the nonlinear regime, thus allowing investigation of a wide range of structural typologies and their dynamic behavior. The paper is driven by some key concerns. A great effort is made in underlining the strengths and weaknesses of the classic quasi-steady theory, as well as of the simplifying assumptions that are introduced in order to investigate such complex phenomena through simple engineering models. A second aspect, which is crucial to the authors' approach, is to take into account and harmonize the engineering, physical and mathematical perspectives in an interdisciplinary way, something which does not happen often. The authors underline that the quasi-steady approach is an irreplaceable tool, though approximate and simple, for performing engineering analyses; at the same time, the study of this phenomenon gives rise to numerous problems that make the application of high-level mathematical solutions particularly attractive. Finally, the paper discusses a wide range of features of galloping theory and its practical use which deserve further attention and refinement, pointing to the great potential represented by new fields of application and advanced analysis tools.
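    For orientation, in the simplest single-degree-of-freedom transverse case the linear quasi-steady theory discussed above reduces to the classical Den Hartog criterion; this standard result is quoted here as a worked anchor and is not stated explicitly in the abstract:

    \[
      \left.\frac{\partial C_L}{\partial \alpha}\right|_{\alpha = 0} + C_D < 0 ,
    \]

    where C_L and C_D are the quasi-steady lift and drag coefficients of the cross-section and α is the apparent angle of attack induced by the transverse velocity. When the inequality holds, the aerodynamic damping is negative; if it exceeds the structural damping in magnitude, small transverse oscillations grow and galloping sets in.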

  1. Easy-to-use software tools for teaching the basics, design and applications of optical components and systems

    NASA Astrophysics Data System (ADS)

    Gerhard, Christoph; Adams, Geoff

    2015-10-01

    Geometric optics is at the heart of optics teaching. Some of us may remember using pins and string to test the simple lens equation at school. Matters get more complex at undergraduate/postgraduate levels as we are introduced to paraxial rays, real rays, wavefronts, aberration theory and much more. Software is essential for the later stages, and the right software can profitably be used even at school. We present two free PC programs which have been widely used in optics teaching and have been further developed in close cooperation with lecturers/professors in order to address the current content of the curricula for optics, photonics and lasers in higher education. PreDesigner is a single-thin-lens modeller. It illustrates the simple lens law with construction rays and then allows the user to include field size and aperture. Sliders can be used to adjust key values with instant graphical feedback. This tool thus represents a helpful teaching medium for the visualization of basic interrelations in optics. WinLens3DBasic can model multiple thin or thick lenses with real glasses. It shows the system foci, principal planes and nodal points, gives paraxial ray-trace values, details the Seidel aberrations, and offers real ray tracing and many forms of analysis. It is simple to reverse lenses and model tilts and decenters. This tool therefore provides a good base for learning lens design fundamentals. Much work has been put into offering these features in ways that are easy to use and that offer opportunities to enhance the student's background understanding.
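    As a concrete anchor for the simple lens law that PreDesigner animates, here is a minimal sketch; the sign convention and function names are our own assumptions, not the program's internals.

    ```python
    def image_distance(f: float, u: float) -> float:
        """Thin-lens equation 1/v - 1/u = 1/f; distances in mm, u < 0 for a real object."""
        return 1.0 / (1.0 / f + 1.0 / u)

    def magnification(u: float, v: float) -> float:
        return v / u

    u, f = -200.0, 50.0            # object 200 mm left of a 50 mm converging lens
    v = image_distance(f, u)
    print(f"v = {v:.1f} mm, m = {magnification(u, v):+.3f}")
    # v = +66.7 mm (real image right of the lens), m = -0.333 (inverted, demagnified)
    ```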

  2. A computational method for optimizing fuel treatment locations

    Treesearch

    Mark A. Finney

    2006-01-01

    Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...

  3. Calibration of Response Data Using MIRT Models with Simple and Mixed Structures

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2012-01-01

    It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…

  4. Speededness and Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Xiong, Xinhui

    2013-01-01

    Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…

  5. A molecule-centered method for accelerating the calculation of hydrodynamic interactions in Brownian dynamics simulations containing many flexible biomolecules

    PubMed Central

    Elcock, Adrian H.

    2013-01-01

    Inclusion of hydrodynamic interactions (HIs) is essential in simulations of biological macromolecules that treat the solvent implicitly if the macromolecules are to exhibit correct translational and rotational diffusion. The present work describes the development and testing of a simple approach aimed at allowing more rapid computation of HIs in coarse-grained Brownian dynamics simulations of systems that contain large numbers of flexible macromolecules. The method combines a complete treatment of intramolecular HIs with an approximate treatment of the intermolecular HIs which assumes that the molecules are effectively spherical; all of the HIs are calculated at the Rotne-Prager-Yamakawa level of theory. When combined with Fixman's Chebyshev polynomial method for calculating correlated random displacements, the proposed method is simple to program but sufficiently fast to make it computationally viable to include HIs in large-scale simulations. Test calculations performed on very coarse-grained models of the pyruvate dehydrogenase (PDH) E2 complex and on oligomers of ParM (ranging in size from 1 to 20 monomers) indicate that the method reproduces the translational diffusion behavior seen in more complete HI simulations surprisingly well; the method performs less well at capturing rotational diffusion, but its discrepancies diminish with increasing size of the simulated assembly. Simulations of residue-level models of two tetrameric proteins demonstrate that the method also works well when more structurally detailed models are used in the simulations. Finally, test simulations of systems containing up to 1024 coarse-grained PDH molecules indicate that the proposed method rapidly becomes more efficient than the conventional BD approach, in which correlated random displacements are obtained via a Cholesky decomposition of the complete diffusion tensor. PMID:23914146
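    The conventional step that the proposed method accelerates can be sketched directly: build the Rotne-Prager-Yamakawa (RPY) tensor, factor it, and draw correlated random displacements. Units, bead radius, and the four-bead configuration below are illustrative.

    ```python
    import numpy as np

    kT, eta, a, dt = 1.0, 1.0, 1.0, 1e-3         # illustrative units; bead radius a

    def rpy_tensor(pos):
        """Rotne-Prager-Yamakawa diffusion tensor for non-overlapping beads (r >= 2a)."""
        n = len(pos)
        D = np.zeros((3 * n, 3 * n))
        d0 = kT / (6 * np.pi * eta * a)           # Stokes-Einstein self-mobility
        for i in range(n):
            D[3*i:3*i+3, 3*i:3*i+3] = d0 * np.eye(3)
            for j in range(i + 1, n):
                r = pos[j] - pos[i]
                d = np.linalg.norm(r)
                rr = np.outer(r, r) / d**2
                blk = d0 * (3 * a / (4 * d)) * (
                    (1 + 2 * a**2 / (3 * d**2)) * np.eye(3)
                    + (1 - 2 * a**2 / d**2) * rr)
                D[3*i:3*i+3, 3*j:3*j+3] = blk
                D[3*j:3*j+3, 3*i:3*i+3] = blk
        return D

    pos = np.array([[0., 0., 0.], [5., 0., 0.], [0., 6., 0.], [0., 0., 7.]])
    D = rpy_tensor(pos)
    L = np.linalg.cholesky(D)                     # the O(n^3) step the paper avoids
    rng = np.random.default_rng(1)
    dx = np.sqrt(2 * dt) * L @ rng.standard_normal(3 * len(pos))
    print(dx.reshape(-1, 3))                      # correlated displacements, <dx dx^T> = 2 D dt
    ```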

  6. A simple rainfall-runoff model based on hydrological units applied to the Teba catchment (south-east Spain)

    NASA Astrophysics Data System (ADS)

    Donker, N. H. W.

    2001-01-01

    A hydrological model (YWB, yearly water balance) has been developed to model the daily rainfall-runoff relationship of the 202 km2 Teba river catchment, located in semi-arid south-eastern Spain. The period of available data (1976-1993) includes some very rainy years with intensive storms (responsible for flooding parts of the town of Malaga) as well as some very dry years. The YWB model is in essence a simple tank model in which the catchment is subdivided into a limited number of meaningful hydrological units. Instead of generating surface runoff per unit from infiltration excess, runoff is made the result of storage excess. Actual evapotranspiration is obtained by means of curves, included in the software, representing the ratio of actual to potential evapotranspiration as a function of soil moisture content for three soil texture classes. The total runoff generated is split between base flow and surface runoff according to a given baseflow index. The two components are routed separately and subsequently joined. A large number of sequential years can be processed, and the results of each year are summarized in a water balance table and a daily rainfall-runoff time series. An attempt has been made to restrict the amount of input data to a minimum. Interactive manual calibration is advocated in order to allow better incorporation of field evidence and the experience of the model user. Field observations allowed an approximate calibration at the hydrological-unit level.

  7. Development and Interlaboratory Validation of a Simple Screening Method for Genetically Modified Maize Using a ΔΔC(q)-Based Multiplex Real-Time PCR Assay.

    PubMed

    Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko

    2016-04-19

    A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the method currently used in Japan is time-consuming and requires generation of a calibration curve and an experimental conversion factor (C(f)) value. We developed a simple screening method that avoids the need for a calibration curve and C(f) value. In this method, ΔC(q) values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔC(q) value between the analytical and control samples is used as the criterion for determining whether the GM organism content of an analytical sample is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable, independently, with at least the two models of PCR instruments used in this study.
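    The decision rule reduces to simple arithmetic on quantification-cycle values. The sketch below is a hedged paraphrase: the function names, the sign convention, and the zero decision threshold are illustrative assumptions, not the validated protocol values.

    ```python
    def delta_cq(cq_target: float, cq_endogenous: float) -> float:
        """Cq difference between the GM target and the endogenous reference gene."""
        return cq_target - cq_endogenous

    def below_labeling_threshold(cq_t_sample, cq_e_sample,
                                 cq_t_control, cq_e_control,
                                 cutoff=0.0) -> bool:
        """True if the sample appears below the labeling threshold level.

        A positive ddcq means the target amplifies later (relative to the
        endogenous gene) in the sample than in the threshold-level control,
        i.e. the sample contains less GM material than the control.
        """
        ddcq = (delta_cq(cq_t_sample, cq_e_sample)
                - delta_cq(cq_t_control, cq_e_control))
        return ddcq > cutoff

    # Sample dCq = 6.1, control dCq = 4.4, ddCq = +1.7 -> below threshold
    print(below_labeling_threshold(28.1, 22.0, 26.5, 22.1))
    ```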

  8. FARSITE: Fire Area Simulator-model development and evaluation

    Treesearch

    Mark A. Finney

    1998-01-01

    A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.

  9. Stratospheric General Circulation with Chemistry Model (SGCCM)

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.

    1990-01-01

    In the past two years constituent transport and chemistry experiments have been performed using both simple single constituent models and more complex reservoir species models. Winds for these experiments have been taken from the data assimilation effort, Stratospheric Data Analysis System (STRATAN).

  10. Simulation model calibration and validation : phase II : development of implementation handbook and short course.

    DOT National Transportation Integrated Search

    2006-01-01

    A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...

  11. Comparison analysis between filtered back projection and algebraic reconstruction technique on microwave imaging

    NASA Astrophysics Data System (ADS)

    Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari

    2018-02-01

    The number of victims of acute cancers and tumors grows each year, and cancer has become one of the leading causes of death worldwide. Cancer or tumor tissue cells are cells that grow abnormally, taking over and damaging the surrounding tissues. In their early stages, cancers and tumors often show no definite symptoms and can attack tissues deep inside the body, where they are not identifiable by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essential to anticipate the further development of a cancer or tumor. Among the available modalities, microwave imaging is considered a cheap, simple, and portable method. There are at least two simple image reconstruction algorithms, Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in several common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (a phantom) with two different dielectric distributions. We address two performance comparisons, qualitative and quantitative. Qualitative analysis includes the smoothness of the image and the success in distinguishing dielectric differences by visual inspection. Quantitative analysis includes histograms, the Structural Similarity Index (SSIM), the Mean Squared Error (MSE), and the Peak Signal-to-Noise Ratio (PSNR). As a result, the quantitative metrics of FBP are better than those of ART; however, ART is more capable of distinguishing two different dielectric values than FBP, owing to its higher contrast and wider grayscale distribution.
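    Two of the quantitative metrics used in the comparison are one-liners and are sketched below (SSIM and histogram analysis are omitted); the phantom and the noisy stand-in reconstruction are illustrative, not data from the paper.

    ```python
    import numpy as np

    def mse(ref: np.ndarray, img: np.ndarray) -> float:
        """Mean squared error between a reference image and a reconstruction."""
        return float(np.mean((ref.astype(float) - img.astype(float)) ** 2))

    def psnr(ref: np.ndarray, img: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB for a given peak gray level."""
        m = mse(ref, img)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

    phantom = np.zeros((64, 64))
    phantom[20:40, 20:40] = 200.0        # inclusion with a different dielectric value
    recon = phantom + np.random.default_rng(2).normal(0.0, 10.0, phantom.shape)
    print(f"MSE = {mse(phantom, recon):.1f}, PSNR = {psnr(phantom, recon):.1f} dB")
    ```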

  12. Contamination of water supplies by volcanic ashfall: A literature review and simple impact modelling

    NASA Astrophysics Data System (ADS)

    Stewart, C.; Johnston, D. M.; Leonard, G. S.; Horwell, C. J.; Thordarson, T.; Cronin, S. J.

    2006-11-01

    Volcanic ash is the most widely-distributed product of explosive volcanic eruptions, and can disrupt vital infrastructure on a large scale. Previous studies of effects of ashfall on natural waters and water supplies have focused mainly on the consequences of increased levels of turbidity (ash suspended in water), acidity and fluoride, with very little attention paid to other contaminants associated with volcanic ash. The aims of this paper are twofold: firstly, to review previous studies of the effects of volcanic ashfall on water supplies and identify information gaps; and secondly, to propose a simple model for predicting effects of ashfall on water supplies using available information on ash composition. We reviewed reported impacts of historic eruptions on water supplies, drawing on case studies from New Zealand, Vanuatu, Argentina, the USA, Costa Rica, Montserrat, Iceland and Guadeloupe. Elevated concentrations of fluoride, iron, sulphate and chloride, as well as turbidity and acidity, have been reported in water supplies. From a public health perspective, the two main issues appear to be: (1) outbreaks of infectious disease caused by the inhibition of disinfection by high levels of suspended ash, and (2) elevated fluoride concentrations. We devised a simple model using volcanic ash leachate composition data to predict effects on receiving waters. Applying this model to the effects of Ruapehu ash, from the 1995/1996 eruptions, suggests that the primary effects of concern are likely to be an increase in acidity (decrease in pH), and increases in concentrations of the metals aluminium, iron and manganese. These metals are not normally considered to pose health risks, and are regulated only by secondary, non-enforceable guidelines. However, exceedances of guideline values for Al, Mn, Fe and pH will cause water to become undrinkable due to a bitter metallic taste and dark colour, and may also cause corrosion, staining and scale deposition problems in water tanks and pipes. Therefore, the main issues following volcanic ashfall of similar composition to Ruapehu ash are likely to be shortages of potable water and damage to distribution systems, rather than risks to public health.

  13. Development of a non-contextual model for determining the autonomy level of intelligent unmanned systems

    NASA Astrophysics Data System (ADS)

    Durst, Phillip J.; Gray, Wendell; Trentini, Michael

    2013-05-01

    A simple, quantitative measure for encapsulating the autonomous capabilities of unmanned systems (UMS) has yet to be established. Current models for measuring a UMS's autonomy level require extensive, operational-level testing and provide a means of assessing the autonomy level only for a specific mission/task and operational environment. A more elegant technique for quantifying autonomy using component-level testing of the robot platform alone, outside of mission and environment contexts, is desirable. Using a high-level framework for UMS architectures, such a model for determining a level of autonomy has been developed. The model uses a combination of developmental and component-level testing for each aspect of the UMS architecture to define a non-contextual autonomous potential (NCAP). The NCAP provides an autonomy level, ranging from fully non-autonomous to fully autonomous, in the form of a single numeric parameter describing the UMS's performance capabilities when operating at that level of autonomy.

  14. Phase-field crystal modeling of heteroepitaxy and exotic modes of crystal nucleation

    NASA Astrophysics Data System (ADS)

    Podmaniczky, Frigyes; Tóth, Gyula I.; Tegze, György; Pusztai, Tamás; Gránásy, László

    2017-01-01

    We review recent advances made in modeling heteroepitaxy, two-step nucleation, and nucleation at the growth front within the framework of a simple dynamical density functional theory, the Phase-Field Crystal (PFC) model. The crystalline substrate is represented by spatially confined periodic potentials. We investigate the misfit dependence of the critical thickness in the Stranski-Krastanov growth mode in isothermal studies. Apparently, the simulation results for stress release via misfit dislocations fit the People-Bean model better than the one by Matthews and Blakeslee. Next, we investigate structural aspects of two-step crystal nucleation at high undercoolings, where an amorphous precursor forms in the first stage. Finally, we present results for the formation of new grains at the solid-liquid interface at high supersaturations/supercoolings, a phenomenon termed Growth Front Nucleation (GFN). Results obtained with diffusive dynamics (applicable to colloids) and with a hydrodynamic extension of the PFC theory (HPFC, developed for simple liquids) are compared. The HPFC simulations indicate two possible mechanisms for GFN.

  15. On the validity of the arithmetic-geometric mean method to locate the optimal solution in a supply chain system

    NASA Astrophysics Data System (ADS)

    Chung, Kun-Jen

    2012-08-01

    Cardenas-Barron [Cardenas-Barron, L.E. (2010) 'A Simple Method to Compute Economic order Quantities: Some Observations', Applied Mathematical Modelling, 34, 1684-1688] indicates that there are several functions in which the arithmetic-geometric mean method (AGM) does not give the minimum. This article presents another situation to reveal that the AGM inequality to locate the optimal solution may be invalid for Teng, Chen, and Goyal [Teng, J.T., Chen, J., and Goyal S.K. (2009), 'A Comprehensive Note on: An Inventory Model under Two Levels of Trade Credit and Limited Storage Space Derived without Derivatives', Applied Mathematical Modelling, 33, 4388-4396], Teng and Goyal [Teng, J.T., and Goyal S.K. (2009), 'Comment on 'Optimal Inventory Replenishment Policy for the EPQ Model under Trade Credit Derived without Derivatives', International Journal of Systems Science, 40, 1095-1098] and Hsieh, Chang, Weng, and Dye [Hsieh, T.P., Chang, H.J., Weng, M.W., and Dye, C.Y. (2008), 'A Simple Approach to an Integrated Single-vendor Single-buyer Inventory System with Shortage', Production Planning and Control, 19, 601-604]. So, the main purpose of this article is to adopt the calculus approach not only to overcome shortcomings of the arithmetic-geometric mean method of Teng et al. (2009), Teng and Goyal (2009) and Hsieh et al. (2008), but also to develop the complete solution procedures for them.
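    For orientation, the arithmetic-geometric mean argument at issue works in the classical EOQ setting because the two cost terms have a Q-independent product; this textbook case (not one of the disputed models) makes the limitation clear:

    \[
      TC(Q) = \frac{DK}{Q} + \frac{hQ}{2} \;\ge\; 2\sqrt{\frac{DK}{Q}\cdot\frac{hQ}{2}} = \sqrt{2DKh},
    \]

    with equality if and only if DK/Q = hQ/2, i.e. at Q* = √(2DK/h). The AGM locates the minimum only because the product of the two terms does not depend on Q; when a cost function lacks this structure, as in the models examined here, the lower bound is not attained at any feasible point and calculus-based solution procedures are required, which is precisely the article's point.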

  16. A new hybrid model for filling gaps and forecast in sea level: application to the eastern English Channel and the North Atlantic Sea (western France)

    NASA Astrophysics Data System (ADS)

    Turki, Imen; Laignel, Benoit; Kakeh, Nabil; Chevalier, Laetitia; Costa, Stephane

    2015-04-01

    This research is carried out in the framework of the Surface Water and Ocean Topography (SWOT) program, a partnership between NASA and CNES. Here, a new hybrid model is implemented for filling gaps in, and forecasting, hourly sea-level variability by combining classical harmonic analysis with statistical methods to reproduce the deterministic and stochastic processes, respectively. After simulating the mean sea-level trend and astronomical tides, the non-tidal residual surges are investigated with autoregressive moving average (ARMA) methods in two ways: (1) applying a purely statistical approach, and (2) introducing sea-level pressure (SLP) into the ARMA model as the main physical process driving the residual sea level. The new hybrid model is applied to the western Atlantic sea and the eastern English Channel. Using the ARMA model with SLP, the hourly sea-level observations of the gauges are well reproduced, with a root mean square error (RMSE) between 4.5 and 7 cm for gaps of 1 to 30 days and an explained variance of more than 80%. For larger gaps of months, the RMSE reaches 9 cm. The negative and positive extreme sea levels are also well reproduced, with a mean explained variance between 70 and 85%. The statistical behavior of one year of modeled residual components agrees well with observations. Frequency analysis using the discrete wavelet transform illustrates strong correlations between the observed and modeled energy spectra and bands of variability. Accordingly, the proposed model is a coherent, simple, and easy tool for estimating the total sea level at timescales from days to months. The ARMA approach appears promising for filling gaps and estimating the sea level at larger scales of years by introducing more of the physical processes driving its stochastic variability.
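    The second, SLP-driven variant can be sketched with standard tools; everything below (the synthetic surge series, the inverse-barometer coefficient, and the ARMA(2,1) order) is an illustrative assumption rather than the paper's fitted configuration.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    n = 2000
    # Synthetic sea-level pressure (hPa) and a surge series it partly drives
    slp = 1013.0 + 8.0 * np.sin(np.arange(n) * 2 * np.pi / 350) + rng.normal(0, 2, n)
    surge = np.zeros(n)
    for t in range(2, n):   # AR(2) memory plus an inverse-barometer-like term
        surge[t] = (0.8 * surge[t - 1] - 0.2 * surge[t - 2]
                    - 0.01 * (slp[t] - 1013.0) + rng.normal(0, 0.01))

    # ARMA(2,1) for the non-tidal residual with SLP as an exogenous regressor
    res = ARIMA(surge, exog=slp, order=(2, 0, 1)).fit()
    print(res.params)
    # Gap filling / forecasting would then use, e.g.:
    #   res.forecast(steps=24, exog=slp_future)   # slp_future: next 24 SLP values
    ```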

  17. Role of community tolerance level (CTL) in predicting the prevalence of the annoyance of road and rail noise.

    PubMed

    Schomer, Paul; Mestre, Vincent; Fidell, Sanford; Berry, Bernard; Gjestland, Truls; Vallet, Michel; Reid, Timothy

    2012-04-01

    Fidell et al. [(2011), J. Acoust. Soc. Am. 130(2), 791-806] have shown (1) that the rate of growth of annoyance with noise exposure reported in attitudinal surveys of the annoyance of aircraft noise closely resembles the exponential rate of change of loudness with sound level, and (2) that the proportion of a community highly annoyed and the variability in annoyance prevalence rates in communities are well accounted for by a simple model with a single free parameter: a community tolerance level (abbreviated CTL, and represented symbolically in mathematical expressions as L(ct)), expressed in units of DNL. The current study applies the same modeling approach to predicting the prevalence of annoyance of road traffic and rail noise. The prevalence of noise-induced annoyance of all forms of transportation noise is well accounted for by a simple, loudness-like exponential function with community-specific offsets. The model fits all of the road traffic findings well, but the prevalence of annoyance due to rail noise is more accurately predicted separately for interviewing sites with and without high levels of vibration and/or rattle.

  18. A cognitive-consistency based model of population wide attitude change.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lakkaraju, Kiran; Speed, Ann Elizabeth

    Attitudes play a significant role in determining how individuals process information and behave. In this paper we develop a new computational model of population-wide attitude change that captures the social level (how individuals interact and communicate information) and the cognitive level (how attitudes and concepts interact with each other). The model captures the cognitive aspect by representing each individual as a parallel constraint satisfaction network. The dynamics of the model are explored through a simple attitude change experiment in which we vary the social network and the distribution of attitudes in a population.

  19. Synapse fits neuron: joint reduction by model inversion.

    PubMed

    van der Scheer, H T; Doelman, A

    2017-08-01

    In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.
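    For concreteness, a FitzHugh-type neuron of the kind taken as given above can be integrated in a few lines; the parameter values are the common textbook choice, not values fitted to the squid giant fiber system.

    ```python
    import numpy as np

    def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, t_end=200.0):
        """Forward-Euler integration of the FitzHugh-Nagumo equations."""
        n = int(t_end / dt)
        v, w = np.empty(n), np.empty(n)
        v[0], w[0] = -1.0, 1.0
        for k in range(n - 1):
            dv = v[k] - v[k] ** 3 / 3 - w[k] + I   # fast (voltage-like) variable
            dw = eps * (v[k] + a - b * w[k])       # slow (recovery) variable
            v[k + 1] = v[k] + dt * dv
            w[k + 1] = w[k] + dt * dw
        return v, w

    v, w = fitzhugh_nagumo()
    spikes = int(np.sum((v[1:] > 1.0) & (v[:-1] <= 1.0)))   # upward crossings of v = 1
    print(f"peak v = {v.max():.2f}, spikes in 200 time units: {spikes}")
    ```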

  20. Multicomponent ensemble models to forecast induced seismicity

    NASA Astrophysics Data System (ADS)

    Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.

    2018-01-01

    In recent years, human-induced seismicity has become a more and more relevant topic due to its economic and social implications. Several models and approaches have been developed to explain the underlying physical processes or to forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models, based on Bayesian weightings, that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, a simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity via the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.

  1. Biodiversity maintenance in food webs with regulatory environmental feedbacks.

    PubMed

    Bagdassarian, Carey K; Dunham, Amy E; Brown, Christopher G; Rauscher, Daniel

    2007-04-21

    Although the food web is one of the most fundamental and oldest concepts in ecology, elucidating the strategies and structures by which natural communities of species persist remains a challenge to empirical and theoretical ecologists. We show that simple regulatory feedbacks between autotrophs and their environment when embedded within complex and realistic food-web models enhance biodiversity. The food webs are generated through the niche-model algorithm and coupled with predator-prey dynamics, with and without environmental feedbacks at the autotroph level. With high probability and especially at lower, more realistic connectance levels, regulatory environmental feedbacks result in fewer species extinctions, that is, in increased species persistence. These same feedback couplings, however, also sensitize food webs to environmental stresses leading to abrupt collapses in biodiversity with increased forcing. Feedback interactions between species and their material environments anchor food-web persistence, adding another dimension to biodiversity conservation. We suggest that the regulatory features of two natural systems, deep-sea tubeworms with their microbial consortia and a soil ecosystem manifesting adaptive homeostatic changes, can be embedded within niche-model food-web dynamics.
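    The niche-model algorithm used above to generate the webs (Williams and Martinez 2000) is compact enough to sketch. S and C below are illustrative, and the published algorithm's refinements (such as forcing a basal species and rejecting disconnected webs) are omitted.

    ```python
    import numpy as np

    def niche_model(S=20, C=0.15, seed=4):
        """Generate a food-web adjacency matrix; A[i, j] = 1 means i eats j."""
        rng = np.random.default_rng(seed)
        n = np.sort(rng.uniform(0.0, 1.0, S))   # niche values on [0, 1]
        beta = 1.0 / (2.0 * C) - 1.0            # Beta(1, beta) gives E[r/n] = 2C
        r = n * rng.beta(1.0, beta, S)          # feeding-range widths
        c = rng.uniform(r / 2.0, n)             # feeding-range centres
        A = np.zeros((S, S), dtype=int)
        for i in range(S):
            A[i] = (n >= c[i] - r[i] / 2.0) & (n <= c[i] + r[i] / 2.0)
        return A

    A = niche_model()
    print(f"realized connectance L/S^2 = {A.sum() / A.size:.3f}")
    ```

    In the study above, a web generated this way is then coupled with predator-prey dynamics, with the autotroph-environment feedback switched on or off for comparison.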

  2. Mechanics of train collision

    DOT National Transportation Integrated Search

    1976-04-30

    A simple and a more detailed mathematical model for the simulation of train collisions are presented. The study presents considerable insight as to the causes and consequences of train motions on impact. Comparison of model predictions with two full ...

  3. A simple simulation model as a tool to assess alternative health care provider payment reform options in Vietnam.

    PubMed

    Cashin, Cheryl; Phuong, Nguyen Khanh; Shain, Ryan; Oanh, Tran Thi Mai; Thuy, Nguyen Thi

    2015-01-01

    Vietnam is currently considering a revision of its 2008 Health Insurance Law, including the regulation of provider payment methods. This study uses a simple spreadsheet-based, micro-simulation model to analyse the potential impacts of different provider payment reform scenarios on resource allocation across health care providers in three provinces in Vietnam, as well as on the total expenditure of the provincial branches of the public health insurance agency (Provincial Social Security [PSS]). The results show that currently more than 50% of PSS spending is concentrated at the provincial level with less than half at the district level. There is also a high degree of financial risk on district hospitals with the current fund-holding arrangement. Results of the simulation model show that several alternative scenarios for provider payment reform could improve the current payment system by reducing the high financial risk currently borne by district hospitals without dramatically shifting the current level and distribution of PSS expenditure. The results of the simulation analysis provided an empirical basis for health policy-makers in Vietnam to assess different provider payment reform options and make decisions about new models to support health system objectives.

  4. Simulation of C. elegans thermotactic behavior in a linear thermal gradient using a simple phenomenological motility model.

    PubMed

    Matsuoka, Tomohiro; Gomi, Sohei; Shingai, Ryuzo

    2008-01-21

    The nematode Caenorhabditis elegans has been reported to exhibit thermotaxis, a sophisticated behavioral response to temperature. However, there appears to be some inconsistency among previous reports. The results of population-level thermotaxis investigations suggest that C. elegans can navigate to the region of its cultivation temperature from nearby regions of higher or lower temperature, whereas individual C. elegans nematodes appear to show only cryophilic tendencies above their cultivation temperature. A Monte Carlo-style simulation using a simple individual model of C. elegans provides insight that helps clarify the apparent inconsistencies among previous findings. The simulation was conducted using a thermotaxis model that includes cryophilic tendencies, isothermal tracking, and thermal adaptation. Because of the random-walk character of C. elegans locomotion, cryophilic tendencies above the cultivation temperature alone result in population-level thermophilic tendencies. Isothermal tracking, a period of active pursuit of an isotherm near the prior cultivation temperature, can strengthen the tendency of the worms to gather in near-cultivation-temperature regions. A statistical index, the thermotaxis (TTX) L-skewness, was introduced and proved useful in analyzing the population-level thermotaxis of model worms.

  5. Stroke-model-based character extraction from gray-level document images.

    PubMed

    Ye, X; Cheriet, M; Suen, C Y

    2001-01-01

    Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.

  6. A coordination theory for intelligent machines

    NASA Technical Reports Server (NTRS)

    Wang, Fei-Yue; Saridis, George N.

    1990-01-01

    A formal model for the coordination level of intelligent machines is established. The framework of the coordination level investigated consists of one dispatcher and a number of coordinators. The model called coordination structure has been used to describe analytically the information structure and information flow for the coordination activities in the coordination level. Specifically, the coordination structure offers a formalism to (1) describe the task translation of the dispatcher and coordinators; (2) represent the individual process within the dispatcher and coordinators; (3) specify the cooperation and connection among the dispatcher and coordinators; (4) perform the process analysis and evaluation; and (5) provide a control and communication mechanism for the real-time monitor or simulation of the coordination process. A simple procedure for the task scheduling in the coordination structure is presented. The task translation is achieved by a stochastic learning algorithm. The learning process is measured with entropy and its convergence is guaranteed. Finally, a case study of the coordination structure with three coordinators and one dispatcher for a simple intelligent manipulator system illustrates the proposed model and the simulation of the task processes performed on the model verifies the soundness of the theory.

  7. Theoretical and Experimental Aspects of Acoustic Modelling of Engine Exhaust Systems with Applications to a Vacuum Pump

    NASA Astrophysics Data System (ADS)

    Sridhara, Basavapatna Sitaramaiah

    In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe. For each configuration, the radiated sound pressure could be measured. However, this is not a very efficient method. A second approach would be to develop a scheme involving only a few measurements which can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances, and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe. The four pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight pipe combinations. The predicted and measured sound pressure levels were compared for all cases considered. In all cases, correlation of the experimental results and those predicted by the developed expressions was good. Predicted and measured values of the insertion loss of the mufflers were compared. The agreement between the two was good. Also, an error analysis of the four-load method was done.
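    For the simple expansion chamber used as one of the test mufflers, the four-pole description collapses to a standard closed-form transmission loss; the plane-wave formula below is quoted for orientation (the dissertation's full prediction also carries the measured source and radiation impedances):

    \[
      \mathrm{TL} = 10 \log_{10}\!\left[ 1 + \tfrac{1}{4}\left( m - \frac{1}{m} \right)^{2} \sin^{2}(kL) \right],
    \]

    where m is the chamber-to-pipe cross-sectional area ratio, L the chamber length, and k the acoustic wavenumber. The loss vanishes at kL = nπ and peaks at the odd quarter-wave frequencies, a pattern that the four-pole machinery reproduces automatically once the chamber matrix is chained between the source and termination impedances.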

  8. Protein degradation rate is the dominant mechanism accounting for the differences in protein abundance of basal p53 in a human breast and colorectal cancer cell line.

    PubMed

    Lakatos, Eszter; Salehi-Reyhani, Ali; Barclay, Michael; Stumpf, Michael P H; Klug, David R

    2017-01-01

    We determine p53 protein abundances and cell-to-cell variation in two human cancer cell lines with single-cell resolution, and show that the fractional width of the distributions is the same in both cases despite a large difference in average protein copy number. We developed a computational framework to identify dominant mechanisms controlling the variation of protein abundance in a simple model of gene expression from the summary statistics of single-cell steady-state protein expression distributions. Our results, based on single-cell data analysed in a Bayesian framework, lend strong support to a model in which variation in basal p53 protein abundance is best explained by variation in the rate of p53 protein degradation. This is supported by measurements of the relative average levels of mRNA, which are very similar despite the large variation in protein level.

  9. Exploring Differential Effects Across Two Decoding Treatments on Item-Level Transfer in Children with Significant Word Reading Difficulties: A New Approach for Testing Intervention Elements.

    PubMed

    Steacy, Laura M; Elleman, Amy M; Lovett, Maureen W; Compton, Donald L

    2016-01-01

    In English, gains in decoding skill do not map directly onto increases in word reading. However, beyond the Self-Teaching Hypothesis (Share, 1995), little is known about the transfer of decoding skills to word reading. In this study, we offer a new approach to testing the transfer of specific decoding elements to word reading. To illustrate, we modeled word-reading gains among children with reading disability (RD) enrolled in Phonological and Strategy Training (PHAST) or Phonics for Reading (PFR). The conditions differed in sublexical training, with PHAST stressing multi-level connections and PFR emphasizing simple grapheme-phoneme correspondences. Thirty-seven children with RD, in 3rd-6th grade, were randomly assigned to 60 lessons of PHAST or PFR. Crossed random-effects models allowed us to identify specific intervention elements that differentially impacted word-reading performance at posttest, with children in PHAST better able to read words with variant vowel pronunciations. Results suggest that sublexical emphasis influences transfer gains to word reading.

  10. Bioheat model evaluations of laser effects on tissues: role of water evaporation and diffusion

    NASA Astrophysics Data System (ADS)

    Nagulapally, Deepthi; Joshi, Ravi P.; Thomas, Robert J.

    2011-03-01

    A two-dimensional, time-dependent bioheat model is applied to evaluate changes in temperature and water content in tissues subjected to laser irradiation. Our approach takes account of liquid-to-vapor phase changes and a simple diffusive flow of water within the biotissue. An energy balance equation considers blood perfusion, metabolic heat generation, laser absorption, and water evaporation. The model also accounts for the water dependence of tissue properties (both thermal and optical), and variations in blood perfusion rates based on local tissue injury. Our calculations show that water diffusion would reduce the local temperature increases and hot spots in comparison to simple models that ignore the role of water in the overall thermal and mass transport. Also, the reduced suppression of perfusion rates due to tissue heating and damage with water diffusion affect the necrotic depth. Two-dimensional results for the dynamic temperature, water content, and damage distributions will be presented for skin simulations. It is argued that reduction in temperature gradients due to water diffusion would mitigate local refractive index variations, and hence influence the phenomenon of thermal lensing. Finally, simple quantitative evaluations of pressure increases within the tissue due to laser absorption are presented.
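    The energy balance sketched above is, in essence, a Pennes-type bioheat equation augmented with laser and evaporation terms; the generic form below is given for orientation and is an assumption about the structure, not the authors' exact equation:

    \[
      \rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b (T_a - T) + Q_{\mathrm{met}} + Q_{\mathrm{laser}} - Q_{\mathrm{evap}},
    \]

    where the perfusion term carries the injury-dependent blood perfusion rate ω_b relative to the arterial temperature T_a, and Q_evap is the latent-heat sink associated with water evaporation. In the model described above, k, c, and the optical absorption feeding Q_laser additionally depend on the local water content, which is itself redistributed by the diffusive water-transport equation.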

  11. Dielectric properties of calcium- and barium-doped strontium titanate

    NASA Astrophysics Data System (ADS)

    Tung, Li-Chun

    Dielectric properties of high-quality polycrystalline Ca- and Ba-doped SrTiO3 perovskites are studied by means of dielectric constant, dielectric loss and ferroelectric hysteresis measurements. Low-frequency dispersion of the dielectric constant is found to be very small, and a simple relaxor model may not be able to explain the dielectric behavior. Relaxation modes are found in these samples, and they are all interpreted as thermally activated dipolar re-orientation across energy barriers. In Sr1-xCaxTiO3 (x = 0-0.3), two modes are found, associated with different relaxation processes, and their concentration dependence implies a competition between these processes. In Sr1-xBaxTiO3 (x = 0-0.25), the relaxation modes are found to be related to the structural transitions, and they persist at low doping levels (x < 0.1), where structural ordering was not observed in previous neutron scattering studies. The validity of the well-accepted Barrett formula is discussed, and two well-accepted models, the anharmonic oscillator model and the transverse Ising model, are found to be equivalent. Both the Ca and Ba systems can be understood qualitatively within the framework of the transverse Ising model.

  12. Influence of parameter values on the oscillation sensitivities of two p53-Mdm2 models.

    PubMed

    Cuba, Christian E; Valle, Alexander R; Ayala-Charca, Giancarlo; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Biomolecular networks that present oscillatory behavior are ubiquitous in nature. While some design principles for robust oscillations have been identified, it is not well understood how these oscillations are affected when the kinetic parameters are constantly changing or are not precisely known, as often occurs in cellular environments. Many models of diverse complexity level, for systems such as circadian rhythms, cell cycle or the p53 network, have been proposed. Here we assess the influence of hundreds of different parameter sets on the sensitivities of two configurations of a well-known oscillatory system, the p53 core network. We show that, for both models and all parameter sets, the parameter related to the p53 positive feedback, i.e. self-promotion, is the only one that presents sizeable sensitivities on extrema, periods and delay. Moreover, varying the parameter set values to change the dynamical characteristics of the response is more restricted in the simple model, whereas the complex model shows greater tunability. These results highlight the importance of the presence of specific network patterns, in addition to the role of parameter values, when we want to characterize oscillatory biochemical systems.

  13. Tuning of PID controllers for boiler-turbine units.

    PubMed

    Tan, Wen; Liu, Jizhen; Fang, Fang; Chen, Yanqiao

    2004-10-01

    A simple two-by-two model for a boiler-turbine unit is demonstrated in this paper. The model can capture the essential dynamics of a unit. The design of a coordinated controller is discussed based on this model. A PID control structure is derived, and a tuning procedure is proposed. The examples show that the method is easy to apply and can achieve acceptable performance.
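    For orientation, the PID structure whose gains the tuning procedure selects can be sketched as follows; the gains and the first-order plant standing in for the boiler-turbine unit are illustrative placeholders, not values from the paper.

    ```python
    class PID:
        """Textbook PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def step(self, setpoint, measurement):
            err = setpoint - measurement
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    dt, y = 0.1, 0.0
    ctrl = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    for _ in range(300):                  # first-order plant: dy/dt = (u - y)/5
        u = ctrl.step(1.0, y)
        y += dt * (u - y) / 5.0
    print(f"y(30 s) = {y:.3f}")           # settles near the setpoint 1.0
    ```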

  14. When push comes to shove: Exclusion processes with nonlocal consequences

    NASA Astrophysics Data System (ADS)

    Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.

    2015-11-01

    Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.

  15. Differences in aquatic habitat quality as an impact of one- and two-dimensional hydrodynamic model simulated flow variables

    NASA Astrophysics Data System (ADS)

    Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.

    2013-12-01

    Aquatic habitat models utilize flow variables which may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present the analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA), which are a straight and pool-riffle reach (South Fork Boise River), small pool-riffle sinuous streams in a large meadow (Bear Valley Creek) and a steep-confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that 1D and 2D modeling approaches have effects on both the spatial distribution of the habitat and WUA for both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small, but depended on stream type. Nevertheless, spatially distributed habitat quality difference is considerable in all streams. The steep-confined plane bed stream had larger differences between aquatic habitat quality defined with 1D and 2D flow models compared to results for streams with well defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches

  16. Collective motion of groups of self-propelled particles following interacting leaders

    NASA Astrophysics Data System (ADS)

    Ferdinandy, B.; Ozogány, K.; Vicsek, T.

    2017-08-01

    In order to keep their cohesiveness during locomotion, gregarious animals must make collective decisions. Many species boast complex societies with multiple levels of communities. A common case is when two dominant levels exist, one corresponding to leaders and the other consisting of followers. In this paper we study the collective motion of such two-level assemblies of self-propelled particles. We present a model adapted from one originally proposed to describe the movement of cells, resulting in a smoothly varying coherent motion. We use the terminology of large groups of some mammals, in which leaders and their followers form a group called a harem. We study the emergence (self-organization) of sub-groups within a herd during locomotion by computer simulations. The resulting processes are compared with our prior observations of a Przewalski horse herd (Hortobágy, Hungary), which we use as a published case study. We find that the model reproduces key features of a herd composed of harems moving on open ground, including fights for followers between leaders and bachelor groups (groups of leaders without followers). One of our findings, however, does not agree with the observations: while the group size distribution emerging in our model is normal, the group size distribution of the observed herd, based on historical data, follows a lognormal distribution. We argue that this indicates that the formation (and size) of the harems must involve a more complex social topology than simple spatial-distance-based interactions.

  17. A global model for steady state and transient S.I. engine heat transfer studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohac, S.V.; Assanis, D.N.; Baker, D.M.

    1996-09-01

    A global, systems-level model which characterizes the thermal behavior of internal combustion engines is described in this paper. Based on resistor-capacitor thermal networks, either steady-state or transient thermal simulations can be performed. A two-zone, quasi-dimensional spark-ignition engine simulation is used to determine in-cylinder gas temperature and convection coefficients. Engine heat fluxes and component temperatures can subsequently be predicted from specification of general engine dimensions, materials, and operating conditions. Emphasis has been placed on minimizing the number of model inputs and keeping them as simple as possible to make the model practical and useful as an early design tool. The success of the global model depends on properly scaling the general engine inputs to accurately model engine heat flow paths across families of engine designs. The development and validation of suitable, scalable submodels is described in detail in this paper. Simulation sub-models and overall system predictions are validated with data from two spark ignition engines. Several sensitivity studies are performed to determine the most significant heat transfer paths within the engine and exhaust system. Overall, it has been shown that the model is a powerful tool in predicting steady-state heat rejection and component temperatures, as well as transient component temperatures.
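    The resistor-capacitor idea can be sketched with a toy three-node network obeying C_i dT_i/dt = Σ_j (T_j − T_i)/R_ij + Q_i; the capacitances, resistances, and heat input below are illustrative placeholders, not engine data.

    ```python
    import numpy as np

    C = np.array([500.0, 800.0, 4000.0])   # J/K: gas-side wall, coolant-side wall, coolant
    R = {(0, 1): 0.01, (1, 2): 0.005}      # K/W: thermal resistances between nodes
    T_amb, R_amb = 350.0, 0.002            # coolant-to-ambient heat rejection path
    Q = np.array([5.0e3, 0.0, 0.0])        # W: combustion-side heat input at node 0
    T = np.array([400.0, 380.0, 360.0])    # K: initial temperatures

    dt = 0.01                               # s: explicit Euler step (well below R*C)
    for _ in range(int(600 / dt)):          # ten minutes of warm-up
        dQ = Q.copy()
        for (i, j), r in R.items():
            flow = (T[j] - T[i]) / r        # W, positive into node i
            dQ[i] += flow
            dQ[j] -= flow
        dQ[2] += (T_amb - T[2]) / R_amb     # heat rejected to ambient
        T = T + dt * dQ / C
    print(np.round(T, 1))                   # approaches roughly [435, 385, 360] K
    ```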

  18. Design Through Manufacturing: The Solid Model - Finite Element Analysis Interface

    NASA Technical Reports Server (NTRS)

    Rubin, Carol

    2003-01-01

    State-of-the-art computer aided design (CAD) presently affords engineers the opportunity to create solid models of machine parts which reflect every detail of the finished product. Ideally, these models should fulfill two very important functions: (1) they must provide numerical control information for automated manufacturing of precision parts, and (2) they must enable analysts to easily evaluate the stress levels (using finite element analysis - FEA) for all structurally significant parts used in space missions. Today's state-of-the-art CAD programs perform function (1) very well, providing an excellent model for precision manufacturing. But they do not provide a straightforward and simple means of automating the translation from CAD to FEA models, especially for aircraft-type structures. The research performed during the fellowship period investigated the transition process from the solid CAD model to the FEA stress analysis model with the final goal of creating an automatic interface between the two. During the period of the fellowship a detailed multi-year program for the development of such an interface was created. The ultimate goal of this program will be the development of a fully parameterized automatic ProE/FEA translator for parts and assemblies, with the incorporation of data base management into the solution, and ultimately including computational fluid dynamics and thermal modeling in the interface.

  19. Apparent Transition Behavior of Widely-Used Turbulence Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2006-01-01

    The Spalart-Allmaras and the Menter SST kappa-omega turbulence models are shown to have the undesirable characteristic that, for fully turbulent computations, a transition region can occur whose extent varies with grid density. Extremely fine two-dimensional grids over the front portion of an airfoil are used to demonstrate the effect. As the grid density is increased, the laminar region near the nose becomes larger. In the Spalart-Allmaras model this behavior is due to convergence to a laminar-behavior fixed point that occurs in practice when freestream turbulence is below some threshold. It is the result of a feature purposefully added to the original model in conjunction with a special trip function. This degenerate fixed point can also cause nonuniqueness regarding where transition initiates on a given grid. Consistent fully turbulent results can easily be achieved by either using a higher freestream turbulence level or by making a simple change to one of the model constants. Two-equation kappa-omega models, including the SST model, exhibit strong sensitivity to numerical resolution near the area where turbulence initiates. Thus, inconsistent apparent transition behavior with grid refinement in this case does not appear to stem from the presence of a degenerate fixed point. Rather, it is a fundamental property of the kappa-omega model itself, and is not easily remedied.

  20. Apparent Transition Behavior of Widely-Used Turbulence Models

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2007-01-01

    The Spalart-Allmaras and the Menter SST k-omega turbulence models are shown to have the undesirable characteristic that, for fully turbulent computations, a transition region can occur whose extent varies with grid density. Extremely fine two-dimensional grids over the front portion of an airfoil are used to demonstrate the effect. As the grid density is increased, the laminar region near the nose becomes larger. In the Spalart-Allmaras model this behavior is due to convergence to a laminar-behavior fixed point that occurs in practice when freestream turbulence is below some threshold. It is the result of a feature purposefully added to the original model in conjunction with a special trip function. This degenerate fixed point can also cause non-uniqueness regarding where transition initiates on a given grid. Consistent fully turbulent results can easily be achieved by either using a higher freestream turbulence level or by making a simple change to one of the model constants. Two-equation k-omega models, including the SST model, exhibit strong sensitivity to numerical resolution near the area where turbulence initiates. Thus, inconsistent apparent transition behavior with grid refinement in this case does not appear to stem from the presence of a degenerate fixed point. Rather, it is a fundamental property of the k-omega model itself, and is not easily remedied.

  1. Modelling Simple Experimental Platform for In Vitro Study of Drug Elution from Drug Eluting Stents (DES)

    NASA Astrophysics Data System (ADS)

    Kalachev, L. V.

    2016-06-01

    We present a simple model of an experimental setup for the in vitro study of drug release from drug-eluting stents and of drug propagation in artificial tissue samples representing blood vessels. The model is further reduced using the assumption of vastly different characteristic diffusion times in the stent coating and in the artificial tissue. The model is used to derive a relationship between the times at which measurements have to be taken on two experimental platforms, whose artificial tissue samples are made of different materials with different drug diffusion coefficients, in order to properly compare the drug release characteristics of drug-eluting stents.

  2. Some Simple Formulas for Posterior Convergence Rates

    PubMed Central

    2014-01-01

    We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model, and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278

  3. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large-scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information-theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same emergence of low effective dimensionality from complicated underlying processes. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
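
    As a toy numerical illustration of sloppiness (not one of the models reviewed in the paper), the sketch below computes the Fisher information matrix of a two-parameter sum-of-exponentials fit, under an assumed unit-noise least-squares setting, and shows its widely separated eigenvalues:

        # Minimal sketch of "sloppiness": the eigenvalues of the Fisher
        # information matrix (J^T J for least squares with unit noise) of a
        # toy sum-of-exponentials model are widely separated. Toy example
        # only; the separation grows rapidly with more exponential terms.
        import numpy as np

        t = np.linspace(0.0, 5.0, 50)
        theta = np.array([1.0, 1.2])            # two similar decay rates

        # Model: y(t) = exp(-theta0 t) + exp(-theta1 t); analytic Jacobian
        J = np.stack([-t * np.exp(-theta[0] * t),
                      -t * np.exp(-theta[1] * t)], axis=1)
        fim = J.T @ J                           # Fisher information, unit noise
        eigvals = np.linalg.eigvalsh(fim)       # ascending order
        print(f"FIM eigenvalues: {eigvals}")
        print(f"stiff/sloppy ratio: {eigvals[1] / eigvals[0]:.1f}")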

  4. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  5. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system well known to be involved in motor coordination and increasingly recognized for roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severely abnormal cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open-source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole-brain T1-weighted volumes with approximately 1 mm isotropic resolution.
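
    The sketch below is not Non-Local SIMPLE itself; it only illustrates the underlying idea of performance-weighted label fusion on synthetic binary labels, with each rater's weight estimated from agreement with an initial majority-vote consensus. All data and error rates are fabricated for illustration.

        # Minimal sketch of performance-weighted label fusion (a plain
        # weighted vote, NOT the Non-Local SIMPLE algorithm itself).
        import numpy as np

        rng = np.random.default_rng(0)
        truth = rng.integers(0, 2, size=1000)        # synthetic true labels

        def corrupt(labels, err):
            flip = rng.random(labels.size) < err     # flip labels at rate err
            return np.where(flip, 1 - labels, labels)

        # Five synthetic "atlas" segmentations with different error rates
        atlases = [corrupt(truth, e) for e in (0.15, 0.20, 0.25, 0.30, 0.35)]

        # Weight each atlas by agreement with a majority-vote consensus,
        # then fuse with a weighted vote.
        consensus = (np.mean(atlases, axis=0) > 0.5).astype(int)
        weights = [np.mean(a == consensus) for a in atlases]
        fused = (np.average(atlases, axis=0, weights=weights) > 0.5).astype(int)

        print("per-atlas accuracy:", [float(np.mean(a == truth)) for a in atlases])
        print("fused accuracy:    ", float(np.mean(fused == truth)))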

  6. Variations in thermospheric composition: A model based on mass-spectrometer and satellite-drag data

    NASA Technical Reports Server (NTRS)

    Jacchia, L. G.

    1973-01-01

    The seasonal-latitudinal and the diurnal variations of composition observed by mass spectrometers on the OGO 6 satellite are represented by two simple empirical formulae, each of which uses only one numerical parameter. The formulae are of a very general nature and predict the behavior of these variations at all heights and for all levels of solar activity; they yield a satisfactory representation of the corresponding variations in total density as derived from satellite drag. It is suggested that a seasonal variation of hydrogen might explain the abnormally low hydrogen densities at high northern latitudes in July 1964.

  7. A direct method for calculating instrument noise levels in side-by-side seismometer evaluations

    USGS Publications Warehouse

    Holcomb, L. Gary

    1989-01-01

    The subject of determining the inherent system noise levels present in modern broadband closed-loop seismic sensors has been an evolving topic ever since closed-loop systems became available. Closed-loop systems are unique in that the system noise cannot be determined via a blocked-mass test as in older conventional open-loop seismic sensors. Instead, most investigators have resorted to performing measurements on two or more systems operating in close proximity to one another and to analyzing the outputs of these systems with respect to one another to ascertain their relative noise levels. The analysis of side-by-side relative performance is inherently dependent on the accuracy of the mathematical modeling of the test configuration. This report presents a direct approach to extracting the system noise levels of two linear systems with a common coherent input signal. The mathematical solution to the problem is remarkably simple; however, the practical application of the method encounters some difficulties. Examples of expected accuracies are presented as derived by simulating real systems performance using computer-generated random noise. In addition, examples of the performance of the method when applied to real experimental test data are shown.
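
    A minimal sketch of the two-instrument idea, assuming identical instrument responses and mutually uncorrelated self-noise so that the cross-spectrum isolates the coherent input; the synthetic signals and noise levels below are placeholders, not the report's test data or its exact formulation.

        # With a common coherent input and uncorrelated self-noise, the
        # cross-spectrum estimates the coherent power, so each instrument's
        # noise PSD is approximately its auto-spectrum minus the
        # cross-spectrum magnitude (identical responses assumed).
        import numpy as np
        from scipy.signal import welch, csd

        fs, n = 100.0, 2**16
        rng = np.random.default_rng(1)
        ground = rng.normal(size=n)                # common coherent input
        x1 = ground + 0.3 * rng.normal(size=n)     # sensor 1: signal + self-noise
        x2 = ground + 0.3 * rng.normal(size=n)     # sensor 2: signal + self-noise

        f, p11 = welch(x1, fs=fs, nperseg=4096)
        _, p12 = csd(x1, x2, fs=fs, nperseg=4096)

        n11 = p11 - np.abs(p12)                    # estimated noise PSD, sensor 1
        print(f"mean estimated noise PSD: {n11.mean():.5f}")
        print(f"true white-noise PSD:     {0.3**2 / (fs / 2):.5f}")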

  8. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.

  9. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295

  10. Application of experiential learning model using simple physical kit to increase attitude toward physics student senior high school in fluid

    NASA Astrophysics Data System (ADS)

    Johari, A. H.; Muslim

    2018-05-01

    An experiential learning model using a simple physics kit has been implemented to obtain a picture of how senior high school students' attitudes toward physics improve in the topic of fluids. This study aims to obtain a description of the increase in senior high school students' attitudes toward physics. The research method used was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved in this research: an experimental class of 28 students and a control class of 26 students. The increase in attitude toward physics was measured using an attitude scale consisting of 18 questions. The experimental class showed an average increase of 86.5% (almost all students improved), while the control class showed 53.75% (about half of the students improved). These results show that an experiential learning model using a simple physics kit can improve attitudes toward physics compared to experiential learning without the kit.

  11. Keep Your Distance! Using Second-Order Ordinary Differential Equations to Model Traffic Flow

    ERIC Educational Resources Information Center

    McCartney, Mark

    2004-01-01

    A simple mathematical model for how vehicles follow each other along a stretch of road is presented. The resulting linear second-order differential equation with constant coefficients is solved and interpreted. The model can be used as an application of solution techniques taught at first-year undergraduate level and as a motivator to encourage…
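
    The article's exact equation is not given above; as a hedged sketch, one common linear follow-the-leader law makes the follower's acceleration proportional to its speed deficit relative to the leader, giving a second-order linear ODE with constant coefficients. Parameter values below are illustrative, not from the article.

        # Minimal sketch of a linear car-following model solved numerically.
        import numpy as np
        from scipy.integrate import solve_ivp

        c = 0.8                 # driver sensitivity [1/s]
        v_lead = 20.0           # leader's constant speed [m/s]

        def rhs(t, y):
            x, v = y            # follower position and speed
            return [v, c * (v_lead - v)]   # acceleration ~ relative speed

        sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 10.0], dense_output=True)
        for ti in np.linspace(0.0, 30.0, 7):
            x, v = sol.sol(ti)
            print(f"t={ti:5.1f} s  x={x:7.1f} m  v={v:5.2f} m/s")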

  12. Using Supply, Demand, and the Cournot Model to Understand Corruption

    ERIC Educational Resources Information Center

    Hayford, Marc D.

    2007-01-01

    The author combines the supply and demand model of taxes with a Cournot model of bribe takers to develop a simple and useful framework for understanding the effect of corruption on economic activity. There are many examples of corruption in both developed and developing countries. Because corruption decreases the level of economic activity and…

  13. A new robust control scheme using second order sliding mode and fuzzy logic of a DFIM supplied by two five-level SVPWM inverters

    NASA Astrophysics Data System (ADS)

    Boudjema, Zinelaabidine; Taleb, Rachid; Bounadja, Elhadj

    2017-02-01

    The traditional field-oriented control strategy with a proportional-integral (PI) regulator for the speed drive of the doubly fed induction motor (DFIM) has some drawbacks, such as parameter-tuning complications, mediocre dynamic performance, and reduced robustness. Therefore, based on the analysis of the mathematical model of a DFIM supplied by two five-level SVPWM inverters, this paper proposes a new robust control scheme based on super-twisting sliding mode and fuzzy logic. Conventional sliding mode control (SMC) produces a strong chattering effect on the electromagnetic torque developed by the DFIM. In order to resolve this problem, a second-order sliding mode technique based on the super-twisting algorithm and fuzzy logic functions is employed. The validity of the approach was tested using Matlab/Simulink software. The simulation results demonstrate the advantages of the proposed control scheme, including simple design of the control system and reduced chattering.
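
    A bare-bones sketch of the super-twisting algorithm on a toy scalar plant, not the paper's DFIM drive or its fuzzy-logic extension; gains, plant, and disturbance are illustrative choices.

        # Super-twisting (second-order sliding mode) on x' = u + d:
        # the discontinuous action is hidden inside an integral term, which
        # is what reduces chattering relative to first-order SMC.
        import numpy as np

        dt, t_end = 1e-4, 2.0
        k1, k2 = 1.5, 1.1              # super-twisting gains
        x, v = 1.0, 0.0                # plant state and integral control term

        for k in range(int(t_end / dt)):
            t = k * dt
            s = x                      # sliding variable: drive x to zero
            u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
            v += dt * (-k2 * np.sign(s))      # integral (twisting) term
            d = 0.5 * np.sin(t)               # bounded matched disturbance
            x += dt * (u + d)                 # plant dynamics

        print(f"final |x| = {abs(x):.2e}")    # near zero despite disturbance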

  14. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    NASA Astrophysics Data System (ADS)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.

  15. Elucidating the effects of adsorbent flexibility on fluid adsorption using simple models and flat-histogram sampling methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Vincent K., E-mail: vincent.shen@nist.gov; Siderius, Daniel W.

    2014-06-28

    Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called “breathing” of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.
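
    The paper's osmotic-ensemble pore models are specific to flexible adsorbents, but the flat-histogram machinery can be illustrated on any small system. The sketch below runs a bare-bones Wang-Landau iteration (one common flat-histogram scheme, standing in for the paper's samplers) for the density of states of a tiny periodic Ising lattice:

        # Minimal Wang-Landau sketch: random-walk in energy with acceptance
        # prob min(1, g(E)/g(E')), refining log g(E) until the visit
        # histogram is roughly flat, then shrinking the modification factor.
        import numpy as np

        L = 4
        rng = np.random.default_rng(2)
        spins = rng.choice([-1, 1], size=(L, L))

        def energy(s):
            return -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

        N = L * L
        levels = np.arange(-2 * N, 2 * N + 1, 4)     # allowed energies
        idx = {e: i for i, e in enumerate(levels)}
        log_g = np.zeros(levels.size)                # running log g(E)
        hist = np.zeros(levels.size)
        f = 1.0                                      # log modification factor

        E = energy(spins)
        while f > 1e-4:
            for _ in range(20000):
                i, j = rng.integers(L, size=2)
                dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                        + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                if np.log(rng.random()) < log_g[idx[E]] - log_g[idx[E + dE]]:
                    spins[i, j] *= -1
                    E += dE
                log_g[idx[E]] += f
                hist[idx[E]] += 1
            visited = hist[hist > 0]
            if visited.min() > 0.8 * visited.mean():  # histogram flat enough?
                f /= 2.0
                hist[:] = 0

        for e, lg in zip(levels, log_g):
            if lg > 0.0:
                print(f"E = {e:4d}   log g(E) - min = {lg - log_g[log_g > 0].min():7.2f}")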

  16. Elucidating the effects of adsorbent flexibility on fluid adsorption using simple models and flat-histogram sampling methods

    NASA Astrophysics Data System (ADS)

    Shen, Vincent K.; Siderius, Daniel W.

    2014-06-01

    Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called "breathing" of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.

  17. Self-assembly of Archimedean tilings with enthalpically and entropically patchy polygons.

    PubMed

    Millan, Jaime A; Ortiz, Daniel; van Anders, Greg; Glotzer, Sharon C

    2014-03-25

    Considerable progress in the synthesis of anisotropic patchy nanoplates (nanoplatelets) promises a rich variety of highly ordered two-dimensional superlattices. Recent experiments on superlattices assembled from nanoplates confirm the accessibility of exotic phases and motivate the need for a better understanding of the underlying self-assembly mechanisms. Here, we present experimentally accessible, rational design rules for the self-assembly of the Archimedean tilings from polygonal nanoplates. The Archimedean tilings represent a model set of target patterns that (i) contain both simple and complex patterns, (ii) are comprised of simple regular shapes, and (iii) contain patterns with potentially interesting materials properties. Via Monte Carlo simulations, we propose a set of design rules with general applicability to one- and two-component systems of polygons. These design rules, specified by increasing levels of patchiness, correspond to a reduced set of anisotropy dimensions for robust self-assembly of the Archimedean tilings. We show for which tilings entropic patches alone are sufficient for assembly and when short-range enthalpic interactions are required. For the latter, we show how patchy these interactions should be for optimal yield. This study provides a minimal set of guidelines for the design of anisotropic patchy particles that can self-assemble all 11 Archimedean tilings.

  18. Determination of cellulose I crystallinity by FT-Raman spectroscopy

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2009-01-01

    Two new methods based on FT-Raman spectroscopy, one simple, based on band intensity ratio, and the other, using a partial least-squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in semicrystalline cellulose I samples was determined based on univariate regression that was first developed using the...

  19. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layer 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge, this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067

  20. New tools for characterizing swarming systems: A comparison of minimal models

    NASA Astrophysics Data System (ADS)

    Huepe, Cristián; Aldana, Maximino

    2008-05-01

    We compare three simple models that reproduce qualitatively the emergent swarming behavior of bird flocks, fish schools, and other groups of self-propelled agents by using a new set of diagnosis tools related to the agents’ spatial distribution. Two of these correspond in fact to different implementations of the same model, which had been previously confused in the literature. All models appear to undergo a very similar order-to-disorder phase transition as the noise level is increased if we only compare the standard order parameter, which measures the degree of agent alignment. When considering our novel quantities, however, their properties are clearly distinguished, unveiling previously unreported qualitative characteristics that help determine which model best captures the main features of realistic swarms. Additionally, we analyze the agent clustering in space, finding that the distribution of cluster sizes is typically exponential at high noise, and approaches a power-law as the noise level is reduced. This trend is sometimes reversed at noise levels close to the phase transition, suggesting a non-trivial critical behavior that could be verified experimentally. Finally, we study a bi-stable regime that develops under certain conditions in large systems. By computing the probability distributions of our new quantities, we distinguish the properties of each of the coexisting metastable states. Our study suggests new experimental analyses that could be carried out to characterize real biological swarms.
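
    As a toy illustration of this class of minimal models (a Vicsek-style angular-noise variant with invented parameters, not the paper's exact implementations), the sketch below advances a population of aligning self-propelled agents and reports the standard alignment order parameter:

        # Agents align with neighbors within radius r, plus angular noise;
        # order parameter ~1 means aligned (ordered), ~0 means disordered.
        import numpy as np

        rng = np.random.default_rng(3)
        n, box, r, v0, eta = 300, 10.0, 1.0, 0.1, 0.3
        pos = rng.random((n, 2)) * box
        theta = rng.uniform(-np.pi, np.pi, n)

        for _ in range(500):
            d = pos[:, None, :] - pos[None, :, :]
            d -= box * np.round(d / box)            # minimum-image convention
            neigh = (d ** 2).sum(-1) < r ** 2       # neighbor mask (incl. self)
            mean_sin = (neigh * np.sin(theta)[None, :]).sum(1)
            mean_cos = (neigh * np.cos(theta)[None, :]).sum(1)
            theta = (np.arctan2(mean_sin, mean_cos)
                     + eta * rng.uniform(-np.pi, np.pi, n))
            pos = (pos + v0 * np.column_stack((np.cos(theta),
                                               np.sin(theta)))) % box

        order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
        print(f"alignment order parameter: {order:.2f}")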

  1. A simple integrated assessment approach to global change simulation and evaluation

    NASA Astrophysics Data System (ADS)

    Ogutu, Keroboto; D'Andrea, Fabio; Ghil, Michael

    2016-04-01

    We formulate and study the Coupled Climate-Economy-Biosphere (CoCEB) model, which constitutes the basis of our idealized integrated assessment approach to simulating and evaluating global change. CoCEB is composed of a physical climate module, based on Earth's energy balance, and an economy module that uses endogenous economic growth with physical and human capital accumulation. A biosphere model is likewise under study and will be coupled to the existing two modules. We concentrate on the interactions between the two subsystems: the effect of climate on the economy, via damage functions, and the effect of the economy on climate, via a control of the greenhouse gas emissions. Simple functional forms of the relation between the two subsystems permit simple interpretations of the coupled effects. The CoCEB model is used to make hypotheses on the long-term effect of investment in emission abatement, and on the comparative efficacy of different approaches to abatement, in particular by investing in low carbon technology, in deforestation reduction or in carbon capture and storage (CCS). The CoCEB model is very flexible and transparent, and it allows one to easily formulate and compare different functional representations of climate change mitigation policies. Using different mitigation measures and their cost estimates, as found in the literature, one is able to compare these measures in a coherent way.
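
    The CoCEB equations themselves are not reproduced in the abstract; the sketch below only illustrates the coupling structure it describes (output drives emissions, emissions force temperature, a damage function closes the loop) with invented functional forms and parameter values.

        # Toy climate-economy feedback loop in the spirit of the abstract;
        # every functional form and constant here is a placeholder.
        import numpy as np

        T, K, co2 = 0.0, 1.0, 280.0        # temp anomaly [K], capital, CO2 [ppm]
        lam, heat_cap, f2x = 1.2, 8.0, 3.7 # feedback, heat capacity, forcing/doubling
        s, delta, sigma = 0.25, 0.05, 0.8  # savings, depreciation, emission intensity
        mu = 0.3                           # abatement fraction (the policy lever)

        for _ in range(100):               # one-year steps for a century
            damage = 1.0 / (1.0 + 0.01 * T ** 2)   # fraction of output kept
            output = damage * K ** 0.3             # crude production function
            co2 += sigma * (1.0 - mu) * output     # emissions accumulate in CO2
            forcing = f2x * np.log2(co2 / 280.0)
            T += (forcing - lam * T) / heat_cap    # energy-balance step
            K += s * output - delta * K            # capital accumulation

        print(f"after 100 years: T = {T:.2f} K, CO2 = {co2:.0f} ppm, output = {output:.2f}")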

  2. Component model reduction via the projection and assembly method

    NASA Technical Reports Server (NTRS)

    Bernard, Douglas E.

    1989-01-01

    The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced order component models to meet system level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full order system model, performing model reduction at the system level using system level requirements, and then projecting the desired modes onto the components for component level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.

  3. Method of frequency dependent correlations: investigating the variability of total solar irradiance

    NASA Astrophysics Data System (ADS)

    Pelt, J.; Käpylä, M. J.; Olspert, N.

    2017-04-01

    Context. This paper contributes to the field of modeling and hindcasting of the total solar irradiance (TSI) based on different proxy data that extend further back in time than the TSI that is measured from satellites. Aims: We introduce a simple method to analyze persistent frequency-dependent correlations (FDCs) between the time series and use these correlations to hindcast missing historical TSI values. We try to avoid arbitrary choices of the free parameters of the model by computing them using an optimization procedure. The method can be regarded as a general tool for pairs of data sets, where correlating and anticorrelating components can be separated into non-overlapping regions in frequency domain. Methods: Our method is based on low-pass and band-pass filtering with a Gaussian transfer function combined with de-trending and computation of envelope curves. Results: We find a major controversy between the historical proxies and satellite-measured targets: a large variance is detected between the low-frequency parts of targets, while the low-frequency proxy behavior of different measurement series is consistent with high precision. We also show that even though the rotational signal is not strongly manifested in the targets and proxies, it becomes clearly visible in FDC spectrum. A significant part of the variability can be explained by a very simple model consisting of two components: the original proxy describing blanketing by sunspots, and the low-pass-filtered curve describing the overall activity level. The models with the full library of the different building blocks can be applied to hindcasting with a high level of confidence, Rc ≈ 0.90. The usefulness of these models is limited by the major target controversy. Conclusions: The application of the new method to solar data allows us to obtain important insights into the different TSI modeling procedures and their capabilities for hindcasting based on the directly observed time intervals.
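
    A minimal sketch of the Gaussian-transfer-function filtering the method builds on, applied to a synthetic series with a fast "rotational" component and a slow "activity" component; the cutoffs and series are placeholders, not the paper's optimized parameters or TSI data.

        # Low-pass/band-pass filtering with a Gaussian transfer function,
        # applied in the frequency domain.
        import numpy as np

        def gaussian_filter_freq(x, dt, f_center, f_width):
            """Multiply the spectrum of x by a Gaussian transfer function."""
            freqs = np.fft.rfftfreq(x.size, dt)
            transfer = np.exp(-0.5 * ((freqs - f_center) / f_width) ** 2)
            return np.fft.irfft(np.fft.rfft(x) * transfer, n=x.size)

        t = np.arange(0.0, 2000.0)                   # time in days
        rng = np.random.default_rng(4)
        x = (np.sin(2 * np.pi * t / 27.0)            # ~27-day rotational signal
             + 2.0 * np.sin(2 * np.pi * t / 800.0)   # slow activity component
             + rng.normal(0.0, 0.5, t.size))

        rotational = gaussian_filter_freq(x, 1.0, 1 / 27.0, 1 / 200.0)  # band-pass
        activity = gaussian_filter_freq(x, 1.0, 0.0, 1 / 500.0)         # low-pass
        print(f"band-pass std: {rotational.std():.2f}, "
              f"low-pass std: {activity.std():.2f}")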

  4. The Bilingual Advertising Decision.

    ERIC Educational Resources Information Center

    Grin, Francois

    1994-01-01

    Examines the relationship between linguistic plurality and the rationale of advertising decisions. The article presents a simple model of sales to different language groups as a function of the level of advertising in each language, language attitudes, incomes, and an advertising response function. The model is intended as a benchmark, and several…

  5. An Equilibrium Flow Model of a University Campus.

    ERIC Educational Resources Information Center

    Oliver, Robert M.; Hopkins, David S. P.

    This paper develops a simple deterministic model that relates student admissions and enrollments to the final demand for educated students. It includes the effects of dropout rates and student-teacher ratios on student enrollments and faculty staffing levels. Certain technological requirements are assumed known and given. These, as well as the…

  6. A robust collagen scoring method for human liver fibrosis by second harmonic microscopy.

    PubMed

    Guilbert, Thomas; Odin, Christophe; Le Grand, Yann; Gailhouste, Luc; Turlin, Bruno; Ezan, Frédérick; Désille, Yoann; Baffet, Georges; Guyader, Dominique

    2010-12-06

    Second Harmonic Generation (SHG) microscopy offers the opportunity to image collagen of type I without staining. We recently showed that a simple scoring method, based on SHG images of histological human liver biopsies, correlates well with the Metavir assessment of fibrosis level (Gailhouste et al., J. Hepatol., 2010). In this article, we present a detailed study of this new scoring method with two different objective lenses. By using measurements of the objectives' point spread functions and of the photomultiplier gain, and a simple model of the SHG intensity, we show that our scoring method, applied to human liver biopsies, is robust to the objective's numerical aperture (NA) for low NA, the choice of the reference sample and laser power, and the spatial sampling rate. The simplicity and robustness of our collagen scoring method may open new opportunities in the quantification of collagen content in different organs, which is of major importance in providing diagnostic information and evaluation of therapeutic efficiency.

  7. Modeling Hidden Circuits: An Authentic Research Experience in One Lab Period

    NASA Astrophysics Data System (ADS)

    Moore, J. Christopher; Rubbo, Louis J.

    2016-10-01

    Two wires exit a black box that has three exposed light bulbs connected together in an unknown configuration. The task for students is to determine the circuit configuration without opening the box. In the activity described in this paper, we navigate students through the process of making models, developing and conducting experiments that can support or falsify models, and confronting ways of distinguishing between two different models that make similar predictions. We also describe a twist that forces students to confront new phenomena, requiring revision of their mental model of electric circuits. This activity is designed to mirror the practice of science by actual scientists and expose students to the "messy" side of science, where our simple explanations of reality often require expansion and/or revision based on new evidence. The purpose of this paper is to present a simple classroom activity within the context of electric circuits that supports students as they learn to test hypotheses and refine and revise models based on evidence.

  8. Membrane interaction of antimicrobial peptides using E. coli lipid extract as model bacterial cell membranes and SFG spectroscopy.

    PubMed

    Soblosky, Lauren; Ramamoorthy, Ayyalusamy; Chen, Zhan

    2015-04-01

    Supported lipid bilayers are used as a convenient model cell membrane system to study biologically important molecule-lipid interactions in situ. However, the lipid bilayer models are often simple and the acquired results with these models may not provide all pertinent information related to a real cell membrane. In this work, we use sum frequency generation (SFG) vibrational spectroscopy to study molecular-level interactions between the antimicrobial peptides (AMPs) MSI-594, ovispirin-1 G18, magainin 2 and a simple 1,2-dipalmitoyl-d62-sn-glycero-3-phosphoglycerol (dDPPG)/1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoglycerol (POPG) bilayer. We compared such interactions to those between the AMPs and a more complex dDPPG/Escherichia coli (E. coli) polar lipid extract bilayer. We show that to fully understand more complex aspects of peptide-bilayer interaction, such as interaction kinetics, a heterogeneous lipid composition is required, such as the E. coli polar lipid extract. The discrepancy in peptide-bilayer interaction is likely due in part to the difference in bilayer charge between the two systems since highly negative charged lipids can promote more favorable electrostatic interactions between the peptide and lipid bilayer. Results presented in this paper indicate that more complex model bilayers are needed to accurately analyze peptide-cell membrane interactions and demonstrates the importance of using an appropriate lipid composition to study AMP interaction properties.

  9. Cryogenic Liquid Level Sensor Apparatus and Method

    NASA Technical Reports Server (NTRS)

    Parker, Allen R., Jr. (Inventor); Richards, W. Lance (Inventor); Piazza, Anthony (Inventor); Man, Hon Chan (Inventor); Bakalyar, John A. (Inventor)

    2015-01-01

    The invention proposed herein is a system and method for measuring the liquid level in a container; it employs an optical fiber sensor heated by a simple power source and wire, making an anemometry measurement. The heater wire is cycled between two levels of heat, and the liquid level is obtained by measuring the heat transfer characteristics of the surrounding environment.

  10. The 100-year flood seems to be changing. Can we really tell?

    NASA Astrophysics Data System (ADS)

    Ceres, R. L., Jr.; Forest, C. E.; Keller, K.

    2017-12-01

    Widespread flooding from Hurricane Harvey greatly exceeded the Federal Emergency Management Agency's 100-year flood levels. In the US, this flood level is often used as an important line of demarcation where areas above this level are considered safe, while areas below the line are at risk and require additional flood risk mitigation. In the wake of Harvey's damage, the US media has highlighted at least two important questions. First, has the 100-year flood level changed? Second, is the 100-year flood level a good metric for determining flood risk? To address the first question, we use an Observation System Simulation Experiment of storm surge flood levels and find that gradual changes to the 100-year storm surge level may not be reliably detected over the long lifespans expected of major flood risk mitigation strategies. Additionally, we find that common extreme value analysis models lead to biased results and additional uncertainty when incorrect assumptions are used for the underlying statistical model. These incorrect assumptions can lead to examples of negative learning. Addressing the second question, these findings further challenge the validity of using simple return levels such as the 100-year flood as a decision tool for assessing flood risk. These results indicate risk management strategies must account for such uncertainties to build resilient and robust planning tools that stakeholders desperately need.
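
    A minimal sketch of the kind of extreme value analysis at issue: fitting a generalized extreme value (GEV) distribution to synthetic annual maxima, reading off the 100-year return level, and bootstrapping to show how noisy that estimate is for a short record. All numbers are invented, not the study's storm-surge experiment.

        # Fit a GEV to 50 synthetic annual maxima and estimate the level
        # exceeded with probability 1/100 per year; resample to see spread.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(5)
        annual_max = genextreme.rvs(c=-0.1, loc=3.0, scale=0.5,
                                    size=50, random_state=5)  # 50 years [m]

        c, loc, scale = genextreme.fit(annual_max)
        level_100 = genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)
        print(f"estimated 100-year level: {level_100:.2f} m")

        # Refit resampled records to see the spread a planner would face.
        boot = [genextreme.ppf(0.99, *genextreme.fit(
                    rng.choice(annual_max, annual_max.size)))
                for _ in range(200)]
        print(f"bootstrap 5-95% range: {np.percentile(boot, 5):.2f}"
              f" to {np.percentile(boot, 95):.2f} m")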

  11. Identifying habitats at risk: simple models can reveal complex ecosystem dynamics.

    PubMed

    Maxwell, Paul S; Pitt, Kylie A; Olds, Andrew D; Rissik, David; Connolly, Rod M

    2015-03-01

    The relationship between ecological impact and ecosystem structure is often strongly nonlinear, so that small increases in impact levels can cause a disproportionately large response in ecosystem structure. Nonlinear ecosystem responses can be difficult to predict because locally relevant data sets can be difficult or impossible to obtain. Bayesian networks (BN) are an emerging tool that can help managers to define ecosystem relationships using a range of data types from comprehensive quantitative data sets to expert opinion. We show how a simple BN can reveal nonlinear dynamics in seagrass ecosystems using ecological relationships sourced from the literature. We first developed a conceptual diagram by cataloguing the ecological responses of seagrasses to a range of drivers and impacts. We used the conceptual diagram to develop a BN populated with values sourced from published studies. We then applied the BN to show that the amount of initial seagrass biomass has a mitigating effect on the level of impact a meadow can withstand without loss, and that meadow recovery can often require disproportionately large improvements in impact levels. This mitigating effect resulted in the middle ranges of impact levels having a wide likelihood of seagrass presence, a situation known as bistability. Finally, we applied the model in a case study to identify the risk of loss and the likelihood of recovery for the conservation and management of seagrass meadows in Moreton Bay, Queensland, Australia. We used the model to predict the likelihood of bistability in 23 locations in the Bay. The model predicted bistability in seven locations, most of which have experienced seagrass loss at some stage in the past 25 years, providing essential information for potential future restoration efforts. Our results demonstrate the capacity of simple, flexible modeling tools to facilitate collation and synthesis of disparate information. This approach can be adopted in the initial stages of conservation programs as a low-cost and relatively straightforward way to provide preliminary assessments of nonlinear dynamics in ecosystems.

  12. CALIBRATION OF EQUILIBRIUM TIDE THEORY FOR EXTRASOLAR PLANET SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Brad M. S., E-mail: hansen@astro.ucla.ed

    2010-11-01

    We provide an 'effective theory' of tidal dissipation in extrasolar planet systems by empirically calibrating a model for the equilibrium tide. The model is valid to high order in eccentricity and parameterized by two constants of bulk dissipation: one for dissipation in the planet and one for dissipation in the host star. We are able to consistently describe the distribution of extrasolar planetary systems in terms of period, eccentricity, and mass (with a lower limit of a Saturn mass) with this simple model. Our model is consistent with the survival of short-period exoplanet systems, but not with the circularization period of equal mass stellar binaries, suggesting that the latter systems experience a higher level of dissipation than exoplanet host stars. Our model is also not consistent with the explanation of inflated planetary radii as resulting from tidal dissipation. The paucity of short-period planets around evolved A stars is explained as the result of enhanced tidal inspiral resulting from the increase in stellar radius with evolution.

  13. 3D Printing of Plant Golgi Stacks from Their Electron Tomographic Models.

    PubMed

    Mai, Keith Ka Ki; Kang, Madison J; Kang, Byung-Ho

    2017-01-01

    Three-dimensional (3D) printing is an effective tool for preparing tangible 3D models from computer visualizations to assist in scientific research and education. With the recent popularization of 3D printing processes, it is now possible for individual laboratories to convert their scientific data into a physical form suitable for presentation or teaching purposes. Electron tomography is an electron microscopy method by which 3D structures of subcellular organelles or macromolecular complexes are determined at nanometer-level resolutions. Electron tomography analyses have revealed the convoluted membrane architectures of Golgi stacks, chloroplasts, and mitochondria. But the intricacy of their 3D organizations is difficult to grasp from tomographic models illustrated on computer screens. Despite the rapid development of 3D printing technologies, production of organelle models based on experimental data with 3D printing has rarely been documented. In this chapter, we present a simple guide to creating 3D prints of electron tomographic models of plant Golgi stacks using the two most accessible 3D printing technologies.

  14. Simple energy balance model resolving the seasons and the continents - Application to the astronomical theory of the ice ages

    NASA Technical Reports Server (NTRS)

    North, G. R.; Short, D. A.; Mengel, J. G.

    1983-01-01

    An analysis is undertaken of the properties of a one-level seasonal energy balance climate model having explicit, two-dimensional land-sea geography, where land and sea surfaces are strictly distinguished by the local thermal inertia employed and transport is governed by a smooth, latitude-dependent diffusion mechanism. Solutions of the seasonal cycle for the cases of both ice feedback exclusion and inclusion yield good agreement with real data, using minimal tuning of the adjustable parameters. Discontinuous icecap growth is noted for both a solar constant that is lower by a few percent and a change of orbital elements to favor cool Northern Hemisphere summers. This discontinuous sensitivity is discussed in the context of the Milankovitch theory of the ice ages, and the associated branch structure is shown to be analogous to the 'small ice cap' instability of simpler models.

  15. What Is a Simple Liquid?

    NASA Astrophysics Data System (ADS)

    Ingebrigtsen, Trond S.; Schrøder, Thomas B.; Dyre, Jeppe C.

    2012-01-01

    This paper is an attempt to identify the real essence of simplicity of liquids in John Locke’s understanding of the term. Simple liquids are traditionally defined as many-body systems of classical particles interacting via radially symmetric pair potentials. We suggest that a simple liquid should be defined instead by the property of having strong correlations between virial and potential-energy equilibrium fluctuations in the NVT ensemble. There is considerable overlap between the two definitions, but also some notable differences. For instance, in the new definition simplicity is not a direct property of the intermolecular potential because a liquid is usually only strongly correlating in part of its phase diagram. Moreover, not all simple liquids are atomic (i.e., with radially symmetric pair potentials) and not all atomic liquids are simple. The main part of the paper motivates the new definition of liquid simplicity by presenting evidence that a liquid is strongly correlating if and only if its intermolecular interactions may be ignored beyond the first coordination shell (FCS). This is demonstrated by NVT simulations of the structure and dynamics of several atomic and three molecular model liquids with a shifted-forces cutoff placed at the first minimum of the radial distribution function. The liquids studied are inverse power-law systems (r^-n pair potentials with n = 18, 6, 4), Lennard-Jones (LJ) models (the standard LJ model, two generalized Kob-Andersen binary LJ mixtures, and the Wahnstrom binary LJ mixture), the Buckingham model, the Dzugutov model, the LJ Gaussian model, the Gaussian core model, the Hansen-McDonald molten salt model, the Lewis-Wahnstrom ortho-terphenyl model, the asymmetric dumbbell model, and the single-point charge water model. The final part of the paper summarizes properties of strongly correlating liquids, emphasizing that these are simpler than liquids in general. Simple liquids, as defined here, may be characterized in three quite different ways: (1) chemically by the fact that the liquid’s properties are fully determined by interactions from the molecules within the FCS, (2) physically by the fact that there are isomorphs in the phase diagram, i.e., curves along which several properties like excess entropy, structure, and dynamics, are invariant in reduced units, and (3) mathematically by the fact that throughout the phase diagram the reduced-coordinate constant-potential-energy hypersurfaces define a one-parameter family of compact Riemannian manifolds. No proof is given that the chemical characterization follows from the strong correlation property, but we show that this FCS characterization is consistent with the existence of isomorphs in strongly correlating liquids’ phase diagram. Finally, we note that the FCS characterization of simple liquids calls into question the physical basis of standard perturbation theory, according to which the repulsive and attractive forces play fundamentally different roles for the physics of liquids.
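
    A minimal sketch of the proposed criterion: the virial/potential-energy correlation coefficient R computed from NVT fluctuation series. Real input would come from a simulation; here synthetic series stand in, including the exact W = (n/3)U proportionality of an inverse-power-law fluid. The ~0.9 cutoff in the comments is the threshold commonly used in this literature, not a number stated above.

        # Correlation coefficient R between virial W and potential energy U
        # fluctuations; R close to 1 marks a "strongly correlating" liquid.
        import numpy as np

        rng = np.random.default_rng(6)
        u = rng.normal(0.0, 1.0, 100000)           # potential-energy fluctuations
        w_ipl = 6.0 * u                            # exact IPL scaling, n = 18
        w_weak = 6.0 * u + rng.normal(0.0, 6.0, u.size)  # weakly correlating

        def corr(w, u):
            dw, du = w - w.mean(), u - u.mean()
            return (dw * du).mean() / np.sqrt((dw**2).mean() * (du**2).mean())

        print(f"R (IPL-like): {corr(w_ipl, u):.3f}")   # -> 1.0, simple liquid
        print(f"R (weak):     {corr(w_weak, u):.3f}")  # below the ~0.9 criterion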

  16. Topics in Complexity: Dynamical Patterns in the Cyberworld

    NASA Astrophysics Data System (ADS)

    Qi, Hong

    Quantitative understanding of mechanisms in complex systems is a common "difficult" problem across many fields, including the physical, biological, social, and economic sciences. Investigation of the underlying dynamics of complex systems and the building of individual-based models have recently been fueled by the big data resulting from advancing information technology. This thesis investigates complex systems in social science, focusing on civil unrest in the streets and the associated activity online. The investigation consists of collecting unrest data from open digital sources, characterizing the underlying dynamical patterns, making predictions, and constructing models. A simple law governing the progress of two-sided confrontations is proposed using micro-level activity data. Unraveling the connections between organizing activity online and the outburst of unrest in the streets gives rise to a further meso-level pattern of human behavior, through which adversarial groups evolve online and hyper-escalate ahead of real-world uprisings. Based on the patterns found, a noticeable improvement in the prediction of civil unrest is achieved. Meanwhile, a novel model combining mobility dynamics in the cyberworld with a traditional contagion model can better capture the characteristics of modern civil unrest and other contagion-like phenomena than the original model.

  17. Identifying Factors that Influence State-Specific Hunger Rates in the U.S.: A Simple Analytic Method for Understanding a Persistent Problem

    ERIC Educational Resources Information Center

    Edwards, Mark Evan; Weber, Bruce; Bernell, Stephanie

    2007-01-01

    An existing measure of food insecurity with hunger in the United States may serve as an effective indicator of quality of life. State level differences in that measure can reveal important differences in quality of life across places. In this study, we advocate and demonstrate two simple methods by which analysts can explore state-specific…

  18. An assessment on convective and radiative heat transfer modelling in tubular solid oxide fuel cells

    NASA Astrophysics Data System (ADS)

    Sánchez, D.; Muñoz, A.; Sánchez, T.

    Four models of convective and radiative heat transfer inside tubular solid oxide fuel cells are presented in this paper, all of them applicable to multidimensional simulations. The work is aimed at assessing whether it is necessary to use a very detailed and complicated model to simulate heat transfer inside this kind of device and, for those cases where simple models can be used, the errors are estimated and compared to those of the more complex models. For convective heat transfer, two models are presented. One of them accounts for the variation of the film coefficient as a function of local temperature and composition. This model gives a local value for the heat transfer coefficients and establishes the thermal entry length. The second model employs an average value of the transfer coefficient, which is applied to the whole length of the duct being studied. It is concluded that, unless there is a need to calculate local temperatures, a simple model can be used to evaluate the global performance of the cell with satisfactory accuracy. For radiative heat transfer, two models are again presented. One of them considers radial radiation exclusively and thus neglects radiative exchange between adjacent cells. The second model accounts for radiation in all directions but substantially increases the complexity of the problem. In this case, it is concluded that deviations between the two models are larger than for convection. In fact, using a simple model can lead to a non-negligible underestimation of the temperature of the cell.

  19. Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.

    PubMed

    Leotta, Matthew J; Mundy, Joseph L

    2011-07-01

    In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.

  20. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    PubMed

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
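
    A minimal sketch of the two circuit ingredients named above, random feedforward connectivity plus a simple Hebbian update, run on fabricated stimuli; this is not the paper's fitted model, and the selectivity index at the end is an ad hoc summary chosen only for illustration.

        # Random feedforward weights, then a Hebbian rule (delta-W
        # proportional to presynaptic times postsynaptic activity) with
        # row normalization to keep weights bounded.
        import numpy as np

        rng = np.random.default_rng(7)
        n_in, n_cells = 20, 200
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_cells, n_in))

        # Four task conditions (e.g., combinations of two binary task
        # variables), each encoded by a fabricated input pattern.
        patterns = rng.normal(0.0, 1.0, (4, n_in))

        def responses(W):
            return np.maximum(W @ patterns.T, 0.0)   # ReLU population responses

        def selectivity(r):
            # Ad hoc index: concentration of each cell's response on one condition
            return ((r.max(1) - r.mean(1)) / (r.max(1) + r.mean(1) + 1e-9)).mean()

        before = selectivity(responses(W))

        eta = 0.01
        for _ in range(200):                         # repeated task exposure
            for x in patterns:
                post = np.maximum(W @ x, 0.0)
                W += eta * np.outer(post, x)         # Hebbian: pre times post
                W /= np.linalg.norm(W, axis=1, keepdims=True)

        print(f"selectivity before: {before:.2f}, "
              f"after: {selectivity(responses(W)):.2f}")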

  2. Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achtemeier, Gary, L.; Goodrick, Scott, A.; Liu, Yongqiang

    2011-08-19

    We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure, including multiple-core updrafts, which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric environment, multiple-core updrafts, and detrainment of particulate matter. The number of empirical coefficients appearing in the model theory is reduced through a sensitivity analysis with the Fourier Amplitude Sensitivity Test (FAST). Daysmoke simulations for 'bent-over' plumes compare closely with Briggs theory, although the two-thirds law is not explicit in Daysmoke. However, the solutions for the 'highly-tilted' plume characterized by weak buoyancy, low initial vertical velocity, and large initial plume diameter depart considerably from Briggs theory. Results from a study of weak plumes from prescribed burns at Fort Benning, GA, showed simulated ground-level PM2.5 comparing favorably with observations taken within the first eight kilometers of eleven prescribed burns. Daysmoke placed plume tops near the lower end of the range of observed plume tops for six prescribed burns. Daysmoke provides the levels and amounts of smoke injected into regional-scale air quality models. Results from CMAQ with and without an adaptive grid are presented.
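
    For reference, the classic Briggs "two-thirds law" for a bent-over buoyant plume, the benchmark the abstract compares Daysmoke against, takes the form sketched below. The buoyancy flux and wind speed are made-up example values, not Daysmoke inputs.

```python
def briggs_rise(F, u, x):
    """Plume rise (m) at downwind distance x (m) for buoyancy flux
    F (m^4 s^-3) and mean wind speed u (m s^-1):
    dh = 1.6 * F**(1/3) * x**(2/3) / u."""
    return 1.6 * F ** (1.0 / 3.0) * x ** (2.0 / 3.0) / u

F, u = 500.0, 5.0                       # example burn plume and transport wind
for x in (500.0, 1000.0, 2000.0):
    print(f"x = {x:6.0f} m  ->  rise = {briggs_rise(F, u, x):5.0f} m")
```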

  3. Topological nodal superconducting phases and topological phase transition in the hyperhoneycomb lattice

    NASA Astrophysics Data System (ADS)

    Bouhon, Adrien; Schmidt, Johann; Black-Schaffer, Annica M.

    2018-03-01

    We establish the topology of the spin-singlet superconducting states in the bare hyperhoneycomb lattice, and we derive analytically the full phase diagram using only symmetry and topology in combination with simple energy arguments. The phase diagram is dominated by two states preserving time-reversal symmetry. We find a line-nodal state dominating at low doping levels that is topologically nontrivial and exhibits surface Majorana flatbands, which we show perfectly match the bulk-boundary correspondence using the Berry phase approach. At higher doping levels, we find a fully gapped state with trivial topology. By analytically calculating the topological invariant of the nodal lines, we derive the critical point between the line-nodal and fully gapped states as a function of both pairing parameters and doping. We find that the line-nodal state is favored not only at lower doping levels but also if symmetry-allowed deformations of the lattice are present. Adding simple energy arguments, we establish that a fully gapped state with broken time-reversal symmetry likely appears covering the actual phase transition. We find this fully gapped state to be topologically trivial, while we find an additional point-nodal state at very low doping levels that also breaks time-reversal symmetry and has nontrivial topology with associated Fermi surface arcs. Finally, we address the robustness of the phase diagram in generalized models that include adiabatic spin-orbit coupling, and we show that all but the point-nodal state are reasonably stable.

  4. A Simplified Technique for Scoring DSM-IV Personality Disorders with the Five-Factor Model

    ERIC Educational Resources Information Center

    Miller, Joshua D.; Bagby, R. Michael; Pilkonis, Paul A.; Reynolds, Sarah K.; Lynam, Donald R.

    2005-01-01

    The current study compares the use of two alternative methodologies for using the Five-Factor Model (FFM) to assess personality disorders (PDs). Across two clinical samples, a technique using the simple sum of selected FFM facets is compared with a previously used prototype matching technique. The results demonstrate that the more easily…

  5. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed into another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  6. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near cloud, suggesting the physics of this model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  7. Analyzing Longitudinal Data with Multilevel Models: An Example with Individuals Living with Lower Extremity Intra-articular Fractures

    PubMed Central

    Kwok, Oi-Man; Underhill, Andrea T.; Berry, Jack W.; Luo, Wen; Elliott, Timothy R.; Yoon, Myeongsun

    2008-01-01

    The use and quality of longitudinal research designs has increased over the past two decades, and new approaches for analyzing longitudinal data, including multi-level modeling (MLM) and latent growth modeling (LGM), have been developed. The purpose of this paper is to demonstrate the use of MLM and its advantages in analyzing longitudinal data. Data from a sample of individuals with intra-articular fractures of the lower extremity from the University of Alabama at Birmingham’s Injury Control Research Center is analyzed using both SAS PROC MIXED and SPSS MIXED. We start our presentation with a discussion of data preparation for MLM analyses. We then provide example analyses of different growth models, including a simple linear growth model and a model with a time-invariant covariate, with interpretation for all the parameters in the models. More complicated growth models with different between- and within-individual covariance structures and nonlinear models are discussed. Finally, information related to MLM analysis such as online resources is provided at the end of the paper. PMID:19649151
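
    A minimal Python analogue of the growth models described, using statsmodels rather than SAS PROC MIXED or SPSS MIXED; the simulated long-format data, column names, and effect sizes below are placeholders, not the Injury Control Research Center data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per person per measurement occasion.
rng = np.random.default_rng(0)
n, t = 60, 5
ids = np.repeat(np.arange(n), t)
time = np.tile(np.arange(t), n)
u0 = rng.normal(0, 1.0, n)[ids]           # person-level random intercepts
u1 = rng.normal(0, 0.3, n)[ids]           # person-level random slopes
score = 10 + 1.5 * time + u0 + u1 * time + rng.normal(0, 0.5, n * t)
df = pd.DataFrame({"id": ids, "time": time, "score": score})

# Simple linear growth model: random intercept and random slope for time.
# A time-invariant covariate would enter the fixed part, e.g.
# "score ~ time + age + time:age".
model = smf.mixedlm("score ~ time", data=df, groups=df["id"], re_formula="~time")
print(model.fit().summary())
```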

  8. Hierarchical lattice models of hydrogen-bond networks in water

    NASA Astrophysics Data System (ADS)

    Dandekar, Rahul; Hassanali, Ali A.

    2018-06-01

    We develop a graph-based model of the hydrogen-bond network in water, with a view toward quantitatively modeling the molecular-level correlational structure of the network. The networks formed are studied by constructing the model on two infinite-dimensional lattices. Our models are built bottom up, based on microscopic information coming from atomistic simulations, and we show that the predictions of the model are consistent with known results from ab initio simulations of liquid water. We show that simple entropic models can predict the correlations and clustering of local-coordination defects around tetrahedral waters observed in the atomistic simulations. We also find that orientational correlations between bonds are longer ranged than density correlations, determine the directional correlations within closed loops, and show that the patterns of water wires within these structures are also consistent with previous atomistic simulations. Our models show the existence of density and compressibility anomalies, as seen in the real liquid, and the phase diagram of these models is consistent with the singularity-free scenario previously proposed by Sastry and coworkers [Phys. Rev. E 53, 6144 (1996), 10.1103/PhysRevE.53.6144].

  9. An admissible level $\widehat{osp}(1|2)$-model: modular transformations and the Verlinde formula

    NASA Astrophysics Data System (ADS)

    Snadden, John; Ridout, David; Wood, Simon

    2018-05-01

    The modular properties of the simple vertex operator superalgebra associated with the affine Kac-Moody superalgebra \\widehat{{osp}} (1|2) at level -5/4 are investigated. After classifying the relaxed highest-weight modules over this vertex operator superalgebra, the characters and supercharacters of the simple weight modules are computed and their modular transforms are determined. This leads to a complete list of the Grothendieck fusion rules by way of a continuous superalgebraic analog of the Verlinde formula. All Grothendieck fusion coefficients are observed to be non-negative integers. These results indicate that the extension to general admissible levels will follow using the same methodology once the classification of relaxed highest-weight modules is completed.

  10. Comparison of three GIS-based models for predicting rockfall runout zones at a regional scale

    NASA Astrophysics Data System (ADS)

    Dorren, Luuk K. A.; Seijmonsbergen, Arie C.

    2003-11-01

    Site-specific information about the level of protection that mountain forests provide is often not available for large regions. Information regarding rockfalls is especially scarce. The most efficient way to obtain information about rockfall activity and the efficacy of protection forests at a regional scale is to use a simulation model. At present, it is still unknown which forest parameters could be incorporated best in such models. The purpose of this study was therefore to test and evaluate a model for rockfall assessment at a regional scale in which simple forest stand parameters, such as the number of trees per hectare and the diameter at breast height, are incorporated. To this end, a newly developed Geographical Information System (GIS)-based distributed model is compared with two existing rockfall models. The developed model is the only model that calculates the rockfall velocity on the basis of energy loss due to collisions with trees and on the soil surface. The two existing models calculate energy loss over the distance between two cell centres, while the newly developed model is able to calculate multiple bounces within a pixel. The patterns of rockfall runout zones produced by the three models are compared with patterns of rockfall deposits derived from geomorphological field maps. Furthermore, the rockfall velocities modelled by the three models are compared. It is found that the models produced rockfall runout zone maps with rather similar accuracies. However, the developed model performs best on forested hillslopes and it also produces velocities that match best with field estimates on both forested and nonforested hillslopes irrespective of the slope gradient.

  11. The apparent solubility of aluminum (III) in Hanford high-level waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, Jacob G.

    2012-12-01

    The solubility of aluminum in Hanford nuclear waste affects the processability of the waste by a number of proposed treatment options. For many years, Hanford staff have anecdotally noted that aluminum appears to be considerably more soluble in Hanford waste than in the simpler electrolyte solutions used as analogues. There has been minimal scientific study to confirm these anecdotal observations, however. The present study determines the apparent solubility product for gibbsite in 50 tank samples. The ratio of hydroxide to aluminum in the liquid phase for the samples is calculated and plotted as a function of total sodium molarity. Total sodium molarity is used as a surrogate for ionic strength, because the relative ratios of mono-, di- and trivalent anions are not available for all of the samples. These results were compared to data for the simple NaOH-NaAl(OH)4-H2O system and the NaOH-NaAl(OH)4-NaCl-H2O system retrieved from the literature. The results show that gibbsite is apparently more soluble in the samples than in the simple systems whenever the sodium molarity is greater than two. This apparent enhanced solubility cannot be explained solely by differences in ionic strength: the change in solubility with ionic strength in simple systems is small compared to the difference between aluminum solubility in Hanford waste and the simple systems. The reason for the apparent enhanced solubility is unknown, but could include kinetic or thermodynamic factors that are not present in the simple electrolyte systems. Any kinetic explanation would have to explain why the samples are always supersaturated whenever the sodium molarity is above two. Real waste characterization data should not be used to validate thermodynamic solubility models until it can be confirmed that the apparent enhanced gibbsite solubility is a thermodynamic effect and not a kinetic effect.

  12. Simple model to estimate the contribution of atmospheric CO2 to the Earth's greenhouse effect

    NASA Astrophysics Data System (ADS)

    Wilson, Derrek J.; Gea-Banacloche, Julio

    2012-04-01

    We show how the CO2 contribution to the Earth's greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the "climate sensitivity" (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere's temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
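
    The no-feedback "climate sensitivity" the abstract refers to can be reproduced to back-of-envelope accuracy with the standard Planck-response estimate below; this is a textbook shortcut, not the paper's spectroscopic calculation.

```python
# Back-of-envelope, no-feedback climate sensitivity. The forcing from doubling
# CO2 is taken as dF = 5.35 * ln(2) W m^-2, and the Planck response is
# linearized about the effective emission temperature ~255 K.
import math

sigma = 5.67e-8                            # Stefan-Boltzmann, W m^-2 K^-4
T_eff = 255.0                              # effective emission temperature, K

dF = 5.35 * math.log(2.0)                  # ~3.7 W m^-2 for CO2 doubling
planck_response = 4 * sigma * T_eff ** 3   # ~3.8 W m^-2 K^-1

dT = dF / planck_response
print(f"No-feedback sensitivity: {dT:.2f} K per CO2 doubling")  # ~1 K
```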

  13. Exploiting temporal gradients of antibiotic concentration against the emergence of resistance

    NASA Astrophysics Data System (ADS)

    Bauer, Marianne; Ngampruetikorn, Vudtiwat; Frey, Erwin; Stephens, Greg

    A very simple model for antibiotic resistance - involving one normal and one more resistant species interacting indirectly through a carrying capacity - shows that the temporal variation of the antibiotic concentration can alter the antibiotic's effectiveness. For a single antibiotic pulse, we find that for different minimal inhibitory concentrations of the two species an optimal pulse shape may exist, which increases the likelihood of bacterial extinction. For a long series of pulses, efficiency does not vary monotonically with the length of the gap between two individual pulses; instead, the gap length can be optimised by exploiting the competition between the two species. Finally, a series of pulses is not always more efficient than a single pulse. Shorter pulses may be more efficient in an initial time window without risking population-level resistance. We elucidate this behaviour with a phase diagram, and discuss the meaning of this work for current experiments.
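
    A sketch of the kind of two-species pulse model described above: a sensitive and a more resistant population compete indirectly through a shared carrying capacity, and an antibiotic pulse adds a time-dependent death rate that hits the sensitive strain harder. The logistic competition form, kill rates, and pulse timing are illustrative assumptions, not the paper's equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0                      # shared carrying capacity
r = 1.0                      # growth rate of both strains
kill_s, kill_r = 4.0, 0.5    # pulse kill rates (sensitive vs. resistant)

def pulse(t, t_on=5.0, t_off=8.0):
    # Single rectangular antibiotic pulse.
    return 1.0 if t_on <= t <= t_off else 0.0

def rhs(t, y):
    s, m = y                 # sensitive and resistant densities
    total = s + m
    ds = r * s * (1 - total / K) - kill_s * pulse(t) * s
    dm = r * m * (1 - total / K) - kill_r * pulse(t) * m
    return [ds, dm]

sol = solve_ivp(rhs, (0.0, 20.0), [0.99, 0.01], max_step=0.01)
print("final densities (sensitive, resistant):", sol.y[:, -1])
```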

  14. Analytical study of nano-scale logical operations

    NASA Astrophysics Data System (ADS)

    Patra, Moumita; Maiti, Santanu K.

    2018-07-01

    A complete analytical prescription is given to perform three basic (OR, AND, NOT) and two universal (NAND, NOR) logic gates at the nano-scale level using simple tailor-made geometries. Two different geometries, ring-like and chain-like, are taken into account, where in each case the bridging conductor is coupled to a local atomic site through a dangling bond whose site energy can be controlled by means of an external gate electrode. The main idea is that when the energy of an injected electron matches the site energy of the local atomic site, the transmission probability drops exactly to zero, whereas the junction exhibits finite transmission at other energies. Utilizing this prescription we perform logical operations, and we strongly believe that the proposed results can be verified in the laboratory. Finally, we numerically compute the two-terminal transmission probability for general models, and the numerical results match our analytical findings exactly.

  15. Predicting solar radiation based on available weather indicators

    NASA Astrophysics Data System (ADS)

    Sauer, Frank Joseph

    Solar radiation prediction models are complex and require software that is not available to the household investor. The processing power within a normal desktop or laptop computer is sufficient to calculate similar models. This barrier to entry for the average consumer can be removed by a model simple enough to be calculated by hand if necessary. Solar radiation has historically been difficult to predict, and accurate models carry significant assumptions and restrictions on their use. Previous methods have been limited to linear relationships, location restrictions, or input data limited to one atmospheric condition. This research takes a novel approach by combining two techniques within the computational limits of a household computer: clustering and Hidden Markov Models (HMMs). Clustering helps limit the large observation space that restricts the use of HMMs. Instead of using continuous data, and requiring significantly increased computation, the cluster can be used as a qualitative descriptor of each observation. HMMs incorporate a level of uncertainty and take into account the indirect relationship between meteorological indicators and solar radiation. This reduces the complexity of the model enough to be simply understood and accessible to the average household investor. The solar radiation is considered to be an unobservable state that each household will be unable to measure. The high temperature and the sky coverage are already available through the local or preferred source of weather information. By using the next day's prediction for high temperature and sky coverage, the model groups the data and then predicts the most likely range of radiation. This model uses simple techniques and calculations to give a broad estimate of the solar radiation where no other universal model exists for the average household.
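
    A hedged sketch of the proposed two-stage pipeline: cluster the weather indicators into discrete observation symbols, then estimate a small discrete HMM over binned radiation states by counting. All data, bin counts, and the forecast symbol below are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
days = 500
weather = rng.normal(size=(days, 2))           # [high temp, sky cover], toy data
radiation_bin = rng.integers(0, 3, size=days)  # historical radiation, 3 bins (toy)

# 1. Cluster the weather indicators into discrete observation symbols.
n_obs = 4
obs = KMeans(n_clusters=n_obs, n_init=10, random_state=0).fit_predict(weather)

# 2. Count-based estimates of the HMM matrices (with add-one smoothing).
n_states = 3
A = np.ones((n_states, n_states))              # transitions between radiation bins
B = np.ones((n_states, n_obs))                 # emissions: obs symbol given bin
for t in range(1, days):
    A[radiation_bin[t - 1], radiation_bin[t]] += 1
    B[radiation_bin[t], obs[t]] += 1
A /= A.sum(axis=1, keepdims=True)
B /= B.sum(axis=1, keepdims=True)

# 3. One-step prediction: given yesterday's belief and tomorrow's forecast
#    (mapped to its cluster symbol), pick the most likely radiation bin.
belief = np.full(n_states, 1.0 / n_states)
tomorrow_symbol = 2                            # hypothetical forecast cluster
posterior = B[:, tomorrow_symbol] * (belief @ A)
print("most likely radiation bin:", int(np.argmax(posterior)))
```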

  16. The effect of resistance level and stability demands on recruitment patterns and internal loading of spine in dynamic flexion and extension using a simple trunk model.

    PubMed

    Zeinali-Davarani, Shahrokh; Shirazi-Adl, Aboulfazl; Dariush, Behzad; Hemami, Hooshang; Parnianpour, Mohamad

    2011-07-01

    The effects of external resistance on the recruitment of trunk muscles in sagittal movements and the coactivation mechanism to maintain spinal stability were investigated using a simple computational model of iso-resistive spine sagittal movements. Neural excitation of muscles was attained based on an inverse dynamics approach along with a stability-based optimisation. The trunk flexion and extension movements between 60° flexion and the upright posture against various resistance levels were simulated. Incorporation of the stability constraint in the optimisation algorithm required higher antagonistic activities for all resistance levels, mostly close to the upright position. Extension movements showed higher coactivation with higher resistance, whereas flexion movements demonstrated lower coactivation, indicating a greater stability demand in backward extension movements against higher resistance in the neighbourhood of the upright posture. Optimal extension profiles based on minimum jerk, work and power had distinct kinematic profiles, which led to recruitment patterns with different timing and amplitude of activation.

  17. Application of a Simple Model to Predict Environmental Radionuclide Levels and Consequential Dose Rates on the South Welsh Coast, U.K.

    NASA Astrophysics Data System (ADS)

    Halliwell, C. M.; McKay, W. A.

    1994-02-01

    The impact of liquid effluent discharges, from both existing nuclear power stations and from a possible future pressurized water reactor (PWR), on the levels of radioactivity in Welsh Severn coastal waters has been addressed in this study through the use of a simple box model. If a PWR were in operation at Hinkley Point, and assuming that the existing discharges into the estuary remained the same as in 1989, the levels of the most radiologically significant radionuclide, 137Cs, in seawater along the Welsh shoreline are predicted to increase by 7% (inner estuary), 7% (Welsh outer estuary) and 5% (inner channel), and in sediment by 0.3, 1.3 and 2% respectively. The radiation dose rate from 137Cs to members of the coastal population alone would show only a marginal increase due to these changes, and would remain less than 1% of the internationally recognized limit.

  18. A three-layer model of natural image statistics.

    PubMed

    Gutmann, Michael U; Hyvärinen, Aapo

    2013-11-01

    An important property of visual systems is to be simultaneously both selective to specific patterns found in the sensory input and invariant to possible variations. Selectivity and invariance (tolerance) are opposing requirements. It has been suggested that they could be joined by iterating a sequence of elementary selectivity and tolerance computations. It is, however, unknown what should be selected or tolerated at each level of the hierarchy. We approach this issue by learning the computations from natural images. We propose and estimate a probabilistic model of natural images that consists of three processing layers. Two natural image data sets are considered: image patches, and complete visual scenes downsampled to the size of small patches. For both data sets, we find that in the first two layers, simple and complex cell-like computations are performed. In the third layer, we mainly find selectivity to longer contours; for patch data, we further find some selectivity to texture, while for the downsampled complete scenes, some selectivity to curvature is observed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Experimental and numerical analysis of pre-compressed masonry walls in two-way-bending with second order effects

    NASA Astrophysics Data System (ADS)

    Milani, Gabriele; Olivito, Renato S.; Tralli, Antonio

    2014-10-01

    The buckling behavior of slender unreinforced masonry (URM) walls subjected to axial compression and out-of-plane lateral loads is investigated through a combined experimental and numerical homogenized approach. After a preliminary analysis performed on a unit cell meshed by means of elastic FEs and non-linear interfaces, the macroscopic moment-curvature diagrams so obtained are implemented at a structural level, discretizing masonry by means of rigid triangular elements and non-linear interfaces. The non-linear incremental response of the structure is computed by a specific quadratic programming routine. In parallel, a wide experimental campaign is conducted on walls in two-way bending, with the twofold aim of validating the numerical model and investigating the behavior of walls that may not be reduced to simple cantilevers or simply supported beams. The panels investigated are dry-joint, in-scale square walls simply supported at the base and on a vertical edge, exhibiting the classical Rondelet's mechanism. The results obtained are compared with those provided by the numerical model.

  20. MAPPING ANNUAL MEAN GROUND-LEVEL PM2.5 CONCENTRATIONS USING MULTIANGLE IMAGING SPECTRORADIOMETER AEROSOL OPTICAL THICKNESS OVER THE CONTIGUOUS UNITED STATES

    EPA Science Inventory

    We present a simple approach to estimating ground-level fine particle (PM2.5, particles smaller than 2.5 um in diameter) concentration using global atmospheric chemistry models and aerosol optical thickness (AOT) measurements from the Multi-angle Imaging SpectroRadiometer (MISR)...

  1. An Urban Diffusion Simulation Model for Carbon Monoxide

    ERIC Educational Resources Information Center

    Johnson, W. B.; And Others

    1973-01-01

    A relatively simple Gaussian-type diffusion simulation model for calculating urban carbon monoxide (CO) concentrations as a function of local meteorology and the distribution of traffic is described. The model can be used in two ways: in the synoptic mode and in the climatological mode. (Author/BL)
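
    For concreteness, the steady-state Gaussian plume formula that underlies diffusion models of this type is sketched below with illustrative numbers; it is the textbook form, not the paper's calibrated urban model.

```python
import math

def gaussian_plume(Q, u, y, z, sigma_y, sigma_z, H=0.0):
    """Concentration (g m^-3) at crosswind offset y (m) and height z (m),
    for emission rate Q (g s^-1), wind speed u (m s^-1), dispersion
    parameters sigma_y, sigma_z (m), and effective source height H (m)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # Reflection at the ground: image-source term at -H.
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: traffic treated crudely as a ground-level point source of CO,
# evaluated at breathing height on the plume centerline; values illustrative.
print(gaussian_plume(Q=10.0, u=3.0, y=0.0, z=1.5, sigma_y=50.0, sigma_z=20.0))
```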

  2. A simple model for the critical mass of a nuclear weapon

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-07-01

    A probability-based model for estimating the critical mass of a fissile isotope is developed. The model requires introducing some concepts from nuclear physics and incorporating some approximations, but gives results correct to about a factor of two for uranium-235 and plutonium-239.
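
    For comparison, a standard one-group diffusion estimate of the bare-sphere critical radius (not the paper's probability-based model) fits in a few lines. The cross sections and neutron multiplicity below are rough fast-spectrum textbook values and should be read as assumptions.

```python
import math

# Work in cm and barns (1 b = 1e-24 cm^2). Rough fast-spectrum values for
# U-235; all numbers are textbook-level assumptions, not taken from the paper.
rho = 18.7                               # g cm^-3, U-235 metal
n = rho / 235.0 * 6.022e23               # nuclei per cm^3
sigma_f, sigma_el, nu = 1.24, 4.6, 2.64  # fission/elastic (b), neutrons/fission

Sigma_f = n * sigma_f * 1e-24            # macroscopic fission cross section, cm^-1
Sigma_t = n * (sigma_f + sigma_el) * 1e-24
D = 1.0 / (3.0 * Sigma_t)                # one-group diffusion coefficient, cm
Sigma_a = Sigma_f                        # neglect radiative capture

R_c = math.pi * math.sqrt(D / (nu * Sigma_f - Sigma_a))  # critical radius, cm
M_c = (4.0 / 3.0) * math.pi * R_c**3 * rho / 1000.0      # critical mass, kg
print(f"R_c ~ {R_c:.1f} cm, M_c ~ {M_c:.0f} kg")
# ~11 cm and ~100 kg: high by roughly a factor of two versus the accepted
# bare-sphere value, i.e., the same level of accuracy the abstract quotes.
```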

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Wenkai; Ghosh, Priyarshini; Harrison, Mark

    The performance of traditional Hornyak buttons and two proposed variants for fast-neutron hodoscope applications was evaluated using Geant4. The Hornyak button is a ZnS(Ag)-based device previously deployed at the Idaho National Laboratory's TRansient REActor Test Facility (better known as TREAT) for monitoring fast neutrons emitted during pulsing of fissile fuel samples. Past use of these devices relied on pulse-shape discrimination to reduce the significant levels of background Cherenkov radiation. Proposed are two simple designs that reduce the overall light guide mass (here, polymethyl methacrylate or PMMA), employ silicon photomultipliers (SiPMs), and can be operated using pulse-height discrimination alone to eliminate background noise to acceptable levels. Geant4 was first used to model a traditional Hornyak button, and for assumed, hodoscope-like conditions, an intrinsic efficiency of 0.35% for mono-directional fission neutrons was predicted. The predicted efficiency is in reasonably good agreement with experimental data from the literature and, hence, served to validate the physics models and approximations employed. Geant4 models were then developed to optimize the materials and geometries of two alternatives to the Hornyak button, one based on a homogeneous mixture of ZnS(Ag) and PMMA, and one based on alternating layers of ZnS(Ag) and PMMA oriented perpendicular to the incident neutron beam. For the same radiation environment, optimized, 5-cm long (along the beam path) devices of the homogeneous and layered designs were predicted to have efficiencies of approximately 1.3% and 3.3%, respectively. For longer devices, i.e., lengths larger than 25 cm, these efficiencies were shown to peak at approximately 2.2% and 5.9%, respectively. Furthermore, both designs were shown to discriminate Cherenkov noise intrinsically by using an appropriate pulse-height discriminator level, i.e., pulse-shape discrimination is not needed for these devices.

  4. Liquid part of the phase diagram and percolation line for two-dimensional Mercedes-Benz water.

    PubMed

    Urbic, T

    2017-09-01

    Monte Carlo simulations and Wertheim's thermodynamic perturbation theory (TPT) are used to predict the phase diagram and percolation curve for the simple two-dimensional Mercedes-Benz (MB) model of water. The MB model of water is quite popular for explaining water properties, but the phase diagram has not been reported till now. In the MB model, water molecules are modeled as two-dimensional Lennard-Jones disks, with three orientation-dependent hydrogen-bonding arms, arranged as in the MB logo. The liquid part of the phase space is explored using grand canonical Monte Carlo simulations and two versions of Wertheim's TPT for associative fluids, which have been used before to predict the properties of the simple MB model. We find that the theory reproduces well the physical properties of hot water but is less successful at capturing the more structured hydrogen bonding that occurs in cold water. In addition to reporting the phase diagram and percolation curve of the model, it is shown that the improved TPT predicts the phase diagram rather well, while the standard one predicts a phase transition at lower temperatures. For the percolation line, both versions have problems predicting the correct position of the line at high temperatures.

  6. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    PubMed

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
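
    The PBR algorithm itself is the simple product below (Wade's 1998 formula: minimum abundance estimate times half the maximum population growth rate times a recovery factor). The seabird-like numbers are illustrative, not the paper's case study.

```python
# PBR = N_min * (R_max / 2) * F_r
def pbr(n_min, r_max, f_r):
    return n_min * 0.5 * r_max * f_r

n_min = 10_000    # conservative (minimum) population estimate
r_max = 0.10      # maximum annual growth rate (low, as for a seabird)
f_r = 0.5         # recovery factor in (0, 1]

print(f"PBR = {pbr(n_min, r_max, f_r):.0f} additional mortalities per year")
# The paper's point: feeding a PBR-level mortality back into a Leslie matrix
# model can still produce population decline for many plausible trajectories
# and forms of density dependence, which is why the authors recommend the
# matrix approach with explicit assumptions instead.
```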

  7. Understanding the complex dynamics of stock markets through cellular automata

    NASA Astrophysics Data System (ADS)

    Qiu, G.; Kandhai, D.; Sloot, P. M. A.

    2007-04-01

    We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.

  8. A new simple six-step model to promote recruitment to RCTs was developed and successfully implemented.

    PubMed

    Realpe, Alba; Adams, Ann; Wall, Peter; Griffin, Damian; Donovan, Jenny L

    2016-08-01

    How a randomized controlled trial (RCT) is explained to patients is a key determinant of recruitment to that trial. This study developed and implemented a simple six-step model to fully inform patients and to support them in deciding whether to take part or not. Ninety-two consultations with 60 new patients were recorded and analyzed during a pilot RCT comparing surgical and nonsurgical interventions for hip impingement. Recordings were analyzed using techniques of thematic analysis and focused conversation analysis. Early findings supported the development of a simple six-step model to provide a framework for good recruitment practice. The model's steps are as follows: (1) explain the condition, (2) reassure patients about receiving treatment, (3) establish uncertainty, (4) explain the study purpose, (5) give a balanced view of treatments, and (6) explain study procedures. There are also two elements that run throughout the consultation: (1) responding to patients' concerns and (2) showing confidence. The pilot study was successful, with 70% (n = 60) of patients approached across nine centers agreeing to take part in the RCT, so that the full-scale trial was funded. The six-step model provides a promising framework for successful recruitment to RCTs. Further testing of the model is now required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.

    PubMed

    Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay

    2007-09-01

    Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.
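
    A sketch of how a fragment-contribution classifier of this kind scores a molecule: sum fragment counts times fitted coefficients, then threshold into fast/slow. The fragments, coefficients, intercept, and threshold below are invented placeholders, not the paper's 37 fitted values.

```python
# Hypothetical fragment coefficients (sign conventions for illustration only).
coefs = {
    "aromatic_ring": -0.30,   # assumed: hinders anaerobic degradation
    "ester":          0.25,   # assumed: promotes it
    "chloro":        -0.40,
}
intercept = 0.70

def predict_fast(fragment_counts, threshold=0.5):
    """Linear group-contribution score; classify 'fast' if above threshold."""
    score = intercept + sum(coefs[f] * c for f, c in fragment_counts.items())
    return score, score >= threshold

score, fast = predict_fast({"aromatic_ring": 1, "ester": 2})
print(f"score = {score:.2f} -> {'fast' if fast else 'slow'}")
```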

  10. Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model

    NASA Technical Reports Server (NTRS)

    Lusardi, Jeff A.; Blanken, Chris L.; Tischeler, Mark B.

    2002-01-01

    A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.

  11. Laminar flamelet modeling of turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Mell, W. E.; Kosaly, G.; Planche, O.; Poinsot, T.; Ferziger, J. H.

    1990-01-01

    In modeling turbulent combustion, decoupling the chemistry from the turbulence is of great practical significance. In cases in which the equilibrium chemistry model breaks down, laminar flamelet modeling (LFM) is a promising approach to decoupling. Here, the validity of this approach is investigated using direct numerical simulation of a simple chemical reaction in two-dimensional turbulence.

  12. Action Centered Contextual Bandits.

    PubMed

    Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan

    2017-12-01

    Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
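
    A hedged illustration of the two-part idea on simulated data: with a binary action randomized with known probability p, centering the action as (a - p) makes an arbitrary, nonlinear baseline drop out of the estimating equation, since E[(a - p) r | x] = p(1 - p) theta^T x, leaving a simple regression for the treatment effect. This is a toy estimator in the spirit of the paper, not its algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5000, 3
theta = np.array([1.0, -0.5, 0.25])          # true treatment effect

X = rng.normal(size=(T, d))
p = 0.4                                      # known randomization probability
a = rng.binomial(1, p, size=T)
baseline = np.sin(X[:, 0]) + X[:, 1] ** 2    # complex nonlinear baseline reward
r = baseline + a * (X @ theta) + rng.normal(scale=0.1, size=T)

# Regress (a - p) * r on p * (1 - p) * x; the baseline term has zero mean
# after centering, so only the treatment effect is identified (and needed).
y = (a - p) * r
Z = p * (1 - p) * X
theta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
print("estimated treatment effect:", np.round(theta_hat, 3))  # ~ theta
```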

  13. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.

  14. A simple model of the effect of ocean ventilation on ocean heat uptake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadiga, Balasubramanya T.; Urban, Nathan Mark

    Presentation includes slides on: Earth System Models vs. Simple Climate Models; A Popular SCM: Energy Balance Model of Anomalies; On calibrating against one ESM experiment, the SCM correctly captures that ESM's surface warming response with other forcings; Multi-Model Analysis: Multiple ESMs, Single SCM; Posterior Distributions of ECS; However In Excess of 90% of TOA Energy Imbalance is Sequestered in the World Oceans; Heat Storage in the Two Layer Model; Including TOA Rad. Imbalance and Ocean Heat in Calibration Improves Repr., but Significant Errors Persist; Improved Vertical Resolution Does Not Fix Problem; A Series of Expts. Confirms That Anomaly-Diffusing Models Cannot Properly Represent Ocean Heat Uptake; Physics of the Thermocline; Outcropping Isopycnals and Horizontally-Averaged Layers; Local interactions between outcropping isopycnals leads to non-local interactions between horizontally-averaged layers; Both Surface Warming and Ocean Heat are Well Represented With Just 4 Layers; A Series of Expts. Confirms That When Non-Local Interactions are Allowed, the SCMs Can Represent Both Surface Warming and Ocean Heat Uptake; and Summary and Conclusions.
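
    The two-layer energy balance model the slides refer to is compact enough to sketch directly: a surface/mixed-layer temperature T coupled to a deep-ocean temperature T_d. The heat capacities, feedback, and coupling values below are typical calibration-range numbers chosen only to make the example run, not the study's posterior estimates.

```python
#   C   dT/dt   = F - lam*T - gamma*(T - T_d)
#   C_d dT_d/dt =             gamma*(T - T_d)
C, C_d = 8.0, 100.0        # heat capacities, W yr m^-2 K^-1 (assumed)
lam, gamma = 1.3, 0.7      # climate feedback and ocean coupling, W m^-2 K^-1
F = 3.7                    # abrupt CO2-doubling forcing, W m^-2

dt, years = 0.1, 300
T, T_d = 0.0, 0.0
for _ in range(int(years / dt)):   # simple forward-Euler integration
    dT = (F - lam * T - gamma * (T - T_d)) / C
    dT_d = gamma * (T - T_d) / C_d
    T, T_d = T + dt * dT, T_d + dt * dT_d

print(f"after {years} yr: T = {T:.2f} K, T_d = {T_d:.2f} K "
      f"(equilibrium = F/lam = {F / lam:.2f} K)")
```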

  15. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  16. Cellulose I crystallinity determination using FT-Raman spectroscopy : univariate and multivariate methods

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2010-01-01

    Two new methods based on FT–Raman spectroscopy, one simple, based on band intensity ratio, and the other using a partial least squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in cellulose I samples was determined based on univariate regression that was first developed using the Raman band...
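
    A sketch of the two proposed routes on synthetic spectra: a univariate band-intensity-ratio regression and a PLS regression on the full spectrum. The band positions and data below are placeholders, not the paper's FT-Raman measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 40, 300
crystallinity = rng.uniform(20, 90, n_samples)            # reference values (%)
spectra = rng.normal(0, 0.01, (n_samples, n_wavenumbers))
spectra[:, 100] += crystallinity * 0.01                   # "crystalline" band
spectra[:, 200] += (100 - crystallinity) * 0.01           # "amorphous" band

# Univariate: regress crystallinity on a two-band intensity ratio.
ratio = (spectra[:, 100] / spectra[:, 200]).reshape(-1, 1)
uni = LinearRegression().fit(ratio, crystallinity)

# Multivariate: PLS regression on the full spectrum.
pls = PLSRegression(n_components=3).fit(spectra, crystallinity)

print("univariate R^2:", round(uni.score(ratio, crystallinity), 3))
print("PLS R^2:      ", round(pls.score(spectra, crystallinity), 3))
```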

  17. Decompression sickness in breath-hold diving, and its probable connection to the growth and dissolution of small arterial gas emboli.

    PubMed

    Goldman, Saul; Solano-Altamirano, J M

    2015-04-01

    We solved the Laplace equation for the radius of an arterial gas embolism (AGE) during and after breath-hold diving. We used a simple three-region diffusion model for the AGE and applied our results to two types of breath-hold dives: single, very deep competitive-level dives and repetitive shallower breath-hold dives similar to those carried out by indigenous commercial pearl divers in the South Pacific. Because of the effect of surface tension, AGEs tend to dissolve in arterial blood while in arteries remote from supersaturated tissue. However, if, before fully dissolving, they reach the capillary beds that perfuse the brain and the inner ear, they may become inflated with inert gas that is transferred into them from these contiguous, temporarily supersaturated tissues. By using simple kinetic models of cerebral and inner ear tissue, the nitrogen tissue partial pressures during and after the dive(s) were determined. These were used to theoretically calculate AGE growth and dissolution curves for AGEs lodged in capillaries of the brain and inner ear. From these curves it was found that both cerebral and inner ear decompression sickness are expected to occur occasionally in single competitive-level dives. It was also determined from these curves that for the commercial repetitive dives considered, the duration of the surface interval (the time interval separating individual repetitive dives from one another) was a key determinant of whether inner ear and/or cerebral decompression sickness arose. Our predictions, both for single competitive-level and repetitive commercial breath-hold diving, were consistent with what is known about the incidence of cerebral and inner ear decompression sickness in these forms of diving. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System

    NASA Astrophysics Data System (ADS)

    Vincent, Alan

    1996-10-01

    All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear 1-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model some acceptable linear functions become unacceptable for the ring and some unacceptable cosine functions become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of uv light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the uv maximum moves to longer wavelengths, as found experimentally.
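
    In compact form, the standard textbook result the abstract describes: on a ring of radius R, the periodic boundary condition psi(phi + 2*pi) = psi(phi) replaces the box's nodes-at-the-walls condition, giving

```latex
\[
  \psi_n(\phi) = \frac{1}{\sqrt{2\pi}}\, e^{i n \phi},
  \qquad n = 0, \pm 1, \pm 2, \dots,
  \qquad
  E_n = \frac{n^2 \hbar^2}{2 m R^2},
\]
% so all levels with n != 0 are doubly degenerate. For a cyclic conjugated
% system modeled this way, promoting an electron from level n to n+1 costs
% E_{n+1} - E_n = (2n+1) hbar^2 / (2 m R^2), so the absorption wavelength is
\[
  \lambda = \frac{hc}{E_{n+1} - E_n} = \frac{8 \pi^2 m c R^2}{h\,(2n+1)},
\]
% which grows with R^2, reproducing the red shift with increasing ring size
% noted in the abstract.
```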

  19. Estimating Time to Event From Longitudinal Categorical Data: An Analysis of Multiple Sclerosis Progression.

    PubMed

    Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A

    2007-12-01

    The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching EDSS of a certain level (absolute progression) or increasing of one point of EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
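
    A hedged sketch of the general recipe on simulated data: estimate the transition matrix of an ordinal Markov chain by counting observed transitions, then obtain the time-to-progression curve by making the target levels absorbing and taking matrix powers. Covariates and the paper's confidence-interval machinery are omitted, and the state space and transition probabilities below are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 5                      # coarsened EDSS levels 0..4 (toy)
true_P = np.full((n_states, n_states), 0.02)
np.fill_diagonal(true_P, 0.9)
true_P[np.arange(n_states - 1), np.arange(1, n_states)] += 0.02
true_P /= true_P.sum(axis=1, keepdims=True)

# Simulate visit sequences, then estimate P by transition counts (+1 smoothing).
counts = np.ones((n_states, n_states))
for _ in range(300):              # 300 patients, 20 visits each
    s = 0
    for _ in range(20):
        s_next = rng.choice(n_states, p=true_P[s])
        counts[s, s_next] += 1
        s = s_next
P = counts / counts.sum(axis=1, keepdims=True)

# Survival curve for "time to EDSS level >= 3": make those states absorbing,
# then S(t) = probability of not yet being absorbed after t transitions.
Q = P.copy()
Q[3:, :] = 0.0
Q[3:, 3:] = np.eye(n_states - 3)
dist = np.zeros(n_states)
dist[0] = 1.0
for t in range(1, 11):
    dist = dist @ Q
    print(f"t={t:2d}  S(t) = {dist[:3].sum():.3f}")
```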

  20. A simple rule of thumb for elegant prehension.

    PubMed

    Mon-Williams, M; Tresilian, J R

    2001-07-10

    Reaching out to grasp an object (prehension) is a deceptively elegant and skilled behavior. The movement prior to object contact can be described as having two components, the movement of the hand to an appropriate location for gripping the object, the "transport" component, and the opening and closing of the aperture between the fingers as they prepare to grip the target, the "grasp" component. The grasp component is sensitive to the size of the object, so that a larger grasp aperture is formed for wider objects; the maximum grasp aperture (MGA) is a little wider than the width of the target object and occurs later in the movement for larger objects. We present a simple model that can account for the temporal relationship between the transport and grasp components. We report the results of an experiment providing empirical support for our "rule of thumb." The model provides a simple, but plausible, account of a neural control strategy that has been the center of debate over the last two decades.
