Sample records for simple model based

  1. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models.

    PubMed

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-05-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues on usage of simple heuristics and psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.

  2. A simple geometrical model describing shapes of soap films suspended on two rings

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.

    2016-09-01

We measured and analysed the stability of two types of soap films suspended on two rings using a simple model based on conical frusta, where a conical frustum is defined, as usual, as the portion of a cone lying between two parallel planes cutting it. Using the frusta-based model we reproduced well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical-frusta-based spreadsheet model of the soap surface. This very simple, elementary, geometrical model matches the experimental data and known exact analytical solutions surprisingly well. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
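The frusta construction described above can be sketched in a few lines: approximate the catenoid y = c·cosh(x/c) as a stack of conical frusta and compare the summed lateral areas with the closed-form surface-of-revolution area. This is a hedged illustration of the general idea, not the authors' spreadsheet; the parameter values are arbitrary.

```python
import math

def catenoid_area_exact(c, h):
    # Surface of revolution of y = c*cosh(x/c) for x in [-h, h]:
    # A = pi*c*(2h + c*sinh(2h/c))
    return math.pi * c * (2 * h + c * math.sinh(2 * h / c))

def catenoid_area_frusta(c, h, n):
    # Approximate the same surface as a stack of n conical frusta.
    xs = [-h + 2 * h * i / n for i in range(n + 1)]
    rs = [c * math.cosh(x / c) for x in xs]
    area = 0.0
    for i in range(n):
        slant = math.hypot(xs[i + 1] - xs[i], rs[i + 1] - rs[i])
        area += math.pi * (rs[i] + rs[i + 1]) * slant  # lateral area of one frustum
    return area

exact = catenoid_area_exact(1.0, 0.6)
approx = catenoid_area_frusta(1.0, 0.6, 200)
print(exact, approx)  # the frusta sum converges to the exact area as n grows
```

With a few hundred frusta the approximation agrees with the analytical area to a fraction of a percent, which is consistent with the abstract's claim that the elementary model matches exact solutions well.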

  3. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one frequently used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and of the vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior of organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. Undesirably high model complexity caused by the vegetation compartments and the local scale was removed to increase the simplicity and user friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, for they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  5. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.

  6. Learning in Structured Connectionist Networks

    DTIC Science & Technology

    1988-04-01

the structure is too rigid and learning too difficult for cognitive modeling. Two algorithms for learning simple, feature-based concept descriptions were also implemented. ...Recent progress in connectionist research has been encouraging; networks have successfully modeled human performance for various cognitive

  7. A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS

    EPA Science Inventory

    We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...

  8. Simple and Hierarchical Models for Stochastic Test Misgrading.

    ERIC Educational Resources Information Center

    Wang, Jianjun

    1993-01-01

    Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
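The quantities this abstract discusses have simple closed forms under a Poisson model: the number of misgradings is Poisson(λ), inter-occurrence times are exponential with mean 1/λ, and the waiting time to the n-th misgrading is Erlang with mean n/λ. A minimal sketch of these facts (the rate λ = 2 is a hypothetical value, not taken from the paper):

```python
import math

def poisson_pmf(k, lam):
    # P(N = k) for N ~ Poisson(lam): probability of exactly k misgradings
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 2.0            # hypothetical misgrading rate per grading batch
expected = lam       # expected number of misgradings is lam itself
mean_gap = 1 / lam   # inter-occurrence gap ~ Exponential(lam), mean 1/lam
n = 3
mean_wait = n / lam  # waiting time to the n-th misgrading ~ Erlang(n, lam), mean n/lam

print(sum(poisson_pmf(k, lam) for k in range(50)))  # pmf sums to ~1
```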

  9. Simulation of green roof runoff under different substrate depths and vegetation covers by coupling a simple conceptual and a physically based hydrological model.

    PubMed

    Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A

    2017-09-15

In spite of the well-known green roof benefits, their widespread adoption in the management practices of urban drainage systems requires the use of adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing HYDRUS-1D software. This approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and ability to be easily integrated in decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by the Nash-Sutcliffe Efficiency index, which was generally greater than 0.70. Finally, it was showcased how a physically based and a simple conceptual model can be used jointly, applying the simple conceptual model to a wider set of conditions than the available experimental data in order to support green roof design. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Supply based on demand dynamical model

    NASA Astrophysics Data System (ADS)

    Levi, Asaf; Sabuco, Juan; Sanjuán, Miguel A. F.

    2018-04-01

We propose and numerically analyze a simple dynamical model that describes firm behavior under uncertainty of demand. Iterating this simple model and varying some parameter values, we observe a wide variety of market dynamics, such as equilibrium, periodic, and chaotic behaviors. Interestingly, the model is also able to reproduce market collapses.
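The menu of behaviors listed here (equilibrium, periodic, chaotic) is characteristic of iterated one-dimensional maps in general. As a hedged stand-in, not the authors' supply-demand model, the logistic map shows how varying a single parameter moves a simple iterated model through all three regimes:

```python
def iterate(r, x0=0.2, n=1000, keep=8):
    # Burn in n iterations of the logistic map, then record the next `keep` states.
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

print(iterate(2.5))   # settles to the fixed point 0.6 (equilibrium)
print(iterate(3.2))   # alternates between two values (periodic)
print(iterate(3.9))   # wanders irregularly (chaotic)
```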

  11. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. With an implemented example, we illustrate how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  12. Comparison of ACCENT 2000 Shuttle Plume Data with SIMPLE Model Predictions

    NASA Astrophysics Data System (ADS)

    Swaminathan, P. K.; Taylor, J. C.; Ross, M. N.; Zittel, P. F.; Lloyd, S. A.

    2001-12-01

The JHU/APL Stratospheric IMpact of PLume Effluents (SIMPLE) model was employed to analyze the trace-species in situ composition data collected during the ACCENT 2000 intercepts of the space shuttle Space Transportation System (STS) rocket plume as a function of time and radial location within the cold plume. The SIMPLE model is initialized using predictions for species depositions calculated with an afterburning model based on standard TDK/SPP nozzle and SPF plume flowfield codes with an expanded chemical kinetic scheme. The time-dependent ambient stratospheric chemistry is fully coupled to the evolution of the plume species, whose transport is based on empirically derived diffusion. Model/data comparisons are encouraging: the model captures the observed local ozone recovery times as well as the overall morphology of the chlorine chemistry.

  13. A Mathematical Model of a Simple Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.

    2009-01-01

    This paper presents a mathematical model characterizing the behavior of a simple amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the amplifier is the basis of many circuit configurations, a mathematical model that describes the behavior of a FeFET-based amplifier will help in the integration of FeFETs into many other circuits.

  14. Applying 3-PG, a simple process-based model designed to produce practical results, to data from loblolly pine experiments

    Treesearch

    Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand

    2001-01-01

    3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...

  15. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
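The rate-equation idea in this abstract, DSBs binding a repair enzyme to form a complex that then resolves, can be sketched with a forward-Euler integration. All rate constants and initial amounts below are invented for illustration; with a limited enzyme pool the free-DSB curve shows the fast and slow phases the authors describe.

```python
# Hypothetical rate constants (arbitrary units, for illustration only):
k_on, k_off, k_fix = 0.005, 0.05, 0.1   # binding, unbinding, repair rates
D, E, C = 100.0, 20.0, 0.0              # free DSBs, free repair enzyme, DSB-enzyme complex
dt, steps = 0.001, 200_000              # Euler integration of the rate equations

for _ in range(steps):
    bind = k_on * D * E * dt            # D + E -> C
    unbind = k_off * C * dt             # C -> D + E
    fix = k_fix * C * dt                # C -> repaired DNA + E
    D += unbind - bind
    E += unbind + fix - bind
    C += bind - unbind - fix

print(round(D + C, 2))  # unrepaired breaks remaining at the end of the run
```

Note that the enzyme is conserved (E + C stays constant) while D + C only decreases, which is what produces the apparent biphasic rejoining kinetics when the complex pool saturates.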

  16. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
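The two-dimensional model can be integrated with forward Euler in a few lines. The sketch below uses the regular-spiking parameter set reported in the AdEx literature and a numerical cutoff for the spike upswing; the 1 nA test current is an arbitrary suprathreshold choice, not a protocol from the paper.

```python
import math

# Regular-spiking AdEx parameters (SI units), as commonly quoted for this model.
C, gL, EL = 281e-12, 30e-9, -70.6e-3   # capacitance, leak conductance, rest
VT, dT = -50.4e-3, 2e-3                # exponential threshold, slope factor
tau_w, a, b, Vr = 144e-3, 4e-9, 0.0805e-9, -70.6e-3
Vcut = -30e-3                          # numerical cutoff marking a spike

def run(I, t_end=0.5, dt=0.05e-3):
    V, w, spikes = EL, 0.0, 0
    t = 0.0
    while t < t_end:
        dV = (-gL * (V - EL) + gL * dT * math.exp((V - VT) / dT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dV * dt
        w += dw * dt
        if V >= Vcut:                  # spike: reset V, increment adaptation
            V, w = Vr, w + b
            spikes += 1
        t += dt
    return spikes

print(run(1e-9))   # a 1 nA step drives repeated, adapting spiking
print(run(0.0))    # no input, no spikes
```

The adaptation increment b after each reset is what slows successive spikes, giving the spike-frequency adaptation that lets this simple model track detailed conductance-based neurons.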

  17. The Monash University Interactive Simple Climate Model

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2013-12-01

The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, and it offers a number of tutorials on the interactions of physical processes in the climate system along with some puzzles to solve. By switching physical processes OFF and ON you can deconstruct the climate and learn how the different processes interact to generate the observed climate and how they interact to generate the IPCC-predicted climate change for an anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with it are.
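The energy-balance idea behind GREB-style teaching models can be demonstrated with a zero-dimensional sketch (not the GREB code itself): C dT/dt = S(1 - α)/4 - εσT⁴, where lowering the effective emissivity ε mimics the greenhouse effect. The heat capacity and the ε values below are illustrative assumptions.

```python
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
S, ALBEDO = 1361.0, 0.30   # solar constant and planetary albedo
C_HEAT = 4.0e8             # ~100 m ocean mixed layer heat capacity, J m^-2 K^-1

def equilibrium_T(emissivity, years=500, dt=86400.0):
    # Step the global-mean temperature forward until it equilibrates.
    T = 273.0
    for _ in range(int(years * 365 * 86400 / dt)):
        absorbed = S * (1 - ALBEDO) / 4
        emitted = emissivity * SIGMA * T**4
        T += (absorbed - emitted) * dt / C_HEAT
    return T

print(equilibrium_T(1.0))    # blackbody Earth: roughly 255 K
print(equilibrium_T(0.62))   # crude greenhouse (reduced effective emissivity): near 287 K
```

Switching a process off in this toy model (here, weakening the greenhouse term) changes the equilibrium by tens of kelvin, which is the kind of "deconstruct the climate" exercise the web interface supports with far richer physics.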

  18. Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C

System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.

  19. Prescription of land-surface boundary conditions in GISS GCM 2: A simple method based on high-resolution vegetation data bases

    NASA Technical Reports Server (NTRS)

    Matthews, E.

    1984-01-01

    A simple method was developed for improved prescription of seasonal surface characteristics and parameterization of land-surface processes in climate models. This method, developed for the Goddard Institute for Space Studies General Circulation Model II (GISS GCM II), maintains the spatial variability of fine-resolution land-cover data while restricting to 8 the number of vegetation types handled in the model. This was achieved by: redefining the large number of vegetation classes in the 1 deg x 1 deg resolution Matthews (1983) vegetation data base as percentages of 8 simple types; deriving roughness length, field capacity, masking depth and seasonal, spectral reflectivity for the 8 types; and aggregating these surface features from the 1 deg x 1 deg resolution to coarser model resolutions, e.g., 8 deg latitude x 10 deg longitude or 4 deg latitude x 5 deg longitude.

  20. Predicting Hail Size Using Model Vertical Velocities

    DTIC Science & Technology

    2008-03-01

updrafts from a simple cloud model using forecasted soundings. The models used MM5 model data coinciding with severe hail events collected from the... determine their accuracy. Plus, they are based primarily on observed upper-air soundings. Obtaining upper-air soundings in proximity to convective

  1. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, the predictive performance of two established large-scale biophysical cardiac models was assessed against that of a simple linear model, Bnet. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set, for each classification scheme, was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models and also encourage the development of simple models.

  2. Agent Model Development for Assessing Climate-Induced Geopolitical Instability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boslough, Mark B.; Backus, George A.

    2005-12-01

We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.

  3. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  4. Memory-Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models

    ERIC Educational Resources Information Center

    Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro

    2017-01-01

    Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in…

  5. Combinatorial structures to modeling simple games and applications

    NASA Astrophysics Data System (ADS)

    Molinero, Xavier

    2017-09-01

We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the basis for representing some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains an open problem to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  6. Masquerade Detection Using a Taxonomy-Based Multinomial Modeling Approach in UNIX Systems

    DTIC Science & Technology

    2008-08-25

primarily the modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events... are identified, we can extract features representing such behavior while auditing the user's behavior. (Figure 1: taxonomy of Linux and UNIX commands.) ...achieved when the features are extracted just from simple commands. (Table fragment: hit rate and false-positive rate of a one-class SVM using frequency-based features of simple commands.)

  7. Grass Grows, the Cow Eats: A Simple Grazing Systems Model with Emergent Properties

    ERIC Educational Resources Information Center

    Ungar, Eugene David; Seligman, Noam G.; Noy-Meir, Imanuel

    2004-01-01

    We describe a simple, yet intellectually challenging model of grazing systems that introduces basic concepts in ecology and systems analysis. The practical is suitable for high-school and university curricula with a quantitative orientation, and requires only basic skills in mathematics and spreadsheet use. The model is based on Noy-Meir's (1975)…

  8. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891

  9. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

Simple cells in the primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive-field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model incorporating non-classical receptive fields is proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates the simple cell's physiological structure by taking into account the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm is proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  10. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  11. Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions

    PubMed Central

    Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi

    2015-01-01

In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Thus, extending the existing research and improving upon the previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). Firstly, we propose a new definition of simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, based on the new definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions. Moreover, this study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region, and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss some examples to demonstrate the validity of the new method; through comparisons with existing fuzzy models, we show that the proposed method can compute more relations than the existing models, as it is more expressive. PMID:25775452

  12. Accounting For Gains And Orientations In Polarimetric SAR

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony

    1992-01-01

    Calibration method accounts for characteristics of real radar equipment invalidating standard 2 X 2 complex-amplitude R (receiving) and T (transmitting) matrices. Overall gain in each combination of transmitting and receiving channels assumed different even when only one transmitter and one receiver used. One characterizes departure of polarimetric Synthetic Aperture Radar (SAR) system from simple 2 X 2 model in terms of single parameter used to transform measurements into format compatible with simple 2 X 2 model. Data processed by applicable one of several prior methods based on simple model.
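
    The standard model that this method generalizes can be sketched numerically: the measured scattering matrix M is taken as M = R S T, with R and T the 2 X 2 receiving and transmitting distortion matrices, so a calibrated estimate of S follows by inverting both. The matrix values below are made up for illustration, not radar data.

```python
import numpy as np

# Standard 2 x 2 polarimetric calibration model: M = R @ S @ T.
# R, T and S_true are illustrative values, not measured quantities.
S_true = np.array([[1.0, 0.1], [0.1, 0.8]])      # true scattering matrix
R = np.array([[1.0, 0.05], [0.02, 0.9]])         # receiving-channel distortion
T = np.array([[0.95, 0.03], [0.01, 1.1]])        # transmitting-channel distortion

M = R @ S_true @ T                               # what the system measures
S_est = np.linalg.inv(R) @ M @ np.linalg.inv(T)  # calibrated estimate

print(np.allclose(S_est, S_true))                # prints True (up to round-off)
```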

  13. Determination of cellulose I crystallinity by FT-Raman spectroscopy

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2009-01-01

    Two new methods based on FT-Raman spectroscopy, one simple, based on band intensity ratio, and the other, using a partial least-squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in semicrystalline cellulose I samples was determined based on univariate regression that was first developed using the...

  14. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240-year) historical hindcast predictions was performed with the various prediction models and used to examine issues in multi-model ensemble seasonal prediction, such as the best way of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared. The superensemble has a greater overfitting problem than the others, especially for small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
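
    The simple composite and the corrected composite can be illustrated with a toy calculation. In this sketch the "models" are noise-free and differ from the observations only by bias and amplitude errors, so the statistical correction (a linear regression fitted over a training period) recovers the observations exactly; all data are synthetic and illustrative.

```python
import numpy as np

# Toy comparison of two multi-model combination schemes:
#   simple composite    = plain average of model forecasts
#   corrected composite = regress each model onto observations over a
#                         training period, then average the corrected forecasts
rng = np.random.default_rng(0)
obs = rng.standard_normal(200)                # synthetic "observations"
models = [2.0 * obs + 1.0,                    # overactive model, warm bias
          0.5 * obs - 0.5,                    # damped model, cold bias
          1.0 * obs + 0.3]                    # correct amplitude, small offset

train, test = slice(0, 150), slice(150, 200)

simple = np.mean([m[test] for m in models], axis=0)

corrected = np.mean(
    [np.polyval(np.polyfit(m[train], obs[train], 1), m[test]) for m in models],
    axis=0)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - obs[test]) ** 2)))

# The correction removes the systematic bias/scale errors entirely here
print(rmse(simple) > rmse(corrected))  # prints True
```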

  15. Simulation Study on Fit Indexes in CFA Based on Data with Slightly Distorted Simple Structure

    ERIC Educational Resources Information Center

    Beauducel, Andre; Wittmann, Werner W.

    2005-01-01

    Fit indexes were compared with respect to a specific type of model misspecification. Simple structure was violated by secondary loadings that were present in the true models but not specified in the estimated models. The χ2 test, Comparative Fit Index, Goodness-of-Fit Index, Incremental Fit Index, Nonnormed Fit Index, root mean…

  16. A simple physical model for forest fire spread

    Treesearch

    E. Koo; P. Pagni; J. Woycheese; S. Stephens; D. Weise; J. Huff

    2005-01-01

    Based on energy conservation and detailed heat transfer mechanisms, a simple physical model for fire spread is presented for the limit of one-dimensional steady-state contiguous spread of a line fire in a thermally-thin uniform porous fuel bed. The solution for the fire spread rate is found as an eigenvalue from this model with appropriate boundary conditions through a...

  17. Exploratory reconstructability analysis of accident TBI data

    NASA Astrophysics Data System (ADS)

    Zwick, Martin; Carney, Nancy; Nettleton, Rosemary

    2018-02-01

    This paper describes the use of reconstructability analysis to perform a secondary study of traumatic brain injury data from automobile accidents. Neutral searches were done and their results displayed with a hypergraph. Directed searches, using both variable-based and state-based models, were applied to predict performance on two cognitive tests and one neurological test. Very simple state-based models gave large uncertainty reductions for all three DVs and sizeable improvements in percent correct for the two cognitive test DVs which were equally sampled. Conditional probability distributions for these models are easily visualized with simple decision trees. Confounding variables and counter-intuitive findings are also reported.

  18. Dosimetry in x-ray-based breast imaging

    PubMed Central

    Dance, David R; Sechopoulos, Ioannis

    2016-01-01

    The estimation of the mean glandular dose to the breast (MGD) for x-ray based imaging modalities forms an essential part of quality control and is needed for risk estimation and for system design and optimisation. This review considers the development of methods for estimating the MGD for mammography, digital breast tomosynthesis (DBT) and dedicated breast CT (DBCT). Almost all of the methodology used employs Monte Carlo calculated conversion factors to relate the measurable quantity, generally the incident air kerma, to the MGD. After a review of the size and composition of the female breast, the various mathematical models used are discussed, with particular emphasis on models for mammography. These range from simple geometrical shapes, to the more recent complex models based on patient DBCT examinations. The possibility of patient-specific dose estimates is considered as well as special diagnostic views and the effect of breast implants. Calculations using the complex models show that the MGD for mammography is overestimated by about 30% when the simple models are used. The design and uses of breast-simulating test phantoms for measuring incident air kerma are outlined and comparisons made between patient and phantom-based dose estimates. The most widely used national and international dosimetry protocols for mammography are based on different simple geometrical models of the breast, and harmonisation of these protocols using more complex breast models is desirable. PMID:27617767

  19. Dosimetry in x-ray-based breast imaging

    NASA Astrophysics Data System (ADS)

    Dance, David R.; Sechopoulos, Ioannis

    2016-10-01

    The estimation of the mean glandular dose to the breast (MGD) for x-ray based imaging modalities forms an essential part of quality control and is needed for risk estimation and for system design and optimisation. This review considers the development of methods for estimating the MGD for mammography, digital breast tomosynthesis (DBT) and dedicated breast CT (DBCT). Almost all of the methodology used employs Monte Carlo calculated conversion factors to relate the measurable quantity, generally the incident air kerma, to the MGD. After a review of the size and composition of the female breast, the various mathematical models used are discussed, with particular emphasis on models for mammography. These range from simple geometrical shapes, to the more recent complex models based on patient DBCT examinations. The possibility of patient-specific dose estimates is considered as well as special diagnostic views and the effect of breast implants. Calculations using the complex models show that the MGD for mammography is overestimated by about 30% when the simple models are used. The design and uses of breast-simulating test phantoms for measuring incident air kerma are outlined and comparisons made between patient and phantom-based dose estimates. The most widely used national and international dosimetry protocols for mammography are based on different simple geometrical models of the breast, and harmonisation of these protocols using more complex breast models is desirable.

  20. Modelling Students' Visualisation of Chemical Reaction

    ERIC Educational Resources Information Center

    Cheng, Maurice M. W.; Gilbert, John K.

    2017-01-01

    This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…

  1. Predicting Mercury's precession using simple relativistic Newtonian dynamics

    NASA Astrophysics Data System (ADS)

    Friedman, Y.; Steiner, J. M.

    2016-03-01

    We present a new simple relativistic model for planetary motion describing accurately the anomalous precession of the perihelion of Mercury and its origin. The model is based on transforming Newton's classical equation for planetary motion from absolute to real spacetime influenced by the gravitational potential and introducing the concept of influenced direction.

  2. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics as likewise emerging from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that the reason our complex world is understandable is due to the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.

  3. Cellulose I crystallinity determination using FT-Raman spectroscopy : univariate and multivariate methods

    Treesearch

    Umesh P. Agarwal; Richard S. Reiner; Sally A. Ralph

    2010-01-01

    Two new methods based on FT–Raman spectroscopy, one simple, based on band intensity ratio, and the other using a partial least squares (PLS) regression model, are proposed to determine cellulose I crystallinity. In the simple method, crystallinity in cellulose I samples was determined based on univariate regression that was first developed using the Raman band...

  4. When push comes to shove: Exclusion processes with nonlocal consequences

    NASA Astrophysics Data System (ADS)

    Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.

    2015-11-01

    Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
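
    As a minimal illustration of the simple exclusion rule (not the authors' shoving mechanism, and 1D rather than their 2D simulations), the sketch below moves agents on a periodic lattice and aborts any move whose target site is occupied:

```python
import random

# Simple exclusion process on a 1D periodic lattice of L sites.
# In the "shoving" variant discussed in the abstract, the occupant of the
# target site would instead be pushed along; that is not implemented here.
random.seed(42)
L = 50
occupied = set(random.sample(range(L), 20))  # 20 agents on 50 sites

for _ in range(1000):
    agent = random.choice(sorted(occupied))        # pick a random agent
    target = (agent + random.choice((-1, 1))) % L  # nearest-neighbour move
    if target not in occupied:                     # exclusion: abort if full
        occupied.remove(agent)
        occupied.add(target)

print(len(occupied))  # prints 20: agent number is conserved
```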

  5. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
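
    Models of this family typically reduce to a power law relating specific attenuation to rain rate, scaled by an effective path length. The sketch below uses that generic form; the coefficients k, alpha and the path length are illustrative placeholders, not the paper's parameters (real models tabulate them by frequency and polarization).

```python
def rain_attenuation_db(rain_rate_mm_h, k=0.0188, alpha=1.217, path_km=5.0):
    """Total path attenuation (dB) from an average rain rate (mm/h).

    k and alpha are illustrative power-law coefficients, not values
    from the model described in the abstract.
    """
    gamma = k * rain_rate_mm_h ** alpha   # specific attenuation, dB/km
    return gamma * path_km

# With alpha > 1, attenuation grows faster than linearly with rain rate
print(round(rain_attenuation_db(25.0), 2))
```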

  6. A simple optical model to estimate suspended particulate matter in Yellow River Estuary.

    PubMed

    Qiu, Zhongfeng

    2013-11-18

    Distribution of the suspended particulate matter (SPM) concentration is a key issue for analyzing deposition and erosion in the estuary and for evaluating material fluxes from river to sea. Satellite remote sensing is a useful tool to investigate the spatial variation of SPM concentration in estuarine zones. However, algorithm development and validation for SPM concentrations in the Yellow River Estuary (YRE) have seldom been performed, and our knowledge of the quality of retrieved SPM concentrations is therefore poor. In this study, we developed a new simple optical model to estimate SPM concentration in the YRE by specifying the optimal wavelength ratios (600-710 nm)/(530-590 nm), based on observations from five cruises between 2004 and 2011. The simple optical model was carefully calibrated, and optimal band ratios were selected for application to multiple sensors: 678/551 for the Moderate Resolution Imaging Spectroradiometer (MODIS), 705/560 for the Medium Resolution Imaging Spectrometer (MERIS), and 680/555 for the Geostationary Ocean Color Imager (GOCI). With the simple optical model, the relative percentage difference and the mean absolute error were 35.4% and 15.6 g m(-3), respectively, for MODIS, 42.2% and 16.3 g m(-3) for MERIS, and 34.2% and 14.7 g m(-3) for GOCI, based on an independent validation data set. Our results show good estimation precision for SPM concentration using the new simple optical model, in contrast with the poor estimates derived from existing empirical models. Provided an adequate atmospheric correction scheme for satellite imagery, our simple model could be used for quantitative monitoring of SPM concentrations in the YRE.
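
    A band-ratio retrieval of this kind can be sketched as a power-law regression between SPM concentration and a red/green reflectance ratio. The functional form, coefficients and synthetic data below are assumptions for illustration, not the calibrated YRE model.

```python
import numpy as np

# Hypothetical band-ratio SPM retrieval: SPM = a * (R_red / R_green)^b.
# Form, coefficients and data are illustrative, not the paper's model.

def estimate_spm(r_red, r_green, a=20.0, b=1.5):
    """SPM concentration (g m^-3) from a red/green reflectance ratio."""
    ratio = np.asarray(r_red) / np.asarray(r_green)
    return a * ratio ** b

def calibrate(ratios, spm_obs):
    """Recover a and b by linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(ratios), np.log(spm_obs), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(0)
ratios = rng.uniform(0.5, 3.0, 50)      # synthetic band ratios
spm = 20.0 * ratios ** 1.5              # noise-free synthetic SPM
a_fit, b_fit = calibrate(ratios, spm)
print(round(a_fit, 2), round(b_fit, 2))  # recovers a = 20.0, b = 1.5
```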

  7. Application of Support Vector Machine to Forex Monitoring

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Joarder; Sarker, Ruhul A.

    Previous studies have demonstrated superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from the historical data using six simple technical indicators and presents a comparison with an ANN based model trained by scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results of six different currencies against Australian dollar reveal superior performance of SVM model using simple linear kernel over ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.

  8. Chemical Kinetics, Heat Transfer, and Sensor Dynamics Revisited in a Simple Experiment

    ERIC Educational Resources Information Center

    Sad, Maria E.; Sad, Mario R.; Castro, Alberto A.; Garetto, Teresita F.

    2008-01-01

    A simple experiment about thermal effects in chemical reactors is described, which can be used to illustrate chemical reactor models, the determination and validation of their parameters, and some simple principles of heat transfer and sensor dynamics. It is based in the exothermic reaction between aqueous solutions of sodium thiosulfate and…

  9. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple empirically-based annual time-step models to more complex process-based daily time-step models. While better accuracy is often assumed with more...

  10. Defining Simple nD Operations Based on Prismatic nD Objects

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2016-10-01

    An alternative to the traditional approaches of modelling 2D/3D space, time, scale and other parametrisable characteristics separately in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes - analogous to prisms in 3D; (ii) defining simple modification operations at the vertex level; and (iii) applying simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
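
    The idea of vertex-level, dimension-independent operations can be sketched as array transformations: translation and scaling act per coordinate, while rotation acts within one chosen coordinate plane and therefore generalises to any n. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

# Dimension-independent vertex-level transformations (illustrative sketch):
# translation and scaling act per coordinate; rotation acts in the plane
# spanned by two chosen coordinate axes.

def translate(verts, offset):
    return verts + np.asarray(offset, dtype=float)

def scale(verts, factors):
    return verts * np.asarray(factors, dtype=float)

def rotate(verts, axis_a, axis_b, theta):
    """Rotate in the plane spanned by coordinate axes axis_a and axis_b."""
    out = verts.astype(float).copy()
    c, s = np.cos(theta), np.sin(theta)
    out[:, axis_a] = c * verts[:, axis_a] - s * verts[:, axis_b]
    out[:, axis_b] = s * verts[:, axis_a] + c * verts[:, axis_b]
    return out

# The 16 corners of a 4D unit hypercube as a vertex array of shape (16, 4)
verts = np.array(np.meshgrid(*[[0.0, 1.0]] * 4)).reshape(4, -1).T

doubled = scale(verts, [2.0, 1.0, 1.0, 1.0])  # stretch along axis 0
moved = rotate(translate(verts, [1.0, 0.0, 0.0, 0.0]), 0, 1, np.pi / 2)
print(moved.shape)  # (16, 4)
```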

  11. SIMPL: A Simplified Model-Based Program for the Analysis and Visualization of Groundwater Rebound in Abandoned Mines to Prevent Contamination of Water and Soils by Acid Mine Drainage

    PubMed Central

    Kim, Sung-Min

    2018-01-01

    Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480

  12. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  13. A simple 2D biofilm model yields a variety of morphological features.

    PubMed

    Hermanowicz, S W

    2001-01-01

    A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
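
    A toy cellular automaton in this spirit (growth into empty neighbour sites plus occasional erosion of exposed cells) fits in a few lines; the grid size, rates and rules here are illustrative assumptions, whereas the paper's model also couples growth to nutrient transport.

```python
import random

# Toy 2D cellular-automaton biofilm: cells colonise empty neighbour sites,
# and cells with an exposed face occasionally erode. Rules and rates are
# illustrative, not the model described in the abstract.
random.seed(1)
N = 20
grid = [[0] * N for _ in range(N)]
for x in range(8, 12):
    grid[0][x] = 1  # seed cells on the substratum (row 0)

def neighbours(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < N and 0 <= j + dj < N:
            yield i + di, j + dj

for _ in range(30):
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            if grid[i][j]:
                empty = [(a, b) for a, b in neighbours(i, j) if not grid[a][b]]
                if empty and random.random() < 0.5:   # growth into empty site
                    a, b = random.choice(empty)
                    new[a][b] = 1
                if empty and random.random() < 0.05:  # erosion of exposed cell
                    new[i][j] = 0
    grid = new

biomass = sum(map(sum, grid))
print(biomass)  # total occupied sites after 30 steps
```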

  14. A Simple Sensor Model for THUNDER Actuators

    NASA Technical Reports Server (NTRS)

    Campbell, Joel F.; Bryant, Robert G.

    2009-01-01

    A quasi-static (low-frequency) model is developed for THUNDER actuators configured as displacement sensors, based on a simple Rayleigh-Ritz technique. This model is used to calculate charge as a function of displacement. Using this and the calculated capacitance, voltage vs. displacement and voltage vs. electrical load curves are generated and compared with measurements. It is shown that this model gives acceptable results and is useful for determining rough estimates of sensor output for various loads, laminate configurations and thicknesses.

  15. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  16. Simple Process-Based Simulators for Generating Spatial Patterns of Habitat Loss and Fragmentation: A Review and Introduction to the G-RaFFe Model

    PubMed Central

    Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108

  17. Simple process-based simulators for generating spatial patterns of habitat loss and fragmentation: a review and introduction to the G-RaFFe model.

    PubMed

    Pe'er, Guy; Zurita, Gustavo A; Schober, Lucia; Bellocq, Maria I; Strer, Maximilian; Müller, Michael; Pütz, Sandro

    2013-01-01

    Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model "G-RaFFe" generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature.

  18. Architecture with GIDEON, A Program for Design in Structural DNA Nanotechnology

    PubMed Central

    Birac, Jeffrey J.; Sherman, William B.; Kopatsch, Jens; Constantinou, Pamela E.; Seeman, Nadrian C.

    2012-01-01

    We present geometry based design strategies for DNA nanostructures. The strategies have been implemented with GIDEON – a Graphical Integrated Development Environment for OligoNucleotides. GIDEON has a highly flexible graphical user interface that facilitates the development of simple yet precise models, and the evaluation of strains therein. Models are built on a simple model of undistorted B-DNA double-helical domains. Simple point and click manipulations of the model allow the minimization of strain in the phosphate-backbone linkages between these domains and the identification of any steric clashes that might occur as a result. Detailed analysis of 3D triangles yields clear predictions of the strains associated with triangles of different sizes. We have carried out experiments that confirm that 3D triangles form well only when their geometrical strain is less than 4% deviation from the estimated relaxed structure. Thus geometry-based techniques alone, without energetic considerations, can be used to explain general trends in DNA structure formation. We have used GIDEON to build detailed models of double crossover and triple crossover molecules, evaluating the non-planarity associated with base tilt and junction mis-alignments. Computer modeling using a graphical user interface overcomes the limited precision of physical models for larger systems, and the limited interaction rate associated with earlier, command-line driven software. PMID:16630733

  19. A simple model for indentation creep

    NASA Astrophysics Data System (ADS)

    Ginder, Ryan S.; Nix, William D.; Pharr, George M.

    2018-03-01

    A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.

  20. Cooling tower plume - model and experiment

    NASA Astrophysics Data System (ADS)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

The paper describes a simple model of the so-called steam plume, which in many cases forms during the operation of evaporative cooling systems at power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. The paper concludes by presenting a simple experiment through which the results of the designed model will be validated in subsequent work.

  1. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  2. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, which were divided into three categories: simple approximations, artificial neural network-based approaches and continuum damage mechanics models, were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
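One classic member of the "simple approximations" category is Manson's universal-slopes equation, which estimates the entire strain-life curve from monotonic tensile properties alone. The sketch below uses the standard universal-slopes constants; the material property values are illustrative placeholders, not the paper's direct-quenched steel:

```python
def universal_slopes_strain_amplitude(N, S_u, E, eps_f):
    """Manson's universal-slopes estimate of the strain-life curve from
    monotonic properties only:
      elastic term ~ ultimate strength / modulus, slope -0.12
      plastic term ~ true fracture ductility, slope -0.6
    N is reversals to failure; returns strain amplitude (half the range)."""
    elastic = 3.5 * (S_u / E) * N ** -0.12
    plastic = eps_f ** 0.6 * N ** -0.6
    return (elastic + plastic) / 2.0

# Illustrative steel-like values: S_u = 900 MPa, E = 210 GPa, eps_f = 0.8
short_life = universal_slopes_strain_amplitude(1e3, 900.0, 210e3, 0.8)
long_life = universal_slopes_strain_amplitude(1e6, 900.0, 210e3, 0.8)
```

The appeal noted in the abstract is visible here: no fatigue testing is required, only a tensile test.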

  3. Cloud-Based Tools to Support High-Resolution Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Jones, N.; Nelson, J.; Swain, N.; Christensen, S.

    2013-12-01

The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers for using such models on a more routine basis include massive amounts of spatial data that must be processed for each new scenario and lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of these roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HT Condor, CKAN, and Python. This open source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HT Condor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482.

  4. A simple and accurate rule-based modeling framework for simulation of autocrine/paracrine stimulation of glioblastoma cell motility and proliferation by L1CAM in 2-D culture.

    PubMed

    Caccavale, Justin; Fiumara, David; Stapf, Michael; Sweitzer, Liedeke; Anderson, Hannah J; Gorky, Jonathan; Dhurjati, Prasad; Galileo, Deni S

    2017-12-11

    Glioblastoma multiforme (GBM) is a devastating brain cancer for which there is no known cure. Its malignancy is due to rapid cell division along with high motility and invasiveness of cells into the brain tissue. Simple 2-dimensional laboratory assays (e.g., a scratch assay) commonly are used to measure the effects of various experimental perturbations, such as treatment with chemical inhibitors. Several mathematical models have been developed to aid the understanding of the motile behavior and proliferation of GBM cells. However, many are mathematically complicated, look at multiple interdependent phenomena, and/or use modeling software not freely available to the research community. These attributes make the adoption of models and simulations of even simple 2-dimensional cell behavior an uncommon practice by cancer cell biologists. Herein, we developed an accurate, yet simple, rule-based modeling framework to describe the in vitro behavior of GBM cells that are stimulated by the L1CAM protein using freely available NetLogo software. In our model L1CAM is released by cells to act through two cell surface receptors and a point of signaling convergence to increase cell motility and proliferation. A simple graphical interface is provided so that changes can be made easily to several parameters controlling cell behavior, and behavior of the cells is viewed both pictorially and with dedicated graphs. We fully describe the hierarchical rule-based modeling framework, show simulation results under several settings, describe the accuracy compared to experimental data, and discuss the potential usefulness for predicting future experimental outcomes and for use as a teaching tool for cell biology students. It is concluded that this simple modeling framework and its simulations accurately reflect much of the GBM cell motility behavior observed experimentally in vitro in the laboratory. 
Our framework can be modified easily to suit the needs of investigators interested in other similar intrinsic or extrinsic stimuli that influence cancer or other cell behavior. This modeling framework of a commonly used experimental motility assay (scratch assay) should be useful to both researchers of cell motility and students in a cell biology teaching laboratory.
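The authors implement their framework in NetLogo; as a language-neutral illustration of the rule-based idea, here is a minimal Python sketch in which each cell follows two local rules per tick, a random motility step and a stochastic division, both of which an autocrine stimulus such as L1CAM would scale up. All names, rules, and rates here are assumptions for illustration, not the published model:

```python
import random

def step_cells(cells, motility=1.0, divide_p=0.01, rng=random.Random(0)):
    """One tick of a minimal rule-based model: each cell takes a random
    step scaled by 'motility' and divides with probability 'divide_p'."""
    new_cells = []
    for (x, y) in cells:
        x += motility * rng.uniform(-1.0, 1.0)
        y += motility * rng.uniform(-1.0, 1.0)
        new_cells.append((x, y))
        if rng.random() < divide_p:
            new_cells.append((x, y))   # daughter cell placed at same spot
    return new_cells

# 100 cells seeded at the origin, simulated for 50 ticks
cells = [(0.0, 0.0)] * 100
for _ in range(50):
    cells = step_cells(cells)
```

Raising `motility` or `divide_p` mimics stimulation; comparing cell spread and counts against scratch-assay measurements is the calibration step the paper describes.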

  5. An integrated Gaussian process regression for prediction of remaining useful life of slow speed bearings based on acoustic emission

    NASA Astrophysics Data System (ADS)

    Aye, S. A.; Heyns, P. S.

    2017-02-01

This paper proposes an optimal Gaussian process regression (GPR) for the prediction of remaining useful life (RUL) of slow speed bearings based on a novel degradation assessment index obtained from acoustic emission signal. The optimal GPR is obtained from an integration or combination of existing simple mean and covariance functions in order to capture the observed trend of the bearing degradation as well as the irregularities in the data. The resulting integrated GPR model provides an excellent fit to the data and improves over the simple GPR models that are based on simple mean and covariance functions. In addition, it achieves a low percentage error prediction of the remaining useful life of slow speed bearings. These findings are robust under varying operating conditions such as loading and speed and can be applied to nonlinear and nonstationary machine response signals useful for effective preventive machine maintenance purposes.
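The core idea of combining simple covariance functions so that one kernel captures the trend and another the irregularities can be sketched in a few lines of NumPy. This is a generic illustration of kernel addition in GPR, not the authors' specific mean/covariance choices:

```python
import numpy as np

def rbf(a, b, length=0.3, var=1.0):
    # Squared-exponential part: smooth local irregularities
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def linear(a, b, var=1.0):
    # Dot-product part: long-term degradation trend
    return var * a[:, None] * b[None, :]

def gpr_predict(x_tr, y_tr, x_te, noise=1e-2):
    """Posterior mean of a GP whose covariance is the SUM of the two
    simple kernels above (kernel sums are still valid kernels)."""
    k = lambda a, b: rbf(a, b) + linear(a, b)
    K = k(x_tr, x_tr) + noise * np.eye(len(x_tr))
    return k(x_te, x_tr) @ np.linalg.solve(K, y_tr)

x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 0.1 * np.sin(15.0 * x)   # trend plus irregularity
pred = gpr_predict(x, y, x)
```

Neither kernel alone fits both components well; their sum does, which is the "integrated" behaviour the abstract describes.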

  6. Evaluation of a simple, point-scale hydrologic model in simulating soil moisture using the Delaware environmental observing system

    NASA Astrophysics Data System (ADS)

    Legates, David R.; Junghenn, Katherine T.

    2018-04-01

Many local weather station networks that measure a number of meteorological variables (i.e., mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made which, when coupled with simple surface characteristics available from soil surveys, can be used to obtain estimates of soil moisture. The question is: Can meteorological data be used with a simple hydrologic model to accurately estimate daily soil moisture at a mesonetwork site? Using a state-of-the-art mesonetwork that also includes soil moisture measurements across the US State of Delaware, the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model based on these mesonetwork observations to estimate site-specific soil moisture is determined. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model has particular trouble in that it cannot properly simulate the slow drainage that occurs in poorly drained soils after heavy rains, and interception loss resulting from grass not being kept short-cropped as expected also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.
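A Thornthwaite/Mather-style daily water balance reduces to a "bucket" that refills on wet days and dries exponentially on dry days. The sketch below is a minimal generic version of that scheme (parameter values and the simple refill rule are assumptions, not the paper's calibrated model):

```python
import math

def water_balance(precip, pet, capacity=100.0, storage=50.0):
    """Minimal Thornthwaite/Mather-style daily bucket model (all in mm).
    Wet days (P >= PET) refill the bucket, with any excess leaving as
    runoff/drainage; dry days draw storage down along the classic
    exponential drying curve storage *= exp(deficit / capacity)."""
    out = []
    for p, e in zip(precip, pet):
        if p >= e:
            storage = min(storage + (p - e), capacity)
        else:
            storage *= math.exp((p - e) / capacity)  # (p - e) < 0 here
        out.append(storage)
    return out

# Five days: one wet day, two dry days, one heavy-rain day, one dry day
soil = water_balance(precip=[12, 0, 0, 40, 0], pet=[3, 4, 5, 3, 4])
```

The instantaneous refill on rain days is exactly the simplification the abstract flags: slow drainage in poorly drained soils after heavy rain is not represented.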

  7. Charge Transfer Inefficiency in Pinned Photodiode CMOS image sensors: Simple Montecarlo modeling and experimental measurement based on a pulsed storage-gate method

    NASA Astrophysics Data System (ADS)

    Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart

    2016-11-01

The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Montecarlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable to compare transfer efficiency performances for different pixel geometries.
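The diffusion part of such a Monte Carlo model can be sketched as a random walk of one photoelectron toward the transfer gate. This illustrative version (all dimensions and step sizes are assumptions) omits the potential obstacles the paper also models:

```python
import random

def transfer_time_ns(ppd_len_um=5.0, step_um=0.05, dt_ns=0.005, seed=0):
    """One photoelectron performing an unbiased random walk from a
    uniformly random birth position toward the transfer gate at x = 0
    (absorbing); the far edge of the photodiode reflects."""
    rng = random.Random(seed)
    x = rng.uniform(0.0, ppd_len_um)
    t = 0.0
    while x > 0.0:
        x += rng.choice((-step_um, step_um))
        if x > ppd_len_um:
            x = ppd_len_um          # reflecting boundary at the far edge
        t += dt_ns
    return t

# Monte Carlo estimate: average over many independent electrons
times = [transfer_time_ns(seed=s) for s in range(100)]
mean_ns = sum(times) / len(times)
```

Repeating the run with a potential barrier inserted along the path (a position-dependent step bias) would reproduce the "obstacle" effect the abstract mentions.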

  8. Dark matter and MOND dynamical models of the massive spiral galaxy NGC 2841

    NASA Astrophysics Data System (ADS)

    Samurović, S.; Vudragović, A.; Jovanović, M.

    2015-08-01

We study dynamical models of the massive spiral galaxy NGC 2841 using both the Newtonian models with Navarro-Frenk-White (NFW) and isothermal dark haloes, as well as various MOND (MOdified Newtonian Dynamics) models. We use the observations coming from several publicly available databases: we use radio data, near-infrared photometry as well as spectroscopic observations. In our models, we find that both tested Newtonian dark matter approaches can successfully fit the observed rotational curve of NGC 2841. The three tested MOND models (standard, simple and, for the first time applied to another spiral galaxy than the Milky Way, Bekenstein's toy model) provide fits of the observed rotational curve with various degrees of success: the best result was obtained with the standard MOND model. For both approaches, Newtonian and MOND, the values of the mass-to-light ratios of the bulge are consistent with the predictions from the stellar population synthesis (SPS) based on the Salpeter initial mass function (IMF). Also, for Newtonian and simple and standard MOND models, the estimated stellar mass-to-light ratios of the disc agree with the predictions from the SPS models based on the Kroupa IMF, whereas the toy MOND model provides too low a value of the stellar mass-to-light ratio, incompatible with the predictions of the tested SPS models. In all our MOND models, we vary the distance to NGC 2841; our best-fitting standard and toy models use values higher than the Cepheid-based distance to the galaxy, whereas the best-fitting simple MOND model is based on a lower value of the distance. The best-fitting NFW model is inconsistent with the predictions of the Λ cold dark matter cosmology, because the inferred concentration index is too high for the established virial mass.
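The "simple" and "standard" MOND models referred to here differ only in the interpolating function μ(x) in g·μ(g/a0) = g_N, and both admit closed-form inversions. A sketch of how a Newtonian rotation speed maps to a MOND one (a0 and the unit conversions are the conventional values; this is generic MOND algebra, not the paper's full fitting pipeline):

```python
import math

def mond_velocity(v_newton_kms, r_kpc, a0=1.2e-10, mu="simple"):
    """Circular speed from g * mu(g/a0) = g_N, inverted analytically for
    mu(x) = x/(1+x) ('simple') and mu(x) = x/sqrt(1+x^2) ('standard')."""
    kpc = 3.086e19                                     # metres per kpc
    gN = (v_newton_kms * 1e3) ** 2 / (r_kpc * kpc)     # Newtonian accel., m/s^2
    y = gN / a0
    if mu == "simple":
        g = a0 * (y + math.sqrt(y * y + 4.0 * y)) / 2.0
    else:  # standard
        g = a0 * math.sqrt((y * y + y * math.sqrt(y * y + 4.0)) / 2.0)
    return math.sqrt(g * r_kpc * kpc) / 1e3            # km/s

v_low = mond_velocity(50.0, 30.0)                # deep-MOND regime: boosted
v_high = mond_velocity(200.0, 1.0)               # high-acceleration: ~Newtonian
v_high_std = mond_velocity(200.0, 1.0, mu="standard")
```

In the outer, low-acceleration disc the MOND speed substantially exceeds the Newtonian one (flattening the rotation curve without dark matter), while in the inner galaxy both functions return essentially the Newtonian value, the standard μ converging faster.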

9. A Nonlinear Regression Model Estimating Single Source Concentrations of Primary and Secondarily Formed PM2.5

    EPA Science Inventory

    Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...

  10. A simple, analytic 3-dimensional downburst model based on boundary layer stagnation flow

    NASA Technical Reports Server (NTRS)

    Oseguera, Rosa M.; Bowles, Roland L.

    1988-01-01

A simple downburst model is developed for use in batch and real-time piloted simulation studies of guidance strategies for terminal area transport aircraft operations in wind shear conditions. The model represents an axisymmetric stagnation point flow, based on velocity profiles from the Terminal Area Simulation System (TASS) model developed by Proctor, and satisfies the mass continuity equation in cylindrical coordinates. Altitude dependence, including boundary layer effects near the ground, closely matches real-world measurements, as do the increase, peak, and decay of outflow and downflow with increasing distance from the downburst center. Equations for horizontal and vertical winds were derived and found to be infinitely differentiable, with no singular points existent in the flow field. In addition, a simple relationship exists among the ratio of maximum horizontal to vertical velocities, the downdraft radius, depth of outflow, and altitude of maximum outflow. In use, a microburst can be modeled by specifying four characteristic parameters; the velocity components in the x, y, and z directions and the corresponding nine partial derivatives are obtained easily from the velocity equations.
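The starting point for such a model is the ideal axisymmetric stagnation-point flow, which satisfies mass continuity identically; the published model layers radial decay and boundary-layer shaping functions on top of it. A minimal sketch of the ideal flow with a finite-difference continuity check (the shaping functions are omitted, so this is the skeleton, not the full TASS-matched model):

```python
def stagnation_flow(r, z, lam=0.1):
    """Ideal axisymmetric stagnation flow in cylindrical coordinates:
    radial outflow u = lam*r/2, downdraft w = -lam*z. Continuity,
    (1/r) d(r*u)/dr + dw/dz = lam - lam = 0, holds identically."""
    return lam * r / 2.0, -lam * z

# Numerical continuity check at one point (r0, z0)
r0, z0, h = 2.0, 1.0, 1e-5
u_p, _ = stagnation_flow(r0 + h, z0)
u_m, _ = stagnation_flow(r0 - h, z0)
_, w_p = stagnation_flow(r0, z0 + h)
_, w_m = stagnation_flow(r0, z0 - h)
div = (((r0 + h) * u_p - (r0 - h) * u_m) / (2 * h)) / r0 + (w_p - w_m) / (2 * h)
```

Because u and w are polynomials in r and z, the divergence vanishes to machine precision, which is the property the abstract highlights (smooth, singularity-free, mass-conserving winds).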

  11. Geometrical correlations in the nucleosomal DNA conformation and the role of the covalent bonds rigidity

    PubMed Central

    Ghorbani, Maryam; Mohammad-Rafiee, Farshid

    2011-01-01

    We develop a simple elastic model to study the conformation of DNA in the nucleosome core particle. In this model, the changes in the energy of the covalent bonds that connect the base pairs of each strand of the DNA double helix, as well as the lateral displacements and the rotation of adjacent base pairs are considered. We show that because of the rigidity of the covalent bonds in the sugar-phosphate backbones, the base pair parameters are highly correlated, especially, strong twist-roll-slide correlation in the conformation of the nucleosomal DNA is vividly observed in the calculated results. This simple model succeeds to account for the detailed features of the structure of the nucleosomal DNA, particularly, its more important base pair parameters, roll and slide, in good agreement with the experimental results. PMID:20972223

  12. Manipulators with flexible links: A simple model and experiments

    NASA Technical Reports Server (NTRS)

    Shimoyama, Isao; Oppenheim, Irving J.

    1989-01-01

    A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real time computation as might be applied in model based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.

  13. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.

  14. A simple marriage model for the power-law behaviour in the frequency distributions of family names

    NASA Astrophysics Data System (ADS)

    Wu, Hao-Yun; Chou, Chung-I.; Tseng, Jie-Jun

    2011-01-01

    In many countries, the frequency distributions of family names are found to decay as a power law with an exponent ranging from 1.0 to 2.2. In this work, we propose a simple marriage model which can reproduce this power-law behaviour. Our model, based on the evolution of families, consists of the growth of big families and the formation of new families. Preliminary results from the model show that the name distributions are in good agreement with empirical data from Taiwan and Norway.
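Growth of big families in proportion to their size, plus occasional founding of new families, is a Yule/Simon-type process, and that combination is what generates the power-law tail. A minimal sketch of such a process (the specific update rule and rates are illustrative assumptions, not necessarily the authors' exact marriage rules):

```python
import random

def marriage_model(steps=20000, p_new=0.02, seed=1):
    """Each step adds one individual: with probability p_new they found
    a new family; otherwise they join a family chosen in proportion to
    its current size (growth of big families via marriage/birth).
    Size-proportional choice is done by picking a random individual."""
    rng = random.Random(seed)
    members = [0]        # one entry per individual: their family id
    sizes = [1]          # sizes[f] = current size of family f
    for _ in range(steps):
        if rng.random() < p_new:
            sizes.append(1)
            members.append(len(sizes) - 1)
        else:
            f = rng.choice(members)     # proportional to family size
            sizes[f] += 1
            members.append(f)
    return sizes

sizes = marriage_model()
```

Ranking `sizes` from largest to smallest and plotting on log-log axes shows the heavy tail; in Simon-type processes the tail exponent is controlled by the new-family probability, consistent with the 1.0-2.2 range the abstract reports across countries.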

  15. Simple Electrolyzer Model Development for High-Temperature Electrolysis System Analysis Using Solid Oxide Electrolysis Cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JaeHwa Koh; DuckJoo Yoon; Chang H. Oh

    2010-07-01

    An electrolyzer model for the analysis of a hydrogen-production system using a solid oxide electrolysis cell (SOEC) has been developed, and the effects for principal parameters have been estimated by sensitivity studies based on the developed model. The main parameters considered are current density, area specific resistance, temperature, pressure, and molar fraction and flow rates in the inlet and outlet. Finally, a simple model for a high-temperature hydrogen-production system using the solid oxide electrolysis cell integrated with very high temperature reactors is estimated.

  16. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. 
The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.

  17. Comparison of human driver dynamics in simulators with complex and simple visual displays and in an automobile on the road

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Klein, R. H.

    1975-01-01

    As part of a comprehensive program exploring driver/vehicle system response in lateral steering tasks, driver/vehicle system describing functions and other dynamic data have been gathered in several milieu. These include a simple fixed base simulator with an elementary roadway delineation only display; a fixed base statically operating automobile with a terrain model based, wide angle projection system display; and a full scale moving base automobile operating on the road. Dynamic data with the two fixed base simulators compared favorably, implying that the impoverished visual scene, lack of engine noise, and simplified steering wheel feel characteristics in the simple simulator did not induce significant driver dynamic behavior variations. The fixed base vs. moving base comparisons showed substantially greater crossover frequencies and phase margins on the road course.

  18. How effective is advertising in duopoly markets?

    NASA Astrophysics Data System (ADS)

    Sznajd-Weron, K.; Weron, R.

    2003-06-01

A simple Ising spin model which can describe the mechanism of advertising in a duopoly market is proposed. In contrast to other agent-based models, the influence does not flow inward from the surrounding neighbors to the center site, but spreads outward from the center to the neighbors. The model thus describes the spread of opinions among customers. It is shown via standard Monte Carlo simulations that very simple rules and the inclusion of an external field (an advertising campaign) lead to phase transitions.
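The outward-flow rule can be sketched as a Sznajd-type update on a ring of spins (customers choosing brand +1 or -1), with the advertising field biasing undecided neighbourhoods. The specific rule variant and parameter values below are illustrative assumptions, not the paper's exact dynamics:

```python
import random

def run_ads(n=200, h=0.05, sweeps=200, seed=3):
    """Outward-flow opinion dynamics on a ring: a pair of agreeing
    neighbours convinces the two sites just outside the pair; when the
    pair disagrees, advertising flips the outer sites to +1 (the
    advertised brand) with probability h. Returns the magnetization,
    i.e. the market-share imbalance in [-1, 1]."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        j = (i + 1) % n
        left, right = (i - 1) % n, (j + 1) % n
        if spins[i] == spins[j]:
            spins[left] = spins[right] = spins[i]
        elif rng.random() < h:
            spins[left] = spins[right] = 1
    return sum(spins) / n

m = run_ads()
```

Sweeping h from 0 upward and averaging the final magnetization over many seeds is the kind of Monte Carlo experiment in which the phase transition mentioned in the abstract shows up.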

  19. Modelling Nitrogen Oxides in Los Angeles Using a Hybrid Dispersion/Land Use Regression Model

    NASA Astrophysics Data System (ADS)

    Wilton, Darren C.

The goal of this dissertation is to develop models capable of predicting long term annual average NOx concentrations in urban areas. Predictions from simple meteorological dispersion models and seasonal proxies for NO2 oxidation were included as covariates in a land use regression (LUR) model for NOx in Los Angeles, CA. The NOx measurements were obtained from a comprehensive measurement campaign that is part of the Multi-Ethnic Study of Atherosclerosis Air Pollution Study (MESA Air). Simple land use regression models were initially developed using a suite of GIS-derived land use variables developed from various buffer sizes (R²=0.15). Caline3, a simple steady-state Gaussian line source model, was initially incorporated into the land-use regression framework. The addition of this spatio-temporally varying Caline3 covariate improved the simple LUR model predictions. The extent of improvement was much more pronounced for models based solely on the summer measurements (simple LUR: R²=0.45; Caline3/LUR: R²=0.70), than it was for models based on all seasons (R²=0.20). We then used a Lagrangian dispersion model to convert static land use covariates for population density, commercial/industrial area into spatially and temporally varying covariates. The inclusion of these covariates resulted in significant improvement in model prediction (R²=0.57). In addition to the dispersion model covariates described above, a two-week average value of daily peak-hour ozone was included as a surrogate of the oxidation of NO2 during the different sampling periods. This additional covariate further improved overall model performance for all models. The best model by 10-fold cross validation (R²=0.73) contained the Caline3 prediction, a static covariate for length of A3 roads within 50 meters, the Calpuff-adjusted covariates derived from both population density and industrial/commercial land area, and the ozone covariate.
This model was tested against annual average NOx concentrations from an independent data set from the EPA's Air Quality System (AQS) and MESA Air fixed site monitors, and performed very well (R²=0.82).

  20. A case-mix classification system for explaining healthcare costs using administrative data in Italy.

    PubMed

    Corti, Maria Chiara; Avossa, Francesco; Schievano, Elena; Gallina, Pietro; Ferroni, Eliana; Alba, Natalia; Dotto, Matilde; Basso, Cristina; Netti, Silvia Tiozzo; Fedeli, Ugo; Mantoan, Domenico

    2018-03-04

    The Italian National Health Service (NHS) provides universal coverage to all citizens, granting primary and hospital care with a copayment system for outpatient and drug services. Financing of Local Health Trusts (LHTs) is based on a capitation system adjusted only for age, gender and area of residence. We applied a risk-adjustment system (Johns Hopkins Adjusted Clinical Groups System, ACG® System) in order to explain health care costs using routinely collected administrative data in the Veneto Region (North-eastern Italy). All residents in the Veneto Region were included in the study. The ACG system was applied to classify the regional population based on the following information sources for the year 2015: Hospital Discharges, Emergency Room visits, Chronic disease registry for copayment exemptions, ambulatory visits, medications, the Home care database, and drug prescriptions. Simple linear regressions were used to contrast an age-gender model to models incorporating more comprehensive risk measures aimed at predicting health care costs. A simple age-gender model explained only 8% of the variance of 2015 total costs. Adding diagnoses-related variables provided a 23% increase, while pharmacy based variables provided an additional 17% increase in explained variance. The adjusted R-squared of the comprehensive model was 6 times that of the simple age-gender model. ACG System provides substantial improvement in predicting health care costs when compared to simple age-gender adjustments. Aging itself is not the main determinant of the increase of health care costs, which is better explained by the accumulation of chronic conditions and the resulting multimorbidity. Copyright © 2018. Published by Elsevier B.V.

  1. Modeling method of time sequence model based grey system theory and application proceedings

    NASA Astrophysics Data System (ADS)

    Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang

    2015-12-01

This article presents a modeling method for the grey system GM(1,1) model based on reusing information and grey system theory. The method not only greatly enhances the fitting and predicting accuracy of the GM(1,1) model, but also retains the merit of simple computation of the conventional approach. Using this method, we give a syphilis trend forecasting method based on reusing information and the grey system GM(1,1) model.
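The conventional GM(1,1) procedure the article builds on is short enough to sketch in full: accumulate the series (AGO), fit the whitened equation dx1/dt + a·x1 = b by least squares against the mean (background) sequence, then difference the exponential solution back. This is the standard textbook GM(1,1), not the article's reusing-information refinement:

```python
import numpy as np

def gm11(x0, n_forecast=1):
    """Classic grey GM(1,1) fit and forecast."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background (mean) sequence
    B = np.column_stack((-z1, np.ones_like(z1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(np.concatenate(([0.0], x1_hat)))  # back to original scale

series = [2.0, 2.2, 2.42, 2.662, 2.9282]   # exact 10% growth per step
fit = gm11(series, n_forecast=1)           # fit[:5] tracks the data;
                                           # fit[5] is the one-step forecast
```

GM(1,1) is essentially an exponential-trend model fitted from very few points, which is why it suits small epidemiological series such as the syphilis data mentioned; on the geometric test series above the one-step forecast lands within a fraction of a percent of the true continuation (2.9282 × 1.1 ≈ 3.221).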

  2. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
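A model of the kind described, discrete-time, discrete-state, with separate process and observation error, can be written as a chain-binomial SIR. The sketch below is a generic version with illustrative parameter values, not the paper's exact specification, and it only simulates the data one would then hand to JAGS, NIMBLE, or Stan for fitting:

```python
import random

def chain_binomial_sir(n=1000, i0=5, beta=0.3, gamma=0.1,
                       report_p=0.5, steps=100, seed=42):
    """Discrete-time, discrete-state SIR. Binomial draws give process
    error (who gets infected / recovers each step); a second binomial
    draw gives observation error (which new cases are reported)."""
    rng = random.Random(seed)

    def binom(trials, p):
        return sum(rng.random() < p for _ in range(trials))

    S, I = n - i0, i0
    reported = []
    for _ in range(steps):
        p_inf = 1.0 - (1.0 - beta / n) ** I     # per-susceptible risk
        new_inf = binom(S, p_inf)               # process error
        new_rec = binom(I, gamma)
        S -= new_inf
        I += new_inf - new_rec
        reported.append(binom(new_inf, report_p))   # observation error
    return reported

cases = chain_binomial_sir()   # noisy case-report time series
```

Because the parameters generating `cases` are known, recovery of beta, gamma, and report_p by each MCMC platform can be checked directly, which is the fits-to-simulated-data design the abstract describes.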

  3. Rainfall runoff modelling of the Upper Ganga and Brahmaputra basins using PERSiST.

    PubMed

    Futter, M N; Whitehead, P G; Sarkar, S; Rodda, H; Crossman, J

    2015-06-01

    There are ongoing discussions about the appropriate level of complexity and sources of uncertainty in rainfall runoff models. Simulations for operational hydrology, flood forecasting or nutrient transport all warrant different levels of complexity in the modelling approach. More complex model structures are appropriate for simulations of land-cover dependent nutrient transport, while more parsimonious model structures may be adequate for runoff simulation. The appropriate level of complexity is also dependent on data availability. Here, we use PERSiST, a simple, semi-distributed dynamic rainfall-runoff modelling toolkit, to simulate flows in the Upper Ganges and Brahmaputra rivers. We present two sets of simulations driven by single time series of daily precipitation and temperature, using simple (A) and complex (B) model structures based on uniform and hydrochemically relevant land covers, respectively. Models were compared based on ensembles of Bayesian Information Criterion (BIC) statistics. Equifinality was observed for parameters but not for model structures. Model performance was better for the more complex (B) structural representations than for parsimonious model structures, although the ensembles of BIC statistics suggested that neither structural representation was preferable in a statistical sense. The results show that structural uncertainty is more important than parameter uncertainty. Simulations presented here confirm that relatively simple models with limited data requirements can be used to credibly simulate flows and water balance components needed for nutrient flux modelling in large, data-poor basins.

  4. Predicting the response of seven Asian glaciers to future climate scenarios using a simple linear glacier model

    NASA Astrophysics Data System (ADS)

    Ren, Diandong; Karoly, David J.

    2008-03-01

    Observations from seven Central Asian glaciers (35-55°N; 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of glacier length variations. The glacier model is based on first-order glacier dynamics and requires knowledge of the reference states of the forcing and of the glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate glacier length variations through 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ~4 km). Although some small glaciers are projected to lose nearly one-third of their year-2000 length, none of the glaciers studied here is threatened with disappearance by 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.
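    A sketch of a generic first-order linear glacier length model of the type described (an assumed Oerlemans-style formulation with invented parameter values, not the authors' calibrated model):

```python
import numpy as np

def glacier_length(T_anom, L0=10_000.0, tau=50.0, c=2_000.0, dt=1.0):
    """First-order linear glacier length response (illustrative parameters).

    dL'/dt = -(L' + c*T')/tau, where L' is the length anomaly from the
    reference length L0 (m), T' the regional temperature anomaly (K),
    tau the response time (yr) and c the sensitivity (m retreat per K).
    """
    Lp = 0.0
    lengths = []
    for Tp in T_anom:
        Lp += dt * (-(Lp + c * Tp) / tau)    # explicit Euler step
        lengths.append(L0 + Lp)
    return np.array(lengths)

# A sustained +1 K warming drives this glacier toward c*1 K = 2 km of retreat.
L = glacier_length(np.ones(300))
print(round(L[-1]), "m after 300 yr")
```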

  5. Forgetting in immediate serial recall: decay, temporal distinctiveness, or interference?

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-07-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate. Copyright (c) 2008 APA, all rights reserved.

  6. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
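    The two-phase procedure can be sketched as follows, with winner-take-all competitive learning choosing the RBF centres and least squares fitting the output weights. A toy 1-D regression stands in for the QSAR activity data, and the learning rate, width and epoch count are assumptions:

```python
import numpy as np

def scl_centres(X, k=4, lr=0.1, epochs=30, rng=None):
    """Phase 1: simple competitive learning (winner-take-all centre updates)."""
    rng = np.random.default_rng(rng)
    centres = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(np.linalg.norm(centres - x, axis=1))  # winning unit
            centres[w] += lr * (x - centres[w])                 # move winner
    return centres

def rbf_fit_predict(X, y, centres, width=1.0):
    """Phase 2: Gaussian RBF layer at the SCL centres, linear output weights."""
    def design(A):
        d = np.linalg.norm(A[:, None, :] - centres[None, :, :], axis=2)
        return np.exp(-(d / width) ** 2)
    w = np.linalg.lstsq(design(X), y, rcond=None)[0]
    return lambda A: design(A) @ w

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                          # stand-in for a biological activity
model = rbf_fit_predict(X, y, scl_centres(X, k=6, rng=0))
err = np.max(np.abs(model(X) - y))
print("max abs error:", err)
```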

  7. Bayesian analysis of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Ho, Chih-Hsiang

    1990-10-01

    The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
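    The key consequence, that a gamma prior on the Poisson rate yields overdispersed, negative-binomial counts, is easy to verify by simulation (the rate values below are illustrative, not the volcano data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simple Poisson: constant eruptive rate -> variance equals the mean.
lam = 2.0
simple = rng.poisson(lam, size=100_000)

# Compound model: the rate itself is gamma-distributed across periods,
# which marginally yields a negative binomial count distribution.
shape, scale = 2.0, 1.0                  # gamma prior on lambda (mean = 2)
mixed = rng.poisson(rng.gamma(shape, scale, size=100_000))

print(simple.mean(), simple.var())       # both ~2.0
print(mixed.mean(), mixed.var())         # mean ~2.0, variance ~4.0 (overdispersed)
```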

  8. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  9. Can we estimate the cellular phone RF peak output power with a simple experiment?

    NASA Astrophysics Data System (ADS)

    Fioreze, Maycon; dos Santos Junior, Sauli; Goncalves Hönnicke, Marcelo

    2016-07-01

    Cellular phones are becoming increasingly useful tools for students. Since cell phones operate in the microwave band, they can be used to motivate students to demonstrate and better understand the properties of electromagnetic waves. However, since these waves operate at higher frequencies (L-band, from 800 MHz to 2 GHz), it is not simple to detect them. Usually, expensive real-time high-frequency oscilloscopes are required. Indirect measurements are also possible through heat-based and diode-detector-based radio-frequency (RF) power sensors. Another didactic and intuitive way is to explore a simple and inexpensive detection system, based on the interference effect caused in the electronic circuit of TV and PC loudspeakers, and to investigate different properties of the cell phones’ RF electromagnetic waves, such as their power and modulation frequency. This manuscript proposes an attempt to quantify these measurements, based on a simple Friis equation model and the time constant of the circuit used in the detection system, in order to present it didactically to students and even allow them to explore such a simple detection system at home.
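    A sketch of the Friis-equation step (free-space propagation and unit antenna gains are assumed, and the paper's detection-circuit time-constant correction is omitted):

```python
import math

def tx_power_estimate(p_received_w, distance_m, freq_hz,
                      gain_tx=1.0, gain_rx=1.0):
    """Invert the Friis free-space equation to estimate transmit power.

    P_r = P_t * G_t * G_r * (lambda / (4*pi*d))**2
    """
    wavelength = 3.0e8 / freq_hz
    path_gain = gain_tx * gain_rx * (wavelength / (4 * math.pi * distance_m)) ** 2
    return p_received_w / path_gain

# E.g. 1 nW picked up 1 m from a 900 MHz handset implies roughly 1.4 uW radiated.
print(tx_power_estimate(1e-9, 1.0, 900e6))
```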

  10. Expected Shannon Entropy and Shannon Differentiation between Subpopulations for Neutral Genes under the Finite Island Model.

    PubMed

    Chao, Anne; Jost, Lou; Hsieh, T C; Ma, K H; Sherwin, William B; Rollins, Lee Ann

    2015-01-01

    Shannon entropy H and related measures are increasingly used in molecular ecology and population genetics because (1) unlike measures based on heterozygosity or allele number, these measures weigh alleles in proportion to their population fraction, thus capturing a previously ignored aspect of allele frequency distributions that may be important in many applications; (2) these measures connect directly to the rich predictive mathematics of information theory; (3) Shannon entropy is completely additive and has an explicitly hierarchical nature; and (4) Shannon entropy-based differentiation measures obey strong monotonicity properties that heterozygosity-based measures lack. We derive simple new expressions for the expected values of the Shannon entropy of the equilibrium allele distribution at a neutral locus in a single isolated population under two models of mutation: the infinite allele model and the stepwise mutation model. Surprisingly, this complex stochastic system for each model has an entropy expressible as a simple combination of well-known mathematical functions. Moreover, entropy- and heterozygosity-based measures for each model are linked by simple relationships that are shown by simulations to be approximately valid even far from equilibrium. We also identify a bridge between the two models of mutation. We apply our approach to subdivided populations which follow the finite island model, obtaining the Shannon entropy of the equilibrium allele distributions of the subpopulations and of the total population. We also derive the expected mutual information and normalized mutual information ("Shannon differentiation") between subpopulations at equilibrium, and identify the model parameters that determine them. We apply our measures to data from the common starling (Sturnus vulgaris) in Australia. Our measures provide a test for neutrality that is robust to violations of equilibrium assumptions, as verified on real-world data from starlings.
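    The two basic quantities being contrasted can be computed directly from an allele-frequency vector (the frequencies below are invented for illustration):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i ln p_i of an allele-frequency vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0*log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

def heterozygosity(p):
    """Expected heterozygosity H_e = 1 - sum p_i^2."""
    p = np.asarray(p, dtype=float)
    return float(1.0 - (p ** 2).sum())

# Entropy weighs alleles by their population fraction, so a skewed
# distribution scores lower on both measures, but not proportionally:
even = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(even), heterozygosity(even))
print(shannon_entropy(skewed), heterozygosity(skewed))
```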

  11. Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)

    DTIC Science & Technology

    2008-03-01

    Repast (Recursive Porous Agent Simulation Toolkit; Java, Python, C#, open source) was designed for building agent-based models and simulations. Repast makes it easy for inexperienced users to build models by including a built-in simple model and by providing interfaces through menus and Python. MASON, a multi-agent modeling language extending Swarm, is among the other toolkits surveyed.

  12. A Simple Model of Global Aerosol Indirect Effects

    NASA Technical Reports Server (NTRS)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.

  13. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  14. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
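    A sketch of the kind of adjustment described: under a compound Poisson model the variance of the total death count is estimated from the sum of squared incident sizes rather than from the count itself. The function below is an illustration of that idea, not the paper's exact estimators:

```python
import math

def rate_ci(cases_per_incident, person_years, z=1.96):
    """Approximate CI for a mortality rate under a compound Poisson model.

    Incidents occur as a Poisson process; each contributes one or more
    deaths. Var(total deaths) is then estimated by sum(n_i^2), which
    reduces to the total count (the simple Poisson case) when every
    incident involves exactly one death.
    """
    total = sum(cases_per_incident)
    var_total = sum(n * n for n in cases_per_incident)
    rate = total / person_years
    se = math.sqrt(var_total) / person_years
    return rate, (rate - z * se, rate + z * se)

# Same total count, different clustering -> wider interval with clustering.
print(rate_ci([1] * 100, 1e6))             # 100 single-fatality incidents
print(rate_ci([1] * 90 + [5, 5], 1e6))     # two multiple-fatality incidents
```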

  15. Road simulation for four-wheel vehicle whole input power spectral density

    NASA Astrophysics Data System (ADS)

    Wang, Jiangbo; Qiang, Baomin

    2017-05-01

    The vibration of a running vehicle comes mainly from the road, and road roughness strongly influences ride performance, so simulating the road roughness power spectral density is important for analyzing automobile suspension system parameters and evaluating ride comfort. Firstly, based on the mathematical model of road roughness power spectral density, this paper establishes a random road model using integrated white noise. Then, in the MATLAB/Simulink environment, following the usual progression in automobile suspension research from a simple two-degree-of-freedom single-wheel vehicle model to a complex multiple-degree-of-freedom vehicle model, the paper builds a simple single-excitation input simulation model. Finally, a spectrum matrix is used to build a whole-vehicle excitation input simulation model. This simulation method rests on reliable and accurate mathematical theory, can be applied to random road simulation for any specified spectrum, and provides a pavement excitation model and a foundation for vehicle ride performance research and vibration simulation.
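    The integrated (filtered) white-noise road model can be sketched as a first-order stochastic equation driven by white noise. The coefficients follow the common single-wheel formulation, and the roughness value assumed below is illustrative of a moderately smooth road class:

```python
import numpy as np

def road_profile(v=20.0, Gq_n0=64e-6, n0=0.1, f0=0.01, T=60.0, dt=1e-3, rng=None):
    """Random road elevation from the filtered white-noise model:

        z'(t) = -2*pi*f0*v*z(t) + 2*pi*n0*sqrt(Gq_n0*v)*w(t)

    v: vehicle speed (m/s); Gq_n0: roughness coefficient (m^3) at the
    reference spatial frequency n0 (1/m); f0: low-frequency cutoff (Hz);
    w(t): unit-intensity Gaussian white noise.
    """
    rng = np.random.default_rng(rng)
    n = int(T / dt)
    w = rng.standard_normal(n) / np.sqrt(dt)     # discretized white noise
    z = np.zeros(n)
    for k in range(1, n):                        # Euler-Maruyama integration
        dz = (-2 * np.pi * f0 * v * z[k - 1]
              + 2 * np.pi * n0 * np.sqrt(Gq_n0 * v) * w[k - 1])
        z[k] = z[k - 1] + dz * dt
    return z

z = road_profile(rng=0)
print(z.std(), "m elevation std (order of a centimetre)")
```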

  16. Novel method of vulnerability assessment of simple landfills area using the multimedia, multipathway and multireceptor risk assessment (3MRA) model, China.

    PubMed

    Yuan, Ying; He, Xiao-Song; Xi, Bei-Dou; Wei, Zi-Min; Tan, Wen-Bing; Gao, Ru-Tai

    2016-11-01

    Vulnerability assessment of simple landfills was conducted using the multimedia, multipathway and multireceptor risk assessment (3MRA) model for the first time in China. The minimum safe thresholds of six contaminants (benzene, arsenic (As), cadmium (Cd), hexavalent chromium [Cr(VI)], divalent mercury [Hg(II)] and divalent nickel [Ni(II)]) in landfill and waste pile models were calculated by the 3MRA model. Furthermore, the vulnerability indexes of the six contaminants were predicted based on the model calculation. The results showed that the order of health risk vulnerability index was As > Hg(II) > Cr(VI) > benzene > Cd > Ni(II) in the landfill model, whereas the ecology risk vulnerability index was in the order of As > Hg(II) > Cr(VI) > Cd > benzene > Ni(II). In the waste pile model, the order of health risk vulnerability index was benzene > Hg(II) > Cr(VI) > As > Cd and Ni(II), whereas the ecology risk vulnerability index was in the order of Hg(II) > Cd > Cr(VI) > As > benzene > Ni(II). These results indicated that As, Hg(II) and Cr(VI) were the high-risk contaminants for simple landfills in China; their concentrations in soil and groundwater around simple landfills should be strictly monitored, and proper remediation is also recommended for simple landfills with high concentrations of contaminants. © The Author(s) 2016.

  17. Water balance models in one-month-ahead streamflow forecasting

    USGS Publications Warehouse

    Alley, William M.

    1985-01-01

    Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.

  18. ESTIMATION OF GROUNDWATER POLLUTION POTENTIAL BY PESTICIDES IN MID-ATLANTIC COASTAL PLAIN WATERSHEDS

    EPA Science Inventory

    A simple GIS-based transport model to estimate the potential for groundwater pollution by pesticides has been developed within the ArcView GIS environment. The pesticide leaching analytical model, which is based on one-dimensional advective-dispersive-reactive (ADR) transport, ha...

  19. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal part of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions of the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data provides robust confirmation of the simple statistical model presented.

  20. Development and Application of a Simple Hydrogeomorphic Model for Headwater Catchments

    EPA Science Inventory

    We developed a catchment model based on a hydrogeomorphic concept that simulates discharge from channel-riparian complexes, zero-order basins (ZOB, basins ZB and FA), and hillslopes. Multitank models simulate ZOB and hillslope hydrological response, while kinematic wave models pr...

  1. Pilot/vehicle model analysis of visually guided flight

    NASA Technical Reports Server (NTRS)

    Zacharias, Greg L.

    1991-01-01

    Information is given in graphical and outline form on a pilot/vehicle model description, control of altitude with simple terrain clues, simulated flight with visual scene delays, model-based in-cockpit display design, and some thoughts on the role of pilot/vehicle modeling.

  2. A simple inertial model for Neptune's zonal circulation

    NASA Technical Reports Server (NTRS)

    Allison, Michael; Lumetta, James T.

    1990-01-01

    Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L(D) of about 3000 km.

  3. Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.

    PubMed

    Peteranderl, Sonja; Oberauer, Klaus

    2018-01-01

    This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of extended time for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.

  4. A simple non-Markovian computational model of the statistics of soccer leagues: Emergence and scaling effects

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Vainstein, Mendeli H.; Lamb, Luis C.; Prado, Sandra D.

    2013-03-01

    We propose a novel probabilistic model that outputs the final standings of a soccer league, based on a simple dynamics that mimics a soccer tournament. In our model, a team is created with a defined potential (ability) which is updated during the tournament according to the results of previous games. The updated potential modifies a team future winning/losing probabilities. We show that this evolutionary game is able to reproduce the statistical properties of final standings of actual editions of the Brazilian tournament (Brasileirão) if the starting potential is the same for all teams. Other leagues such as the Italian (Calcio) and the Spanish (La Liga) tournaments have notoriously non-Gaussian traces and cannot be straightforwardly reproduced by this evolutionary non-Markovian model with simple initial conditions. However, we show that by setting the initial abilities based on data from previous tournaments, our model is able to capture the stylized statistical features of double round robin system (DRRS) tournaments in general. A complete understanding of these phenomena deserves much more attention, but we suggest a simple explanation based on data collected in Brazil: here several teams have been crowned champion in previous editions corroborating that the champion typically emerges from random fluctuations that partly preserve the Gaussian traces during the tournament. On the other hand, in the Italian and Spanish cases, only a few teams in recent history have won their league tournaments. These leagues are based on more robust and hierarchical structures established even before the beginning of the tournament. For the sake of completeness, we also elaborate a totally Gaussian model (which equalizes the winning, drawing, and losing probabilities) and we show that the scores of the Brazilian tournament “Brasileirão” cannot be reproduced. 
This shows that the evolutionary aspects are not superfluous and play an important role which must be considered in other alternative models. Finally, we analyze the distortions of our model in situations where a large number of teams is considered, showing the existence of a transition from a single- to a double-peaked histogram of the final classification scores. An interesting scaling is presented for different-sized tournaments.
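    A minimal sketch of such an evolutionary tournament dynamics (the fixed draw probability and the potential-update rule below are assumptions for illustration, not the paper's exact parameterization):

```python
import numpy as np

def season(n_teams=20, delta=1.0, rng=None):
    """Double round robin where a team's potential evolves with its results.

    Win probabilities are proportional to current potentials; the winner's
    potential grows by delta and the loser's shrinks (floored at 1).
    All teams start with the same potential, as in the Brazilian case.
    """
    rng = np.random.default_rng(rng)
    phi = np.full(n_teams, 100.0)            # equal starting abilities
    points = np.zeros(n_teams)
    for i in range(n_teams):
        for j in range(n_teams):
            if i == j:
                continue                     # each ordered pair = one leg
            p_home = phi[i] / (phi[i] + phi[j])
            r = rng.random()
            if r < 0.25:                     # assumed fixed draw probability
                points[[i, j]] += 1
            elif r < 0.25 + 0.75 * p_home:   # home win
                points[i] += 3
                phi[i] += delta
                phi[j] = max(1.0, phi[j] - delta)
            else:                            # away win
                points[j] += 3
                phi[j] += delta
                phi[i] = max(1.0, phi[i] - delta)
    return np.sort(points)[::-1]

print(season(rng=0))                         # final standings, champion first
```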

  5. Second Graders' Emerging Particle Models of Matter in the Context of Learning through Model-Based Inquiry

    ERIC Educational Resources Information Center

    Samarapungavan, Ala; Bryan, Lynn; Wills, Jamison

    2017-01-01

    In this paper, we present a study of second graders' learning about the nature of matter in the context of content-rich, model-based inquiry instruction. The goal of instruction was to help students learn to use simple particle models to explain states of matter and phase changes. We examined changes in students' ideas about matter, the coherence…

  6. Utility of an automated thermal-based approach for monitoring evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    A very simple remote sensing-based model for water use monitoring is presented. The model acronym DATTUTDUT, (Deriving Atmosphere Turbulent Transport Useful To Dummies Using Temperature) is a Dutch word which loosely translates as “It’s unbelievable that it works”. DATTUTDUT is fully automated and o...

  7. A thermal-based remote sensing modeling system for estimating evapotranspiration from field to global scales

    USDA-ARS?s Scientific Manuscript database

    Thermal-infrared remote sensing of land surface temperature provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition. This paper describes a robust but relatively simple thermal-based energy balance model that parameterizes the key soil/s...

  8. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    NASA Astrophysics Data System (ADS)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulation results with respect to observation is discussed in detail. The non-linear statistical bias correction is a suitable bias correction method for climate change data because it is simple and does not add up artificial uncertainties to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and both the methods are comparable. The bias correction is applied on temperature and precipitation variables for present climate and future projected data to make use of it in a simple statistical model to understand the future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project the changes in the agricultural yield over India from RCP4.5 and RCP8.5 scenarios. 
The results revealed a better convergence of model projections in the bias corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding the changes in agricultural productivity in the future with an agro-economic model or a simple statistical model. The statistical model indicated that the total food grain yield is going to increase over the Indian region in the future; the increase in the total food grain yield is approximately 50 kg/ha for the RCP4.5 scenario from 2001 until the end of 2100, and approximately 90 kg/ha for the RCP8.5 scenario from 2001 until the end of 2100. There are many studies using bias correction techniques, but this study applies the bias correction technique to future climate scenario data from CMIP5 models and applies it to crop statistics to find future crop yield changes over the Indian region.
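    The percentile-based quantile-mapping step can be sketched as empirical CDF matching. The gamma-distributed "rainfall" below is synthetic, standing in for the CMIP5 and observed data:

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Percentile-based quantile mapping (empirical CDF matching).

    Each model value is ranked against the historical model CDF and
    replaced by the observed value at the same percentile.
    """
    qs = np.linspace(0, 100, 101)
    mod_q = np.percentile(model_hist, qs)
    obs_q = np.percentile(obs, qs)
    # Map model values -> percentiles -> observed quantiles.
    return np.interp(model_fut, mod_q, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 4.0, 5000)              # "observed" monsoon rainfall
model_hist = rng.gamma(2.0, 3.0, 5000)       # biased (too dry) model climate
corrected = quantile_map(model_hist, obs, model_hist)
print(obs.mean(), model_hist.mean(), corrected.mean())
```

    Applying the same mapping to future-scenario model output scales the correction with each model's own projected change, as described in the abstract.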

  9. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
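The simple score-weighted averaging used in this record can be sketched in a few lines. The sea-level values and scores below are invented for illustration, and the assumption that a larger aggregate score means a better model-data fit is ours, not necessarily the paper's scoring convention.

```python
def score_weighted_stats(values, scores):
    """Score-weighted mean and spread of an ensemble quantity.

    Minimal sketch: weights are scores normalized to sum to 1,
    assuming a larger aggregate score indicates a better fit."""
    total = sum(scores)
    weights = [s / total for s in scores]
    mean = sum(w * v for w, v in zip(weights, values))
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return mean, var ** 0.5

# Hypothetical ensemble: equivalent sea-level rise (m) per run and its score.
eslr   = [3.0, 3.5, 2.5, 5.0]
scores = [0.9, 0.8, 0.7, 0.1]
mean, sd = score_weighted_stats(eslr, scores)
print(f"{mean:.2f} +/- {sd:.2f}")  # → "3.10 +/- 0.55"
```

The poorly scoring outlier run (5.0 m) contributes little to the weighted mean, which is the basic mechanism by which score weighting tightens the envelope relative to an unweighted average.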

  10. Evidence integration in model-based tree search

    PubMed Central

    Solway, Alec; Botvinick, Matthew M.

    2015-01-01

    Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action. PMID:26324932

  11. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  13. Groundwater modelling in decision support: reflections on a unified conceptual framework

    NASA Astrophysics Data System (ADS)

    Doherty, John; Simmons, Craig T.

    2013-11-01

    Groundwater models are commonly used as a basis for environmental decision-making. There has been recent discussion and debate regarding model simplicity and complexity, and this paper contributes to that ongoing discourse. The selection of an appropriate level of model structural and parameterization complexity is not a simple matter. Although the metrics on which such selection should be based are simple, many competing, and often unquantifiable, considerations must be taken into account as these metrics are applied. A unified conceptual framework is introduced and described, intended to underpin groundwater modelling in decision support with a direct focus on matters of model simplicity and complexity.

  14. A Simple Climate Model Program for High School Education

    NASA Astrophysics Data System (ADS)

    Dommenget, D.

    2012-04-01

    The future climate change projections of the IPCC AR4 are based on GCM simulations, which give a distinct global warming pattern, with an arctic winter amplification, an equilibrium land-sea contrast and an inter-hemispheric warming gradient. While these simulations are the most important tool of the IPCC predictions, a conceptual understanding of the predicted structures of climate change is very difficult to reach from these highly complex GCM simulations alone, and they are not accessible to non-specialists. In this study we introduce a very simple gridded, globally resolved energy balance model based on strongly simplified physical processes, which is capable of simulating the main characteristics of global warming. The model is intended to bridge the gap between 1-dimensional energy balance models and the fully coupled 4-dimensional complex GCMs. It runs on standard PC computers, computing globally resolved climate simulations at 2 yrs per second, or 100,000 yrs per day, and can compute typical global warming scenarios in a few minutes. The computer code is only 730 lines long, with formulations simple enough for high school students to understand. The simple model's climate sensitivity and the spatial structure of its warming pattern are within the uncertainties of the IPCC AR4 model simulations. It is capable of simulating the arctic winter amplification, the equilibrium land-sea contrast and the inter-hemispheric warming gradient in good agreement with the IPCC AR4 models in amplitude and structure. The program can be used for sensitivity studies in which students change something (e.g. reduce the solar radiation, take away the clouds or make snow black) and see how it affects the climate, or the climate response to changes in greenhouse gases. This program is available to everyone and could be the basis for high school education. Partners for a high school project are wanted!
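A zero-dimensional energy balance calculation conveys the flavor of such a model. This sketch uses standard textbook constants (solar constant, albedo, an effective emissivity standing in for the greenhouse effect), not the gridded model's actual formulation or parameters.

```python
# Zero-dimensional energy balance sketch; constants are textbook
# values, illustrative only.
S0 = 1361.0      # solar constant, W/m^2
alpha = 0.3      # planetary albedo
eps = 0.62       # effective emissivity (crude greenhouse effect)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_T(forcing=0.0):
    """Solve eps*sigma*T^4 = (S0/4)*(1 - alpha) + forcing for T (kelvin)."""
    absorbed = S0 / 4 * (1 - alpha) + forcing
    return (absorbed / (eps * sigma)) ** 0.25

T0 = equilibrium_T()                 # present-day equilibrium temperature
T2x = equilibrium_T(forcing=3.7)     # ~CO2-doubling forcing, W/m^2
print(round(T0 - 273.15, 1), round(T2x - T0, 2))
```

Adding a latitude grid, transport, and surface-albedo feedbacks to this kind of balance is what turns a one-line model into the gridded classroom model the record describes.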

  15. Synapse fits neuron: joint reduction by model inversion.

    PubMed

    van der Scheer, H T; Doelman, A

    2017-08-01

    In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.

  16. Regression-based model of skin diffuse reflectance for skin color analysis

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. The reflectance model covers the values of spectral reflectance in the visible range for Japanese women. The modified Lambert-Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1% and 3.1%, respectively, over this range.

  17. A simple-source model of military jet aircraft noise

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.

    2010-10-01

    The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to the measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since developed into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured data from the F-22 significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and lower engine conditions.

  18. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first explaining how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity.
PMID:21559067

  19. Intra-individual reaction time variability and all-cause mortality over 17 years: a community-based cohort study.

    PubMed

    Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen

    2014-01-01

    Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality, and the ability of simple measures of variability to predict mortality has not been compared with more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time thus appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating raw SD or CV.
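The two simplest variability measures in this record, the raw SD and the CV, are straightforward to compute from one participant's trials. The reaction times below are hypothetical, and the study's ISD (which adjusts for systematic trends across trials) is not reproduced here.

```python
from statistics import mean, stdev

def rt_variability(rts):
    """Raw SD and coefficient of variation across one participant's
    reaction-time trials (two of the simple measures in the study)."""
    m = mean(rts)
    sd = stdev(rts)          # sample standard deviation across trials
    return {"mean": m, "raw_sd": sd, "cv": sd / m}

# Hypothetical simple-reaction-time trials, in milliseconds.
trials = [310, 295, 420, 305, 390, 300, 510, 315, 298, 335]
stats = rt_variability(trials)
print({k: round(v, 3) for k, v in stats.items()})
```

The CV divides out overall speed, which is why it can dissociate variability from the mean reaction time, the quantity the study found uninformative for mortality.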

  20. Single Canonical Model of Reflexive Memory and Spatial Attention

    PubMed Central

    Patel, Saumil S.; Red, Stuart; Lin, Eric; Sereno, Anne B.

    2015-01-01

    Many neurons in the dorsal and ventral visual stream have the property that after a brief visual stimulus presentation in their receptive field, the spiking activity in these neurons persists above their baseline levels for several seconds. This maintained activity is not always correlated with the monkey’s task and its origin is unknown. We have previously proposed a simple neural network model, based on shape selective neurons in monkey lateral intraparietal cortex, which predicts the valence and time course of reflexive (bottom-up) spatial attention. In the same simple model, we demonstrate here that passive maintained activity or short-term memory of specific visual events can result without need for an external or top-down modulatory signal. Mutual inhibition and neuronal adaptation play distinct roles in reflexive attention and memory. This modest 4-cell model provides the first simple and unified physiologically plausible mechanism of reflexive spatial attention and passive short-term memory processes. PMID:26493949

  1. Application of experiential learning model using simple physical kit to increase attitude toward physics student senior high school in fluid

    NASA Astrophysics Data System (ADS)

    Johari, A. H.; Muslim

    2018-05-01

    An experiential learning model using a simple physics kit was implemented to examine improvements in senior high school students' attitudes toward physics in the topic of fluids. This study aims to describe the increase in attitudes toward physics among senior high school students. The research method was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved: an experimental class of 28 students and a control class of 26 students. The change in attitude toward physics was measured with an 18-item attitude scale. In the experimental class, 86.5% of students showed an improvement (criterion: almost all students), compared with 53.75% in the control class (criterion: half of students). These results indicate that the experiential learning model with a simple physics kit improves attitudes toward physics compared with experiential learning without the kit.

  2. Mining Peripheral Arterial Disease Cases from Narrative Clinical Notes Using Natural Language Processing

    PubMed Central

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.

    2016-01-01

    Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model) and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n= 935) and testing (n= 634) subsets. Results We iteratively refined the NLP algorithm in the training set including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359

  3. PC-BASED SUPERCOMPUTING FOR UNCERTAINTY AND SENSITIVITY ANALYSIS OF MODELS

    EPA Science Inventory

    Evaluating uncertainty and sensitivity of multimedia environmental models that integrate assessments of air, soil, sediments, groundwater, and surface water is a difficult task. It can be an enormous undertaking even for simple, single-medium models (i.e. groundwater only) descr...

  4. [A new model for the evaluation of measurements of the neurocranium].

    PubMed

    Seidler, H; Wilfing, H; Weber, G; Traindl-Prohazka, M; zur Nedden, D; Platzer, W

    1993-12-01

    A simple and user-friendly model for trigonometric description of the neurocranium based on newly defined points of measurement is presented. This model not only provides individual description, but also allows for an evaluation of developmental and phylogenetic aspects.

  5. A simple, mass balance model of carbon flow in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Garland, Jay L.

    1989-01-01

    Internal cycling of chemical elements is a fundamental aspect of a Controlled Ecological Life Support System (CELSS). Mathematical models are useful tools for evaluating fluxes and reservoirs of elements associated with potential CELSS configurations. A simple mass balance model of carbon flow in CELSS was developed based on data from the CELSS Breadboard project at Kennedy Space Center. All carbon reservoirs and fluxes were calculated based on steady state conditions and modelled using linear, donor-controlled transfer coefficients. The linear expression of photosynthetic flux was replaced with Michaelis-Menten kinetics based on dynamical analysis of the model which found that the latter produced more adequate model output. Sensitivity analysis of the model indicated that accurate determination of the maximum rate of gross primary production is critical to the development of an accurate model of carbon flow. Atmospheric carbon dioxide was particularly sensitive to changes in photosynthetic rate. The small reservoir of CO2 relative to large CO2 fluxes increases the potential for volatility in CO2 concentration. Feedback control mechanisms regulating CO2 concentration will probably be necessary in a CELSS to reduce this system instability.
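A two-pool sketch shows the structure of such a mass balance model, pairing a donor-controlled (linear) respiration flux with Michaelis-Menten photosynthesis. All pool sizes, rate constants, and Michaelis-Menten parameters below are illustrative, not the Breadboard project's calibrated values.

```python
# Two-pool carbon mass balance sketch: atmosphere <-> plant biomass.
# Parameters are illustrative, not the CELSS Breadboard values.

def step(pools, dt=0.1):
    atm, plant = pools["atm"], pools["plant"]
    # Michaelis-Menten gross primary production: CO2 uptake saturates
    # at p_max as atmospheric carbon increases.
    p_max, k_m = 5.0, 200.0
    gpp = p_max * atm / (k_m + atm)
    # Donor-controlled (linear) respiration from the plant pool.
    resp = 0.01 * plant
    pools["atm"]   += dt * (resp - gpp)
    pools["plant"] += dt * (gpp - resp)
    return pools

pools = {"atm": 100.0, "plant": 400.0}
total0 = sum(pools.values())
for _ in range(1000):
    step(pools)
# Mass balance: total carbon is conserved by construction.
print(round(sum(pools.values()), 6), round(total0, 6))
```

Because each flux leaves one pool and enters the other with the same magnitude, total carbon is conserved exactly, while the small atmospheric pool swings much more than the plant pool, echoing the CO2 volatility noted in the record.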

  6. Using the [beta][subscript 2]-Adrenoceptor for Structure-Based Drug Design

    ERIC Educational Resources Information Center

    Manallack, David T.; Chalmers, David K.; Yuriev, Elizabeth

    2010-01-01

    The topics of molecular modeling and drug design are studied in a medicinal chemistry course. The recently reported structures of several G protein-coupled receptors (GPCR) with bound ligands have been used to develop a simple computer-based experiment employing molecular-modeling software. Knowledge of the specific interactions between a ligand…

  7. Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models

    ERIC Educational Resources Information Center

    Dickes, Amanda Catherine; Sengupta, Pratim

    2013-01-01

    In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these…

  8. Integrated control strategy for autonomous decentralized conveyance systems based on distributed MEMS arrays

    NASA Astrophysics Data System (ADS)

    Zhou, Lingfei; Chapuis, Yves-Andre; Blonde, Jean-Philippe; Bervillier, Herve; Fukuta, Yamato; Fujita, Hiroyuki

    2004-07-01

    In this paper, the authors propose and study a model and a control strategy for a two-dimensional conveyance system based on the principles of Autonomous Decentralized Microsystems (ADM). The microconveyance system is based on distributed cooperative MEMS actuators which produce a force field on the surface of the device to grip and move a micro-object. The modeling approach proposed here is based on a simple model of a microconveyance system represented by a 5 x 5 matrix of cells. Each cell consists of a microactuator, a microsensor, and a microprocessor, providing actuation, autonomy and decentralized intelligence to the cell. Thus, each cell is able to identify a micro-object crossing it and to decide by itself the appropriate control strategy to convey the micro-object to its destination target. The control strategy can be established through five simple decision rules that each cell must respect at each computation cycle. Simulation and FPGA implementation results are given at the end of the paper to validate the model and control approach of the microconveyance system.

  9. Analysis of aircraft longitudinal handling qualities

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  10. An analytical approach for predicting pilot induced oscillations

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  11. Inattentive Drivers: Making the Solution Method the Model

    ERIC Educational Resources Information Center

    McCartney, Mark

    2003-01-01

    A simple car following model based on the solution of coupled ordinary differential equations is considered. The model is solved using Euler's method and this method of solution is itself interpreted as a mathematical model for car following. Examples of possible classroom use are given. (Contains 6 figures.)
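A minimal version of such a follow-the-leader model, integrated with Euler's method, might look like this. The coupling law (each driver accelerates toward the speed of the car ahead) and all parameters are our illustrative choices, not necessarily the article's.

```python
# Euler's-method sketch of a simple car-following model:
# dv_i/dt = lam * (v_{i-1} - v_i), car 0 following a leader at v_lead.
def simulate(n_cars=4, lam=0.5, v_lead=30.0, dt=0.1, steps=200):
    v = [0.0] * n_cars                  # followers start at rest
    for _ in range(steps):
        ahead = [v_lead] + v[:-1]       # speed of the car in front of each
        # One Euler step: v <- v + dt * lam * (v_ahead - v)
        v = [vi + dt * lam * (va - vi) for vi, va in zip(v, ahead)]
    return v

speeds = simulate()
print([round(s, 2) for s in speeds])    # every car approaches the leader's 30.0
```

As the article suggests, the Euler update itself reads naturally as a driver model: each time step, a driver adjusts speed by a fraction `dt * lam` of the gap to the car ahead.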

  12. Forecasting Pell Program Applications Using Structural Aggregate Models.

    ERIC Educational Resources Information Center

    Cavin, Edward S.

    1995-01-01

    Demand for Pell Grant financial aid has become difficult to predict when using the current microsimulation model. This paper proposes an alternative model that uses aggregate data (based on individuals' microlevel decisions and macrodata on family incomes, college costs, and opportunity wages) and avoids some limitations of simple linear models.…

  13. A dynamical systems approach to actin-based motility in Listeria monocytogenes

    NASA Astrophysics Data System (ADS)

    Hotton, S.

    2010-11-01

    A simple kinematic model for the trajectories of Listeria monocytogenes is generalized to a dynamical system rich enough to exhibit the resonant Hopf bifurcation structure of excitable media and simple enough to be studied geometrically. It is shown how L. monocytogenes trajectories and meandering spiral waves are organized by the same type of attracting set.

  14. Bioluminescence Risk Detection Aid

    DTIC Science & Technology

    2010-01-01

    Delivery Vehicle, or diver) bioluminescence, based on local environmental data, in-situ measurements, and simple radiative transfer models. This work...vehicle diving to 5.5 m. Green = REMUS vehicle diving to 6.5 m. Observations were corrected for the angle of observation. IMPACT/APPLICATIONS...will sense vehicle-stimulated bioluminescence, measure local environmental conditions and ingest the information to solve a simple radiative transfer

  15. A simple model for studying rotation errors of gimbal mount axes in laser tracking system based on spherical mirror as a reflection unit

    NASA Astrophysics Data System (ADS)

    Song, Huixu; Shi, Zhaoyao; Chen, Hongfang; Sun, Yanqiang

    2018-01-01

    This paper presents a novel experimental approach and a simple model for verifying that the spherical mirror of a laser tracking system can lessen the effect of rotation errors of the gimbal mount axes, based on relative-motion reasoning. Evidence is provided that this simple model can replace the complex optical system in a laser tracking system. The approach interchanges the kinematic relationship between the spherical mirror and the gimbal mount axes: with the gimbal mount axes held fixed, their rotation error motions are replaced by spatial micro-displacements of the spherical mirror. These motions are simulated by driving the spherical mirror along the optical axis and the vertical direction using a precision positioning platform. The effect on laser ranging measurement accuracy of the displacement caused by the rotation errors of the gimbal mount axes is recorded from the output of a laser interferometer. The experimental results show that the laser ranging measurement error caused by the rotation errors is less than 0.1 μm when the radial and axial error motions are under 10 μm. The method based on relative motion not only simplifies the experimental procedure but also demonstrates that the spherical mirror can reduce the effect of rotation errors of the gimbal mount axes in a laser tracking system.

  16. Predictive Analytics In Healthcare: Medications as a Predictor of Medical Complexity.

    PubMed

    Higdon, Roger; Stewart, Elizabeth; Roach, Jared C; Dombrowski, Caroline; Stanberry, Larissa; Clifton, Holly; Kolker, Natali; van Belle, Gerald; Del Beccaro, Mark A; Kolker, Eugene

    2013-12-01

    Children with special healthcare needs (CSHCN) require health and related services that exceed those required by most hospitalized children. A small but growing and important subset of the CSHCN group includes medically complex children (MCCs). MCCs typically have comorbidities and disproportionately consume healthcare resources. To enable strategic planning for the needs of MCCs, simple screens to identify potential MCCs rapidly in a hospital setting are needed. We assessed whether the number of medications used and the class of those medications correlated with MCC status. Retrospective analysis of medication data from the inpatients at Seattle Children's Hospital found that the numbers of inpatient and outpatient medications significantly correlated with MCC status. Numerous variables based on counts of medications, use of individual medications, and use of combinations of medications were considered, resulting in a simple model based on three different counts of medications: outpatient and inpatient drug classes and individual inpatient drug names. The combined model was used to rank the patient population for medical complexity. As a result, simple, objective admission screens for predicting the complexity of patients based on the number and type of medications were implemented.

  17. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...

  18. Simple Harmonics Motion experiment based on LabVIEW interface for Arduino

    NASA Astrophysics Data System (ADS)

    Tong-on, Anusorn; Saphet, Parinya; Thepnurat, Meechai

    2017-09-01

    In this work, we developed an affordable, modern, innovative physics lab apparatus. An ultrasonic sensor is used to measure the position of a mass attached to a spring as a function of time. The data acquisition and control system was developed based on the LabVIEW interface for Arduino UNO R3. The experiment was designed to explain wave propagation, which is modeled by simple harmonic motion. The simple harmonic system (mass and spring) was observed, and the motion was analyzed by curve fitting to the wave equation in Mathematica. We found that the spring constants provided by Hooke's law and by the wave-equation fit are 9.9402 and 9.1706 N/m, respectively.
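As a cross-check on such a fit, the spring constant can also be recovered from the oscillation period alone via k = m(2π/T)². The sketch below uses synthetic position data with an illustrative mass and stiffness, not the paper's measurements.

```python
import math

# Estimate the spring constant from sampled position data by measuring
# the period between zero crossings, then k = m * (2*pi/T)^2.
m = 0.1        # mass in kg (illustrative)
k_true = 9.9   # N/m, close to the record's Hooke's-law value
omega = math.sqrt(k_true / m)
dt = 0.001
# Synthetic ultrasonic-sensor trace: x(t) = A * cos(omega * t).
x = [0.05 * math.cos(omega * i * dt) for i in range(3000)]

# Times where the signal crosses zero going downward: one per period.
crossings = [i * dt for i in range(1, len(x)) if x[i-1] >= 0 > x[i]]
T = crossings[1] - crossings[0]          # one full period
k_est = m * (2 * math.pi / T) ** 2
print(round(k_est, 2))                   # close to k_true
```

A zero-crossing estimate is cruder than a full curve fit (it uses only two sample times), which is one reason a least-squares fit to the whole trace, as in the record, is preferable in practice.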

  19. Modeling Hidden Circuits: An Authentic Research Experience in One Lab Period

    NASA Astrophysics Data System (ADS)

    Moore, J. Christopher; Rubbo, Louis J.

    2016-10-01

    Two wires exit a black box that has three exposed light bulbs connected together in an unknown configuration. The task for students is to determine the circuit configuration without opening the box. In the activity described in this paper, we navigate students through the process of making models, developing and conducting experiments that can support or falsify models, and confronting ways of distinguishing between two different models that make similar predictions. We also describe a twist that forces students to confront new phenomena, requiring revision of their mental model of electric circuits. This activity is designed to mirror the practice of science by actual scientists and expose students to the "messy" side of science, where our simple explanations of reality often require expansion and/or revision based on new evidence. The purpose of this paper is to present a simple classroom activity within the context of electric circuits that supports students as they learn to test hypotheses and refine and revise models based on evidence.

  20. Developing model asphalt systems using molecular simulation : final model.

    DOT National Transportation Integrated Search

    2009-09-01

    Computer-based molecular simulations have been used to develop simple mixture compositions whose physical properties resemble those of real asphalts. First, Monte Carlo simulations with the OPLS all-atom force field were used to predict t...

  1. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. To overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using a Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules that split the image into three regions (shadow + vegetation, bare soil + roads, and buildings, respectively); (3) masking out the final building-detection results from the PFICA outputs using the K-means clustering algorithm with two clusters, followed by simple morphological operations to remove noise. Evaluation of the results shows that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.

  2. The Thermodynamic Limit in Mean Field Spin Glass Models

    NASA Astrophysics Data System (ADS)

    Guerra, Francesco; Toninelli, Fabio Lucio

    We present a simple strategy in order to show the existence and uniqueness of the infinite volume limit of thermodynamic quantities, for a large class of mean field disordered models, as for example the Sherrington-Kirkpatrick model, and the Derrida p-spin model. The main argument is based on a smooth interpolation between a large system, made of N spin sites, and two similar but independent subsystems, made of N1 and N2 sites, respectively, with N1+N2=N. The quenched average of the free energy turns out to be subadditive with respect to the size of the system. This gives immediately convergence of the free energy per site, in the infinite volume limit. Moreover, a simple argument, based on concentration of measure, gives the almost sure convergence, with respect to the external noise. Similar results hold also for the ground state energy per site.
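
    The subadditivity argument can be summarized in two standard lines (a compact restatement, not quoted from the paper):

```latex
% Quenched free energy F_N; the smooth interpolation between the N-site system
% and the two independent subsystems of sizes N_1 and N_2 gives subadditivity:
F_N \le F_{N_1} + F_{N_2}, \qquad N = N_1 + N_2 .
% Fekete's lemma for subadditive sequences then yields convergence per site:
\lim_{N \to \infty} \frac{F_N}{N} = \inf_{N} \frac{F_N}{N} .
```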

  3. Simple model to estimate the contribution of atmospheric CO2 to the Earth's greenhouse effect

    NASA Astrophysics Data System (ADS)

    Wilson, Derrek J.; Gea-Banacloche, Julio

    2012-04-01

    We show how the CO2 contribution to the Earth's greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the "climate sensitivity" (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere's temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.

  4. Fire and Heat Spreading Model Based on Cellular Automata Theory

    NASA Astrophysics Data System (ADS)

    Samartsev, A. A.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.; Fominykh, D. S.

    2018-05-01

    The distinctive feature of the proposed fire and heat spreading model in premises is the reduction of the computational complexity due to the use of the theory of cellular automata with probability rules of behavior. The possibilities and prospects of using this model in practice are noted. The proposed model has a simple mechanism of integration with agent-based evacuation models. The joint use of these models could improve floor plans and reduce the time of evacuation from premises during fires.
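
    A probabilistic cellular automaton of the kind described can be sketched in a few lines; the three states and the spread probability below are illustrative, not the paper's rule set:

```python
import random

# Minimal probabilistic CA for fire spreading: 0 = intact, 1 = burning, 2 = burnt out.
def step(grid, p_ignite=0.6, rng=random):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                new[r][c] = 2  # a burning cell burns out
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                        if rng.random() < p_ignite:
                            new[nr][nc] = 1  # fire spreads with probability p_ignite
    return new

random.seed(1)
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1  # ignition point
for _ in range(6):
    grid = step(grid)
```

    Because each cell update touches only its four neighbours, the rule keeps the computational cost linear in the number of cells, which is the complexity reduction the abstract refers to.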

  5. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344

  6. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterize these traits and thereby understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. Through this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506

  7. Use of a land-use-based emissions inventory in delineating clean-air zones

    Treesearch

    Victor S. Fahrer; Howard A. Peters

    1977-01-01

    Use of a land-use-based emissions inventory from which air-pollution estimates can be projected was studied. First the methodology used to establish a land-use-based emission inventory is described. Then this inventory is used as input in a simple model that delineates clean air and buffer zones. The model is applied to the town of Burlington, Massachusetts....

  8. STEMS-Air: a simple GIS-based air pollution dispersion model for city-wide exposure assessment.

    PubMed

    Gulliver, John; Briggs, David

    2011-05-15

    Current methods of air pollution modelling do not readily meet the needs of air pollution mapping for short-term (i.e. daily) exposure studies. The main limiting factor is that, for the few models that couple with a GIS, there are insufficient tools for directly mapping air pollution both at high spatial resolution and over large areas (e.g. city-wide). A simple GIS-based air pollution model (STEMS-Air) has been developed for PM10 to meet these needs, with the option to choose different exposure averaging periods (e.g. daily and annual). STEMS-Air uses the grid-based FOCALSUM function in ArcGIS in conjunction with a fine grid of emission sources and basic information on meteorology to implement a simple Gaussian plume model of air pollution dispersion. STEMS-Air was developed and validated in London, UK, using data on concentrations of PM10 from routinely available monitoring data. Results from the validation study show that STEMS-Air performs well in predicting both daily (at four sites) and annual (at 30 sites) concentrations of PM10. For daily modelling, STEMS-Air achieved r² values in the range 0.19-0.43 (p < 0.001) based solely on traffic-related emissions and r² values in the range 0.41-0.63 (p < 0.001) when adding information on 'background' levels of PM10. For annual modelling of PM10, the model returned r² in the range 0.67-0.77 (p < 0.001) when compared with monitored concentrations. The model can thus be used for rapid production of daily or annual city-wide air pollution maps, either as a screening process in urban air quality planning and management, or as the basis for health risk assessment and epidemiological studies. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
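
    The underlying Gaussian plume formula, given here in its textbook form with ground reflection (STEMS-Air's exact parameterisation and dispersion coefficients are not reproduced), can be sketched as:

```python
import math

# Textbook Gaussian plume: concentration downwind of a point source at height H,
# with crosswind offset y, receptor height z, and dispersion parameters sigma_y/z.
def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration for emission rate Q (g/s) and wind speed u (m/s)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground-reflection term
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers: ground-level receptor on the plume axis vs. 50 m crosswind.
c0 = plume_concentration(Q=10.0, u=3.0, y=0.0, z=0.0, H=2.0, sigma_y=20.0, sigma_z=10.0)
c1 = plume_concentration(Q=10.0, u=3.0, y=50.0, z=0.0, H=2.0, sigma_y=20.0, sigma_z=10.0)
```

    Evaluating this kernel over a fine grid of sources and summing the contributions is, in essence, what a grid-based focal-sum operation does.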

  9. Brief introductory guide to agent-based modeling and an illustration from urban health research.

    PubMed

    Auchincloss, Amy H; Garcia, Leandro Martin Totaro

    2015-11-01

    There is growing interest among urban health researchers in addressing complex problems using conceptual and computational models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received considerable interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, which hinders its understanding and application. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation.

  10. Brief introductory guide to agent-based modeling and an illustration from urban health research

    PubMed Central

    Auchincloss, Amy H.; Garcia, Leandro Martin Totaro

    2017-01-01

    There is growing interest among urban health researchers in addressing complex problems using conceptual and computational models from the field of complex systems. Agent-based modeling (ABM) is one computational modeling tool that has received considerable interest. However, many researchers remain unfamiliar with developing and carrying out an ABM, which hinders its understanding and application. This paper first presents a brief introductory guide to carrying out a simple agent-based model. Then, the method is illustrated by discussing a previously developed agent-based model, which explored inequalities in diet in the context of urban residential segregation. PMID:26648364

  11. Logic-Based Models for the Analysis of Cell Signaling Networks†

    PubMed Central

    2010-01-01

    Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
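
    The spirit of logic-based modeling (nodes are ON/OFF and are updated by logic rules rather than by detailed biochemical kinetics) can be shown with a hypothetical three-node cascade; the node names and rules below are invented for illustration:

```python
# Tiny Boolean signaling model: synchronous update of each node by a logic rule.
RULES = {
    "ligand":      lambda s: s["ligand"],                        # external input, held fixed
    "receptor":    lambda s: s["ligand"],                        # active iff ligand present
    "kinase":      lambda s: s["receptor"] and not s["phosphatase"],
    "phosphatase": lambda s: False,                              # kept off in this scenario
}

def update(state):
    """One synchronous step: every node recomputed from the previous state."""
    return {node: bool(rule(state)) for node, rule in RULES.items()}

state = {"ligand": True, "receptor": False, "kinase": False, "phosphatase": False}
for _ in range(3):  # iterate until the cascade settles to a fixed point
    state = update(state)
```

    Training such a model then amounts to choosing the logic rules (and possibly node timing) that best reproduce observed input-output behavior.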

  12. Action Centered Contextual Bandits.

    PubMed

    Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan

    2017-12-01

    Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.

  13. Maintenance of algal endosymbionts in Paramecium bursaria: a simple model based on population dynamics.

    PubMed

    Iwai, Sosuke; Fujiwara, Kenji; Tamura, Takuro

    2016-09-01

    Algal endosymbiosis is widely distributed in eukaryotes including many protists and metazoans, and plays important roles in aquatic ecosystems, combining phagotrophy and phototrophy. To maintain a stable symbiotic relationship, endosymbiont population size in the host must be properly regulated and maintained at a constant level; however, the mechanisms underlying the maintenance of algal endosymbionts are still largely unknown. Here we investigate the population dynamics of the unicellular ciliate Paramecium bursaria and its Chlorella-like algal endosymbiont under various experimental conditions in a simple culture system. Our results suggest that endosymbiont population size in P. bursaria was not regulated by active processes such as cell division coupling between the two organisms, or partitioning of the endosymbionts at host cell division. Regardless, endosymbiont population size was eventually adjusted to a nearly constant level once cells were grown with light and nutrients. To explain this apparent regulation of population size, we propose a simple mechanism based on the different growth properties (specifically the nutrient requirements) of the two organisms, and from this develop a mathematical model to describe the population dynamics of host and endosymbiont. The proposed mechanism and model may provide a basis for understanding the maintenance of algal endosymbionts. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.

  14. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to a change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. Especially, several color spaces such as RGB, xyY, L*a*b*, u′v′, HSV, and YCbCr have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% for a 3-fold cross validation method. Additionally, the resultant water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
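
    The estimator is a second-order polynomial in a colour feature. As a minimal illustration, the sketch below fits w = a*h**2 + b*h + c exactly through three made-up calibration points (the paper instead performs a least-squares fit over many samples and several colour spaces):

```python
# Exact quadratic through three (hue, water%) calibration points via Cramer's rule.
def fit_quadratic(points):
    """Solve the 3x3 Vandermonde system for (a, b, c)."""
    (x1, y1), (x2, y2), (x3, y3) = points
    ys = (y1, y2, y3)

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[x1**2, x1, 1], [x2**2, x2, 1], [x3**2, x3, 1]]
    d = det(A)
    coeffs = []
    for i in range(3):          # replace column i with the y-vector
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = ys[r]
        coeffs.append(det(Ai) / d)
    return tuple(coeffs)        # (a, b, c)

# Made-up calibration data lying on w = 0.01*h**2 + 0.2*h + 1.
a, b, c = fit_quadratic([(10, 4.0), (20, 9.0), (30, 16.0)])
water_at_25 = a * 25**2 + b * 25 + c   # predicted water content at hue 25
```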

  15. A simple model for the critical mass of a nuclear weapon

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-07-01

    A probability-based model for estimating the critical mass of a fissile isotope is developed. The model requires introducing some concepts from nuclear physics and incorporating some approximations, but gives results correct to about a factor of two for uranium-235 and plutonium-239.

  16. The Dairy Greenhouse Gas Emission Model: Reference Manual

    USDA-ARS?s Scientific Manuscript database

    The Dairy Greenhouse Gas Model (DairyGHG) is a software tool for estimating the greenhouse gas emissions and carbon footprint of dairy production systems. A relatively simple process-based model is used to predict the primary greenhouse gas emissions, which include the net emission of carbon dioxide...

  17. Exploring simple, transparent, interpretable and predictive QSAR models for classification and quantitative prediction of rat toxicity of ionic liquids using OECD recommended guidelines.

    PubMed

    Das, Rudra Narayan; Roy, Kunal; Popelier, Paul L A

    2015-11-01

    The present study explores the chemical attributes of diverse ionic liquids responsible for their cytotoxicity in a rat leukemia cell line (IPC-81) by developing predictive classification as well as regression-based mathematical models. Simple and interpretable descriptors derived from a two-dimensional representation of the chemical structures along with quantum topological molecular similarity indices have been used for model development, employing unambiguous modeling strategies that strictly obey the guidelines of the Organization for Economic Co-operation and Development (OECD) for quantitative structure-activity relationship (QSAR) analysis. The structure-toxicity relationships that emerged from both classification and regression-based models were in accordance with the findings of some previous studies. The models suggested that the cytotoxicity of ionic liquids is dependent on the cationic surfactant action, long alkyl side chains, cationic lipophilicity as well as aromaticity, the presence of a dialkylamino substituent at the 4-position of the pyridinium nucleus and a bulky anionic moiety. The models have been transparently presented in the form of equations, thus allowing their easy transferability in accordance with the OECD guidelines. The models have also been subjected to rigorous validation tests proving their predictive potential and can hence be used for designing novel and "greener" ionic liquids. The major strength of the present study lies in the use of a diverse and large dataset, use of simple reproducible descriptors and compliance with the OECD norms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Modelling the Active Hearing Process in Mosquitoes

    NASA Astrophysics Data System (ADS)

    Avitabile, Daniele; Homer, Martin; Jackson, Joe; Robert, Daniel; Champneys, Alan

    2011-11-01

    A simple microscopic mechanistic model of the active amplification within the Johnston's organ of the mosquito species Toxorhynchites brevipalpis is described. The model is based on the description of the antenna as a forced-damped oscillator coupled to a set of active threads (ensembles of scolopidia) that provide an impulsive force when they twitch. This twitching is in turn controlled by channels that are opened and closed if the antennal oscillation reaches a critical amplitude. The model matches both qualitatively and quantitatively with recent experiments. New results are presented using mathematical homogenization techniques to derive a mesoscopic model as a simple oscillator with nonlinear force and damping characteristics. It is shown that the results from this new model closely resemble those from the microscopic model as the number of threads approaches physiologically correct values.
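
    The passive backbone of the model, a forced-damped oscillator x'' + 2*gamma*x' + omega0**2 * x = F*cos(omega*t), can be integrated with a simple explicit Euler scheme (all parameters here are illustrative; the paper's model additionally includes the impulsive forces from the active threads):

```python
import math

def simulate(gamma=0.5, omega0=2.0, F=1.0, omega=2.0, dt=1e-3, steps=20000):
    """Explicit-Euler integration of a sinusoidally forced, damped oscillator."""
    x, v, t = 0.0, 0.0, 0.0
    for _ in range(steps):
        a = F * math.cos(omega * t) - 2 * gamma * v - omega0**2 * x
        x += v * dt
        v += a * dt
        t += dt
    return x

x_final = simulate()  # after the transient decays, |x| stays near F/(2*gamma*omega)
```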

  19. Mining peripheral arterial disease cases from narrative clinical notes using natural language processing.

    PubMed

    Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J; Arruda-Olson, Adelaide M

    2017-06-01

    Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm with billing code algorithms, using ankle-brachial index test results as the gold standard. We compared the performance of the NLP algorithm to (1) results of gold standard ankle-brachial index; (2) previously validated algorithms based on relevant International Classification of Diseases, Ninth Revision diagnostic codes (simple model); and (3) a combination of International Classification of Diseases, Ninth Revision codes with procedural codes (full model). A dataset of 1569 patients with PAD and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. We iteratively refined the NLP algorithm in the training set including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP, 91.8%; full model, 81.8%; simple model, 83%; P < .001), positive predictive value (NLP, 92.9%; full model, 74.3%; simple model, 79.9%; P < .001), and specificity (NLP, 92.5%; full model, 64.2%; simple model, 75.9%; P < .001). A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Modelling of nanoscale quantum tunnelling structures using algebraic topology method

    NASA Astrophysics Data System (ADS)

    Sankaran, Krishnaswamy; Sairam, B.

    2018-05-01

    We have modelled nanoscale quantum tunnelling structures using Algebraic Topology Method (ATM). The accuracy of ATM is compared to the analytical solution derived based on the wave nature of tunnelling electrons. ATM provides a versatile, fast, and simple model to simulate complex structures. We are currently expanding the method for modelling electrodynamic systems.

  1. Forgetting in Immediate Serial Recall: Decay, Temporal Distinctiveness, or Interference?

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-01-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively.…

  2. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprising the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is also capable of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of structural complexity, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  3. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to control integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations as a function of the characteristic parameters of the model. For implementation of our proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with the Matausek-Micić scheme for linear systems using simulations.

  4. Physical models of collective cell motility: from cell to tissue

    NASA Astrophysics Data System (ADS)

    Camley, B. A.; Rappel, W.-J.

    2017-03-01

    In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.

  5. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
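
    The replica trick whose validity is examined here rests on the standard identity (stated in its usual textbook form):

```latex
% Quenched average of the log-partition function via integer moments of Z,
% followed by the analytic continuation n -> 0:
\langle \ln Z \rangle
  \;=\; \lim_{n \to 0} \frac{\langle Z^{n} \rangle - 1}{n}
  \;=\; \lim_{n \to 0} \frac{1}{n} \ln \langle Z^{n} \rangle .
```

    The delicate step is that ⟨Z^n⟩ is computed for integer n and then continued to real n near zero; that continuation is the object of the paper's analysis.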

  6. Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2010-01-01

    This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…

  7. Assessment of cardiovascular risk based on a data-driven knowledge discovery approach.

    PubMed

    Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Cabiddu, R; Morais, J

    2015-01-01

    The cardioRisk project addresses the development of personalized risk assessment tools for patients who have been admitted to the hospital with acute myocardial infarction. Although there are models available that assess the short-term risk of death/new events for such patients, these models were established in circumstances that do not take into account the present clinical interventions and, in some cases, the risk factors used by such models are not easily available in clinical practice. The integration of the existing risk tools (applied in the clinician's daily practice) with data-driven knowledge discovery mechanisms based on data routinely collected during hospitalizations, will be a breakthrough in overcoming some of these difficulties. In this context, the development of simple and interpretable models (based on recent datasets), unquestionably will facilitate and will introduce confidence in this integration process. In this work, a simple and interpretable model based on a real dataset is proposed. It consists of a decision tree model structure that uses a reduced set of six binary risk factors. The validation is performed using a recent dataset provided by the Portuguese Society of Cardiology (11113 patients), which originally comprised 77 risk factors. A sensitivity, specificity and accuracy of, respectively, 80.42%, 77.25% and 78.80% were achieved showing the effectiveness of the approach.
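
    The reported figures are standard confusion-matrix ratios. A minimal computation, with made-up counts rather than the study's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 80 true positives, 5 false positives,
# 95 true negatives, 20 false negatives.
sens, spec, acc = classification_metrics(tp=80, fp=5, tn=95, fn=20)
```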

  8. Bak-Sneppen model: Local equilibrium and critical value.

    PubMed

    Fraiman, Daniel

    2018-04-01

    The Bak-Sneppen (BS) model is a very simple model that exhibits all the richness of self-organized criticality theory. At the thermodynamic limit, the BS model converges to a situation where all particles have a fitness that is uniformly distributed between a critical value p_{c} and 1. The p_{c} value is unknown, as are the variables that influence and determine this value. Here we study the BS model in the case in which the lowest fitness particle interacts with an arbitrary even number of m nearest neighbors. We show that p_{c} verifies a simple local equilibrium relation. Based on this relation, we can determine bounds for p_{c} of the BS model and exact results for some BS-like models. Finally, we show how transformations of the original BS model can be done without altering the model's complex dynamics.
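As a rough illustration of the dynamics described above, the following sketch simulates the BS model on a ring with m = 2 interacting neighbours. Parameters are illustrative; the simulation merely visualises the build-up of fitness values above a threshold and does not compute p_c.

```python
import random


def bak_sneppen(n_sites=64, m_neighbors=2, steps=50_000, seed=1):
    """Minimal Bak-Sneppen sketch on a ring: at each step the least-fit
    site and its m nearest neighbours (m/2 on each side) receive fresh
    uniform fitness values."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_sites)]
    half = m_neighbors // 2
    for _ in range(steps):
        i = min(range(n_sites), key=fitness.__getitem__)
        for d in range(-half, half + 1):
            fitness[(i + d) % n_sites] = rng.random()
    return fitness
```

At stationarity most fitness values sit above a threshold near p_c (roughly 0.66 for m = 2 in the literature), with only the recently refreshed sites below it.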

  9. Bak-Sneppen model: Local equilibrium and critical value

    NASA Astrophysics Data System (ADS)

    Fraiman, Daniel

    2018-04-01

    The Bak-Sneppen (BS) model is a very simple model that exhibits all the richness of self-organized criticality theory. At the thermodynamic limit, the BS model converges to a situation where all particles have a fitness that is uniformly distributed between a critical value pc and 1. The pc value is unknown, as are the variables that influence and determine this value. Here we study the BS model in the case in which the lowest fitness particle interacts with an arbitrary even number of m nearest neighbors. We show that pc verifies a simple local equilibrium relation. Based on this relation, we can determine bounds for pc of the BS model and exact results for some BS-like models. Finally, we show how transformations of the original BS model can be done without altering the model's complex dynamics.

  10. Luminance-model-based DCT quantization for color image compression

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Peterson, Heidi A.

    1992-01-01

    A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
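The design logic can be sketched as follows: model a detection threshold for each DCT basis function as a parabola in log spatial frequency (flat below the peak-sensitivity frequency), then set each quantization step to twice the threshold so the maximum quantization error stays just below visibility. All constants below are invented; the paper fits them to display luminance and viewing geometry.

```python
import math


def luminance_quant_matrix(n=8, t_min=1.0, f_min=3.0, k=1.5, f_scale=2.0):
    """Illustrative luminance-model-based quantization matrix.
    f_scale maps DCT indices to a toy spatial frequency in cycles/deg;
    thresholds follow a parabola in log frequency above f_min."""
    q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            f = f_scale * math.hypot(i, j)
            if f <= f_min:
                t = t_min
            else:
                t = t_min * math.exp(k * math.log(f / f_min) ** 2)
            q[i][j] = 2.0 * t  # max quantization error = q/2 = threshold
    return q
```

Changing the display parameters simply re-evaluates the threshold model, which is what lets the matrices be redesigned for other luminances and pixel spacings.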

  11. Forecasting plant phenology: evaluating the phenological models for Betula pendula and Padus racemosa spring phases, Latvia.

    PubMed

    Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta

    2015-02-01

A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch Betula pendula and bird cherry Padus racemosa, in Latvia. Model stability is estimated by performing multiple model fitting runs, using half of the data for model training and the other half for evaluation. The correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal relationship between development rate and temperature and taking into account the necessity of dormancy release) and DDcos (a simple degree-day model considering diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found across several model fitting runs of the simple degree-day based models. Large variation of the model parameters between fitting runs of the more complex models indicates similar collinearity and over-parameterization. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature found by the DDcos model for B. pendula is 5.6 °C for leaf unfolding and 6.7 °C for the start of flowering; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
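The degree-day family of models referred to above reduces to a very small computation: accumulate heat above a base temperature until a required heat sum is reached. The sketch below uses daily means only (DDcos additionally resolves diurnal fluctuations with a cosine curve); the heat-sum value in the usage is invented.

```python
def degree_day_onset(daily_mean_temp, t_base, heat_sum_required):
    """Simple degree-day phenology sketch: accumulate max(0, T - t_base)
    day by day and return the first day (1-indexed) on which the
    accumulated heat sum reaches the requirement, or None if it never does."""
    total = 0.0
    for day, t in enumerate(daily_mean_temp, start=1):
        total += max(0.0, t - t_base)
        if total >= heat_sum_required:
            return day
    return None
```

The collinearity noted in the abstract is visible here: raising t_base while lowering heat_sum_required can leave the predicted onset day nearly unchanged, which is why the two parameters are hard to identify separately.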

  12. Low Reynolds number two-equation modeling of turbulent flows

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Shih, T.-H.

    1991-01-01

A k-epsilon model that accounts for viscous and wall effects is presented. The proposed formulation does not contain the local wall distance, which greatly simplifies its application to complex geometries. The formulation is based on an existing k-epsilon model that agrees closely with direct numerical simulation results. The new form is compared with nine different two-equation models and with direct numerical simulation for a fully developed channel flow at Re = 3300. The simple flow configuration allows a comparison free from numerical inaccuracies. The computed results show that few of the considered forms exhibit satisfactory agreement with the channel flow data. The model improves on the existing formulations.

  13. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.

  14. Modelling of capillary-driven flow for closed paper-based microfluidic channels

    NASA Astrophysics Data System (ADS)

    Songok, Joel; Toivakka, Martti

    2017-06-01

    Paper-based microfluidics is an emerging field focused on creating inexpensive devices, with simple fabrication methods for applications in various fields including healthcare, environmental monitoring and veterinary medicine. Understanding the flow of liquid is important in achieving consistent operation of the devices. This paper proposes capillary models to predict flow in paper-based microfluidic channels, which include a flow accelerating hydrophobic top cover. The models, which consider both non-absorbing and absorbing substrates, are in good agreement with the experimental results.

  15. Dynamic calibration of agent-based models using data assimilation.

    PubMed

    Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S

    2016-04-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.
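The EnKF analysis step the authors introduce to ABM practitioners is compact enough to sketch in scalar form: each ensemble member is nudged toward a perturbed copy of the observation, weighted by a Kalman gain built from the ensemble's own spread. In the city-population setting of the abstract, the members would be candidate counts from an ABM forecast and the observation a footfall-derived estimate; all numbers in the usage below are invented.

```python
import random


def enkf_update(ensemble, observation, obs_var, rng):
    """One analysis step of a scalar stochastic ensemble Kalman filter:
    x_i <- x_i + K * (y + eps_i - x_i), with K = var / (var + obs_var)
    and eps_i ~ N(0, obs_var) the observation perturbation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)
    return [x + gain * (observation + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]
```

Repeated assimilation pulls the ensemble mean toward the observed value while shrinking its spread, which is the dynamic calibration behaviour the paper demonstrates on footfall data.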

  16. Exploring the limits of case-to-capsule ratio, pulse length, and picket energy for symmetric hohlraum drive on the National Ignition Facility Laser

    NASA Astrophysics Data System (ADS)

    Callahan, D. A.; Hurricane, O. A.; Ralph, J. E.; Thomas, C. A.; Baker, K. L.; Benedetti, L. R.; Berzak Hopkins, L. F.; Casey, D. T.; Chapman, T.; Czajka, C. E.; Dewald, E. L.; Divol, L.; Döppner, T.; Hinkel, D. E.; Hohenberger, M.; Jarrott, L. C.; Khan, S. F.; Kritcher, A. L.; Landen, O. L.; LePape, S.; MacLaren, S. A.; Masse, L. P.; Meezan, N. B.; Pak, A. E.; Salmonson, J. D.; Woods, D. T.; Izumi, N.; Ma, T.; Mariscal, D. A.; Nagel, S. R.; Kline, J. L.; Kyrala, G. A.; Loomis, E. N.; Yi, S. A.; Zylstra, A. B.; Batha, S. H.

    2018-05-01

    We present a data-based model for low mode asymmetry in low gas-fill hohlraum experiments on the National Ignition Facility {NIF [Moses et al., Fusion Sci. Technol. 69, 1 (2016)]} laser. This model is based on the hypothesis that the asymmetry in these low fill hohlraums is dominated by the hydrodynamics of the expanding, low density, high-Z (gold or uranium) "bubble," which occurs where the intense outer cone laser beams hit the high-Z hohlraum wall. We developed a simple model which states that the implosion symmetry becomes more oblate as the high-Z bubble size becomes large compared to the hohlraum radius or the capsule size becomes large compared to the hohlraum radius. This simple model captures the trends that we see in data that span much of the parameter space of interest for NIF ignition experiments. We are now using this model as a constraint on new designs for experiments on the NIF.

  17. Application of simple negative feedback model for avalanche photodetectors investigation

    NASA Astrophysics Data System (ADS)

    Kushpil, V. V.

    2009-10-01

A simple negative feedback model based on Miller's formula is used to investigate the properties of Avalanche Photodetectors (APDs). The proposed method can be applied to study classical APDs as well as new types of devices operating in the Internal Negative Feedback (INF) regime. The method shows good sensitivity to technological APD parameters, making it possible to use it as a tool to analyse various APD parameters. It also allows a better understanding of APD operation conditions. Simulations and experimental data analysis for different types of APDs are presented.

  18. CMSAF products Cloud Fraction Coverage and Cloud Type used for solar global irradiance estimation

    NASA Astrophysics Data System (ADS)

    Badescu, Viorel; Dumitrescu, Alexandru

    2016-08-01

Two products provided by the climate monitoring satellite application facility (CMSAF) are the instantaneous Cloud Fractional Coverage (iCFC) and the instantaneous Cloud Type (iCTY) products. Previous studies based on the iCFC product show that simple solar radiation models belonging to the cloudiness index class n CFC = 0.1-1.0 have rRMSE values ranging between 68 and 71 %. The iCFC and iCTY products are used here to develop simple models providing hourly estimates of solar global irradiance. Measurements performed at five weather stations in Romania (South-Eastern Europe) are used. Two three-class characterizations of the state of the sky, based on the iCTY product, are defined. For the first new sky-state classification, which is roughly related to cloud altitude, the solar radiation models proposed here perform worst for the iCTY class 4-15, with rRMSE values ranging between 46 and 57 %. The spreading error of the simple models is lower than that of the MAGIC model for the iCTY classes 1-4 and 15-19, but larger for iCTY classes 4-15. For the second new sky-state classification, which weights the chance of the sun being covered by different types of clouds, the solar radiation models proposed here perform worst for the cloudiness index class n CTY = 0.7-0.1, with rRMSE values ranging between 51 and 66 %. Therefore, the two new sky-state classifications based on the iCTY product are useful in increasing the accuracy of solar radiation models.

  19. Modeling the Effects of Beam Size and Flaw Morphology on Ultrasonic Pulse/Echo Sizing of Delaminations in Carbon Composites

    NASA Technical Reports Server (NTRS)

    Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan

    2012-01-01

The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example, the -6dB contours of measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined where one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6dB rule for delamination sizing.

  20. Matter Gravitates, but Does Gravity Matter?

    ERIC Educational Resources Information Center

    Groetsch, C. W.

    2011-01-01

    The interplay of physical intuition, computational evidence, and mathematical rigor in a simple trajectory model is explored. A thought experiment based on the model is used to elicit student conjectures on the influence of a physical parameter; a mathematical model suggests a computational investigation of the conjectures, and rigorous analysis…

  1. Towards a social psychology-based microscopic model of driver behavior and decision-making : modifying Lewin's field theory

    DOT National Transportation Integrated Search

    2014-01-01

    Central to effective roadway design is the ability to understand how drivers behave as they traverse a segment of : roadway. While simple and complex microscopic models have been used over the years to analyse driver behaviour, : most models: 1.) inc...

  2. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…

  3. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

In optical flow for motion allocation, the reliability of the resulting Motion Vectors (MVs) is an important issue, and noisy conditions can make the output of optical flow algorithms unreliable. We find that many classical optical flow algorithms perform better under noisy conditions when combined with a modern optimization model. This paper introduces robust optical flow models that apply an adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments on the proposed models confirm better noise tolerance in the optical flow MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of the models on several typical sequences with different foreground and background movement speeds, where the test sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The results, measured by peak signal-to-noise ratio (PSNR), show the high noise tolerance of the proposed models.
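The Lorentzian norm mentioned above is a standard robust estimator; a minimal sketch of the norm and its influence function is given below (the scale sigma is the parameter adapted in approaches like this one).

```python
import math


def lorentzian_rho(x, sigma):
    """Lorentzian robust error norm: grows only logarithmically, so a
    single large residual (e.g. a noise-corrupted pixel) cannot dominate
    the data term."""
    return math.log(1.0 + 0.5 * (x / sigma) ** 2)


def lorentzian_psi(x, sigma):
    """Influence function (d rho / dx): roughly linear for small
    residuals, then decaying, so outliers are progressively ignored."""
    return 2.0 * x / (2.0 * sigma ** 2 + x ** 2)
```

Compared with a quadratic norm, whose influence grows without bound, the Lorentzian influence peaks at x = sigma * sqrt(2) and then falls off, which is what gives the robustness under AWGN contamination.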

  4. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Treesearch

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...

  5. Simple radiative transfer model for relationships between canopy biomass and reflectance

    NASA Technical Reports Server (NTRS)

    Park, J. K.; Deering, D. W.

    1982-01-01

A modified Kubelka-Munk model has been utilized to derive useful equations for the analysis of apparent canopy reflectance. Based on the solution to the model, simple working equations were formulated by employing reflectance characteristic parameters. The relationships derived show the asymptotic nature of reflectance data that is typically observed in remote sensing studies of plant biomass. They also establish the range of expected apparent canopy reflectance values for specific plant canopy types. The usefulness of the simplified equations was demonstrated by the exceptionally close fit of the theoretical curves to two separately acquired data sets for alfalfa and shortgrass prairie canopies.
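The asymptotic behaviour described above is commonly summarised by an exponential saturation curve. The sketch below uses that generic form with invented coefficients; it is a stand-in for, not a reproduction of, the working equations derived from the Kubelka-Munk solution.

```python
import math


def canopy_reflectance(biomass, r_soil=0.10, r_inf=0.45, k=0.8):
    """Apparent canopy reflectance versus biomass (e.g. leaf area index):
    starts at the bare-soil reflectance and saturates asymptotically at
    the dense-canopy value r_inf.  All coefficients are illustrative."""
    return r_inf - (r_inf - r_soil) * math.exp(-k * biomass)
```

The saturation explains why reflectance-based biomass estimation loses sensitivity for dense canopies: beyond a few units of leaf area, added biomass changes the signal very little.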

  6. Influence of collision on the flow through in-vitro rigid models of the vocal folds

    NASA Astrophysics Data System (ADS)

    Deverge, M.; Pelorson, X.; Vilain, C.; Lagrée, P.-Y.; Chentouf, F.; Willems, J.; Hirschberg, A.

    2003-12-01

Measurements of pressure in oscillating rigid replicas of vocal folds are presented. The pressure upstream of the replica is used as input to various theoretical approximations to predict the pressure within the glottis. As the vocal folds collide, the classical quasisteady boundary layer theory fails. It appears, however, that for physiologically reasonable shapes of the replicas, viscous effects are more important than the influence of the flow unsteadiness due to the wall movement. A simple model based on a quasisteady Bernoulli equation corrected for viscous effects, combined with a simple boundary layer separation model, does globally predict the observed pressure behavior.
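The combination of quasisteady Bernoulli flow with a viscous correction can be sketched as below. The Poiseuille-type channel correction and the numbers in the test are illustrative stand-ins under the assumption of fully developed flow between parallel walls, not the paper's exact formulation.

```python
def glottal_pressure(p_upstream, rho, u, mu, h, length):
    """Pressure inside a narrow channel of height h and length `length`:
    quasisteady Bernoulli drop (0.5 * rho * u^2) plus a Poiseuille-type
    viscous drop (12 * mu * L * u / h^2 for developed channel flow)."""
    return p_upstream - 0.5 * rho * u ** 2 - 12.0 * mu * length * u / h ** 2
```

The h**2 in the denominator is why viscous effects dominate as the folds approach collision: halving the gap quadruples the viscous pressure drop at fixed velocity.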

  7. A simple statistical model for geomagnetic reversals

    NASA Technical Reports Server (NTRS)

    Constable, Catherine

    1990-01-01

The diversity of paleomagnetic records of geomagnetic reversals now available indicates that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.

  8. A simple branching model that reproduces language family and language population distributions

    NASA Astrophysics Data System (ADS)

    Schwämmle, Veit; de Oliveira, Paulo Murilo Castro

    2009-07-01

    Human history leaves fingerprints in human languages. Little is known about language evolution and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data of real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used in order to define the distance between two languages, similarly to linguistics tradition since Swadesh.

  9. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. 
It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.

  10. Simple nonlinear modelling of earthquake response in torsionally coupled R/C structures: A preliminary study

    NASA Astrophysics Data System (ADS)

    Saiidi, M.

    1982-07-01

An equivalent single-degree-of-freedom (SDOF) nonlinear model, the Q-Model-13, was examined. The study aimed to: (1) determine the seismic response of a torsionally coupled building based on multidegree-of-freedom (MDOF) and SDOF nonlinear models; and (2) develop a simple SDOF nonlinear model to calculate the displacement history of structures with eccentric centers of mass and stiffness. It is shown that planar models are able to yield qualitative estimates of the response of the building. The model is used to estimate the response of a hypothetical six-story frame-wall reinforced concrete building with torsional coupling, using two different earthquake intensities. It is shown that the Q-Model-13 can lead to a satisfactory estimate of the response of the structure in both cases.

  11. Experimental investigation and numerical simulation of 3He gas diffusion in simple geometries: implications for analytical models of 3He MR lung morphometry.

    PubMed

    Parra-Robles, J; Ajraoui, S; Deppe, M H; Parnell, S R; Wild, J M

    2010-06-01

    Models of lung acinar geometry have been proposed to analytically describe the diffusion of (3)He in the lung (as measured with pulsed gradient spin echo (PGSE) methods) as a possible means of characterizing lung microstructure from measurement of the (3)He ADC. In this work, major limitations in these analytical models are highlighted in simple diffusion weighted experiments with (3)He in cylindrical models of known geometry. The findings are substantiated with numerical simulations based on the same geometry using finite difference representation of the Bloch-Torrey equation. The validity of the existing "cylinder model" is discussed in terms of the physical diffusion regimes experienced and the basic reliance of the cylinder model and other ADC-based approaches on a Gaussian diffusion behaviour is highlighted. The results presented here demonstrate that physical assumptions of the cylinder model are not valid for large diffusion gradient strengths (above approximately 15 mT/m), which are commonly used for (3)He ADC measurements in human lungs. (c) 2010 Elsevier Inc. All rights reserved.

  12. Modeling error in experimental assays using the bootstrap principle: Understanding discrepancies between assays using different dispensing technologies

    PubMed Central

    Hanson, Sonya M.; Ekins, Sean; Chodera, John D.

    2015-01-01

    All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
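The dilution-series example mentioned above can be sketched in a few lines. This is not the paper's accompanying IPython notebook; the 2-fold dilution geometry, the per-transfer coefficient of variation, and the systematic bias below are all invented parameters used to show how per-step error propagates.

```python
import random
import statistics


def simulate_dilution_series(n_boot=2000, n_steps=8, cv_transfer=0.02,
                             bias_transfer=0.99, seed=7):
    """Bootstrap-style simulation of a 2-fold serial dilution: each
    transfer of one nominal volume into one volume of diluent carries
    random imprecision (cv_transfer) and a systematic bias
    (bias_transfer, e.g. the handler delivering 1% short).  Returns the
    simulated final-well concentrations."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_boot):
        conc = 1.0
        for _ in range(n_steps):
            v = bias_transfer * rng.gauss(1.0, cv_transfer)
            conc *= v / (v + 1.0)
        finals.append(conc)
    return finals
```

Two effects fall out directly: random imprecision compounds roughly with the square root of the number of steps, while bias compounds multiplicatively, so every downstream well is systematically off. That is precisely how a small per-transfer error becomes a visible between-assay discrepancy.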

  13. A practical model for pressure probe system response estimation (with review of existing models)

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
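For context, the classical back-of-envelope estimate that such work improves upon treats the line-plus-cavity system as a Helmholtz resonator. The sketch below is that textbook first guess, with an approximate end correction and illustrative geometry in the test; it is not the model proposed in the paper.

```python
import math


def probe_natural_frequency(line_radius, line_length, transducer_volume,
                            speed_of_sound=343.0):
    """Helmholtz-resonator estimate of the natural frequency (Hz) of a
    pneumatic probe system: a line of given radius/length (m) feeding a
    transducer cavity of given volume (m^3).  The 0.85*r end correction
    is a common approximation."""
    area = math.pi * line_radius ** 2
    eff_length = line_length + 0.85 * line_radius
    return (speed_of_sound / (2.0 * math.pi)) * math.sqrt(
        area / (eff_length * transducer_volume))
```

For a 0.5 mm radius, 0.5 m line into a 50 mm^3 cavity this lands in the low hundreds of Hz, which is why long traverse lines limit the flow features a probe can resolve.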

  14. Construction typification as the tool for optimizing the functioning of a robotized manufacturing system

    NASA Astrophysics Data System (ADS)

    Gwiazda, A.; Banas, W.; Sekala, A.; Foit, K.; Hryniewicz, P.; Kost, G.

    2015-11-01

The process of workcell design is limited by various constructional requirements. These relate to the technological parameters of the manufactured element, to the specifications of purchased workcell components, and to the technical characteristics of the workcell scene, which shows the complexity of the design-constructional process itself. The result of such an approach is an individually designed workcell suited to a specific location and a specific production cycle; if these parameters change, the whole workcell configuration must be rebuilt. It is therefore important to elaborate a base of typical elements of a robot kinematic chain that could be used as a tool for building such workcells. Virtual modelling of kinematic chains of industrial robots requires several preparatory phases. First, it is important to create a database of elements that model industrial robot arms. These models could be described as functional primitives that represent the components of the kinematic pairs and the structural members of industrial robots. A database with the following elements is created: the base of kinematic pairs, the base of robot structural elements, and the base of robot work scenes. The first of these databases includes kinematic pairs, the key components of the manipulator actuator modules; as mentioned previously, it includes rotary pairs of the fifth class. This type of kinematic pair was chosen because it occurs most frequently in the structures of industrial robots. The second base consists of robot structural elements, and therefore allows the conversion of schematic structures of kinematic chains into the structural elements of industrial robot arms. It contains, inter alia, structural elements such as the base and stiff members (simple or angular units), which allow schematic elements to be converted into three-dimensional ones. The last database is a database of scenes. 
It includes both simple and complex elements: simple models of technological equipment, conveyor models, models of obstacles and the like. Using these elements, various production spaces (robotized workcells) can be formed, in which it is possible to virtually track the operation of an industrial robot arm modelled in the system.

  15. How Polish Children Switch from One Case to Another when Using Novel Nouns: Challenges for Models of Inflectional Morphology

    ERIC Educational Resources Information Center

    Krajewski, Grzegorz; Theakston, Anna L.; Lieven, Elena V. M.; Tomasello, Michael

    2011-01-01

    The two main models of children's acquisition of inflectional morphology--the Dual-Mechanism approach and the usage-based (schema-based) approach--have both been applied mainly to languages with fairly simple morphological systems. Here we report two studies of 2-3-year-old Polish children's ability to generalise across case-inflectional endings…

  16. Mathematical prediction of core body temperature from environment, activity, and clothing: The heat strain decision aid (HSDA).

    PubMed

    Potter, Adam W; Blanchard, Laurie A; Friedl, Karl E; Cadarette, Bruce S; Hoyt, Reed W

    2017-02-01

Physiological models provide useful summaries of complex interrelated regulatory functions. These can often be reduced to simple input requirements and simple predictions for pragmatic applications. This paper demonstrates this modeling efficiency by tracing the development of one such simple model, the Heat Strain Decision Aid (HSDA), originally developed to address Army needs. The HSDA, which derives from the Givoni-Goldman equilibrium body core temperature prediction model, uses 16 inputs from four elements: individual characteristics, physical activity, clothing biophysics, and environmental conditions. These inputs are used to mathematically predict core temperature (Tc) rise over time and can estimate water turnover from sweat loss. Based on a history of military applications such as derivation of training and mission planning tools, we conclude that the HSDA model is a robust integration of physiological rules that can guide a variety of useful predictions. The HSDA model is limited to generalized predictions of thermal strain and does not provide individualized predictions that could be obtained from physiological sensor data-driven predictive models. This fully transparent physiological model should be improved and extended with new findings and new challenging scenarios. Published by Elsevier Ltd.
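A heavily simplified sketch of the equilibrium-approach behaviour underlying Givoni-Goldman-type models is given below. In the real HSDA, the equilibrium temperature and time course are computed from the 16 inputs; here both the equilibrium value and the time constant are simply supplied as assumptions.

```python
import math


def core_temp(t_min, t_initial, t_equilibrium, tau_min=30.0):
    """Exponential relaxation of core body temperature (deg C) toward an
    equilibrium value over t_min minutes; tau_min is an illustrative
    time constant, not an HSDA-fitted value."""
    return t_equilibrium - (t_equilibrium - t_initial) * math.exp(-t_min / tau_min)
```

The pragmatic appeal is the same as the abstract describes: once the equilibrium and time constant are predicted from conditions, the full temperature trajectory (and hence safe work duration) follows from one closed-form expression.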

  17. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  18. The nature of the colloidal 'glass' transition.

    PubMed

    Dawson, Kenneth A; Lawlor, A; DeGregorio, Paolo; McCullagh, Gavin D; Zaccarelli, Emanuela; Foffi, Giuseppe; Tartaglia, Piero

    2003-01-01

    The dynamically arrested state of matter is discussed in the context of athermal systems, such as the hard sphere colloidal arrest. We believe that the singular dynamical behaviour near arrest, expressed, for example, in how the diffusion constant vanishes, may be 'universal', in a sense to be discussed in the paper. Based on this we argue the merits of studying the problem with simple lattice models. This, by analogy with the critical point of the Ising model, should lead us to clarify the questions, and begin the program of establishing the degree of universality to be expected. We deal only with 'ideal' athermal dynamical arrest transitions, such as those found for hard sphere systems. However, it is argued that dynamically available volume (DAV) is the relevant order parameter of the transition, and that universal mechanisms may be well expressed in terms of DAV. For simple lattice models we give examples of simple laws that emerge near the dynamical arrest, emphasising the idea of a near-ideal gas of 'holes', interacting to give the power law diffusion constant scaling near the arrest. We also seek to open the discussion of the possibility of an underlying weak coupling theory of the dynamical arrest transition, based on DAV.

  19. Model Estimation Using Ridge Regression with the Variance Normalization Criterion. Interim Report No. 2. The Education and Inequality in Canada Project.

    ERIC Educational Resources Information Center

    Lee, Wan-Fung; Bulcock, Jeffrey Wilson

    The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…

  20. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  1. Income Distribution Over Educational Levels: A Simple Model.

    ERIC Educational Resources Information Center

    Tinbergen, Jan

    An econometric model is formulated that explains income per person in various compartments of the labor market defined by three main levels of education and by education required. The model enables an estimation of the effect of increased access to education on that distribution. The model is based on a production function for the economy as a whole; a…

  2. "Model-Based Reasoning Is Not a Simple Thing": Investigating Enactment of Modeling in Five High School Biology Classrooms

    ERIC Educational Resources Information Center

    Gaytan, Candice Renee

    2017-01-01

    Modeling is an important scientific practice through which scientists generate, evaluate, and revise scientific knowledge, and it can be translated into science classrooms as a means for engaging students in authentic scientific practice. Much of the research investigating modeling in classrooms focuses on student learning, leaving a gap in…

  3. The problem with simple lumped parameter models: Evidence from tritium mean transit times

    NASA Astrophysics Data System (ADS)

    Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr

    2017-04-01

    Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and errors resulting from disregard of the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium it applies at much greater ages than it does with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole with parameters representing averages for the system).
With well-chosen compound LPMs (which are combinations of simple LPMs) on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically- and geologically-validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. 2016: Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
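
    The aggregation error described in this record can be reproduced numerically. The sketch below is a deliberately simplified illustration (constant tritium input, exponential-model simple LPM); the decay constant is the real one for tritium, but the two subsystem MTTs are arbitrary example values, not data from the study.

    ```python
    import math

    TRITIUM_LAMBDA = math.log(2) / 12.32  # decay constant (1/yr); half-life 12.32 yr

    def c_exponential_lpm(mtt_yr, c_in=1.0):
        """Steady-state tritium concentration for an exponential-TTD simple LPM."""
        return c_in / (1.0 + TRITIUM_LAMBDA * mtt_yr)

    def apparent_mtt(c_obs, c_in=1.0):
        """MTT inferred by fitting a single exponential LPM to the observation."""
        return (c_in / c_obs - 1.0) / TRITIUM_LAMBDA

    # Two equal subsystems: a young one (2 yr) and an old one (100 yr).
    c_mix = 0.5 * (c_exponential_lpm(2.0) + c_exponential_lpm(100.0))
    mtt_apparent = apparent_mtt(c_mix)  # far below the true mean of 51 yr
    ```

    Because concentration is a convex function of MTT, the young water dominates the mixed signal and the single-LPM fit underestimates the true mean MTT, which is the aggregation bias discussed by Kirchner (2016a) and Stewart et al. (2016).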

  4. Developing a new intelligent system for the diagnosis of tuberculous pleural effusion.

    PubMed

    Li, Chengye; Hou, Lingxian; Sharma, Bishundat Yanesh; Li, Huaizhong; Chen, ChengShui; Li, Yuping; Zhao, Xuehua; Huang, Hui; Cai, Zhennao; Chen, Huiling

    2018-01-01

    In countries with a high prevalence of tuberculosis (TB), clinicians often diagnose tuberculous pleural effusion (TPE) using diagnostic tests that have not only poor sensitivity but also poor availability. The aim of our study is to develop a new artificial intelligence based diagnostic model that is accurate, fast, non-invasive, and cost-effective for diagnosing TPE. It is expected that a tool derived from the model can be installed on simple computing devices (such as smartphones and tablets) and used widely by clinicians. For this study, data from 140 patients, whose clinical signs, routine blood test results, blood biochemistry markers, pleural fluid cell type and count, and pleural fluid biochemical test results were prospectively collected, were stored in a database. An artificial intelligence based diagnostic model, which employs a moth flame optimization based support vector machine with feature selection (FS-MFO-SVM), is constructed to predict the diagnosis of TPE. The optimal model achieves an average accuracy (ACC) of 95%, an area under the receiver operating characteristic curve (AUC) of 0.9564, a sensitivity of 93.35%, and a specificity of 97.57%. The proposed artificial intelligence based diagnostic model is found to be highly reliable for diagnosing TPE based on simple clinical signs, blood samples, and pleural effusion samples. Therefore, the proposed model can be widely used in clinical practice and further evaluated as a substitute for invasive pleural biopsies. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Graph-Theoretic Properties of Networks Based on Word Association Norms: Implications for Models of Lexical Semantic Memory

    ERIC Educational Resources Information Center

    Gruenenfelder, Thomas M.; Recchia, Gabriel; Rubin, Tim; Jones, Michael N.

    2016-01-01

    We compared the ability of three different contextual models of lexical semantic memory (BEAGLE, Latent Semantic Analysis, and the Topic model) and of a simple associative model (POC) to predict the properties of semantic networks derived from word association norms. None of the semantic models were able to accurately predict all of the network…

  6. A Mechanistic Design Approach for Graphite Nanoplatelet (GNP) Reinforced Asphalt Mixtures for Low-Temperature Applications

    DOT National Transportation Integrated Search

    2018-01-01

    This report explores the application of a discrete computational model for predicting the fracture behavior of asphalt mixtures at low temperatures based on the results of simple laboratory experiments. In this discrete element model, coarse aggregat...

  7. EPA MODELING TOOLS FOR CAPTURE ZONE DELINEATION

    EPA Science Inventory

    The EPA Office of Research and Development supports a step-wise modeling approach for design of wellhead protection areas for water supply wells. A web-based WellHEDSS (wellhead decision support system) is under development for determining when simple capture zones (e.g., centri...

  8. Using Necessary Information to Identify Item Dependence in Passage-Based Reading Comprehension Tests

    ERIC Educational Resources Information Center

    Baldonado, Angela Argo; Svetina, Dubravka; Gorin, Joanna

    2015-01-01

    Applications of traditional unidimensional item response theory models to passage-based reading comprehension assessment data have been criticized based on potential violations of local independence. However, simple rules for determining dependency, such as including all items associated with a particular passage, may overestimate the dependency…

  9. From Lévy to Brownian: a computational model based on biological fluctuation.

    PubMed

    Nurzaman, Surya G; Matsumoto, Yoshio; Nakamura, Yutaka; Shirai, Kazumichi; Koizumi, Satoshi; Ishiguro, Hiroshi

    2011-02-03

    Theoretical studies predict that Lévy walks maximize the chance of encountering randomly distributed targets at low density, whereas Brownian walks are favorable inside a patch of targets with high density. Recently, experimental data have shown that some animals indeed exhibit Lévy and Brownian walk movement patterns when foraging for food in areas of low and high density, respectively. This paper presents a simple, Gaussian-noise-utilizing computational model that can realize such behavior. We extend the Lévy walk model of one of the simplest creatures, Escherichia coli, based on the biological fluctuation framework. We build a simulation of a simple, generic animal to observe whether Lévy or Brownian walks are performed appropriately depending on the target density, and investigate the emergent behavior in a commonly faced patchy environment where the density alternates. Based on the model, the animal behavior of choosing Lévy or Brownian walk movement patterns according to the target density can be generated without changing the essence of the stochastic property in the Escherichia coli physiological mechanism as explained by related research. The emergent behavior and its benefits in a patchy environment are also discussed. The model provides a framework for further investigation of the role of internal noise in realizing adaptive and efficient foraging behavior.
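
    The qualitative difference between the two movement strategies can be seen simply by sampling step lengths. The sketch below is illustrative only: Pareto and Gaussian step-length distributions with arbitrary parameters stand in for Lévy and Brownian movement; it is not the paper's biological-fluctuation model.

    ```python
    import random

    random.seed(42)  # reproducible draws

    def levy_steps(n, alpha=1.5):
        """Heavy-tailed (Pareto) step lengths: mostly short moves, rare long relocations."""
        return [random.paretovariate(alpha) for _ in range(n)]

    def brownian_steps(n, sigma=1.0):
        """Gaussian step magnitudes: consistently short, similar-sized moves."""
        return [abs(random.gauss(0.0, sigma)) for _ in range(n)]

    levy = levy_steps(1000)
    brown = brownian_steps(1000)
    ```

    The rare long relocations in the heavy-tailed sample are what let a searcher escape empty regions at low target density, while uniformly short Gaussian steps keep it inside a dense patch.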

  10. Density Driven Removal of Sediment from a Buoyant Muddy Plume

    NASA Astrophysics Data System (ADS)

    Rouhnia, M.; Strom, K.

    2014-12-01

    Experiments were conducted to study the effect of settling-driven instabilities on sediment removal from hypopycnal plumes. Traditional approaches scale removal rates with particle settling velocity; however, it has been suggested that removal from buoyant suspensions happens at higher rates. The enhancement of removal is likely due to gravitational instabilities, such as fingering, at the two-fluid interface. Previous studies have all sought to suppress flocculation, and no simple model exists to predict removal rates under the effect of such instabilities. This study examines whether or not flocculation hampers instability formation and presents a simple removal rate model accounting for gravitational instabilities. A buoyant suspension of flocculated kaolinite overlying a base of clear saltwater was investigated in a laboratory tank. Concentration was continuously measured in both layers with a pair of OBS sensors, and the interface was monitored with digital cameras. Snapshots from the video were used to measure finger velocity. Samples of flocculated particles at the interface were extracted to retrieve floc size data using a floc camera. Flocculation did not stop the creation of settling-driven fingers. A simple cylinder-based force balance model was capable of predicting finger velocity. An analogy between the fingering process of fine-grained suspensions and thermal plume formation, together with the concept of the Grashof number, enabled us to model finger spacing as a function of initial concentration. Finally, from geometry, the effective cross-sectional area was correlated to finger spacing. The outward flux expression was reformulated by substituting finger velocity for particle settling velocity and finger area for total area. A box model along with the proposed outward flux was used to predict the SSC in the buoyant layer. The model quantifies the removal flux based on the initial SSC and is in good agreement with the experimental data.

  11. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.

  12. A study of stiffness, residual strength and fatigue life relationships for composite laminates

    NASA Technical Reports Server (NTRS)

    Ryder, J. T.; Crossman, F. W.

    1983-01-01

    Qualitative and quantitative exploration of the relationship between stiffness, strength, fatigue life, residual strength, and damage of unnotched graphite/epoxy laminates subjected to tension loading. Clarification of the mechanics of tension loading is intended to explain previous contradictory observations and hypotheses, to develop a simple procedure to anticipate strength, fatigue life, and stiffness changes, and to provide reasons for the study of more complex cases of compression, notches, and spectrum fatigue loading. Mathematical models are developed based upon analysis of the damage states, using laminate analysis, free-body-type modeling, or strain energy release rates. Enough understanding of the tension-loaded case is developed to allow a proposed, simple procedure for calculating strain to failure, stiffness, strength, data scatter, and the shape of the stress-life curve for unnotched laminates subjected to tension load.

  13. Fitting neuron models to spike trains.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.

  14. Grey-box modelling of aeration tank settling.

    PubMed

    Bechman, Henrik; Nielsen, Marinus K; Poulsen, Niels Kjølstad; Madsen, Henrik

    2002-04-01

    A model of the concentrations of suspended solids (SS) in the aeration tanks, and in the effluent from these tanks, during aeration tank settling (ATS) operation is established. The model is based on simple SS mass balances, a model of the sludge settling, and a simple model of how the SS concentration in the effluent from the aeration tanks depends on the actual concentrations in the tanks and the sludge blanket depth. The model is formulated in continuous time by means of stochastic differential equations with discrete-time observations. The parameters of the model are estimated using a maximum likelihood method from data from an alternating BioDenipho wastewater treatment plant (WWTP). The model is an important tool for analyzing ATS operation and for selecting the appropriate control actions during ATS, as the model can be used to predict the SS amounts in the aeration tanks as well as in the effluent from the aeration tanks.
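
    The deterministic core of such an SS mass balance can be sketched in a single update step. This is a hedged simplification: the paper's grey-box model is a stochastic differential equation with a sludge-blanket submodel, whereas the step below is a plain explicit-Euler balance with made-up parameter names (`q`, `v`, `v_settle`, `depth`).

    ```python
    def ss_step(ss, dt, q, v, ss_in, v_settle, depth):
        """One explicit-Euler step of a minimal SS mass balance for a tank
        during settling: inflow/outflow exchange plus a settling sink.
        All parameters are illustrative, not the paper's grey-box model.
        """
        dss = (q / v) * (ss_in - ss) - (v_settle / depth) * ss
        return ss + dt * dss
    ```

    With zero inflow concentration and a positive settling velocity, repeated steps drive the tank SS concentration down, which is the qualitative behavior the grey-box model captures (with added process noise) during ATS operation.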

  15. Simulations of multi-contrast x-ray imaging using near-field speckles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre

    2016-01-28

    X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e. poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present a simulation software used to model the image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help for better understanding and optimising the technique itself.

  16. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  17. And So It Grows: Using a Computer-Based Simulation of a Population Growth Model to Integrate Biology & Mathematics

    ERIC Educational Resources Information Center

    Street, Garrett M.; Laubach, Timothy A.

    2013-01-01

    We provide a 5E structured-inquiry lesson so that students can learn more of the mathematics behind the logistic model of population biology. By using models and mathematics, students understand how population dynamics can be influenced by relatively simple changes in the environment.

  18. Evaluation of 2D shallow-water model for spillway flow with a complex geometry

    USDA-ARS?s Scientific Manuscript database

    Although the two-dimensional (2D) shallow water model is formulated based on several assumptions such as hydrostatic pressure distribution and vertical velocity is negligible, as a simple alternative to the complex 3D model, it has been used to compute water flows in which these assumptions may be ...

  19. A simple theoretical model for ⁶³Ni betavoltaic battery.

    PubMed

    Zuo, Guoping; Zhou, Jianliang; Ke, Guotu

    2013-12-01

    A numerical simulation of the energy deposition distribution in semiconductors is performed for ⁶³Ni beta particles. Results show that the energy deposition distribution exhibits an approximate exponential decay law. A simple theoretical model is developed for ⁶³Ni betavoltaic battery based on the distribution characteristics. The correctness of the model is validated by two literature experiments. Results show that the theoretical short-circuit current agrees well with the experimental results, and the open-circuit voltage deviates from the experimental results in terms of the influence of the PN junction defects and the simplification of the source. The theoretical model can be applied to ⁶³Ni and ¹⁴⁷Pm betavoltaic batteries. Copyright © 2013 Elsevier Ltd. All rights reserved.
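
    The exponential deposition law described in this record lends itself to a back-of-envelope current estimate. The sketch below is a hedged illustration only: the attenuation length, electron-hole pair-creation energy, and assumed unity collection efficiency are illustrative placeholders, not the paper's fitted values.

    ```python
    import math

    def absorbed_fraction(depth_um, atten_len_um=5.0):
        """Fraction of beta energy deposited within the active region,
        assuming exponential decay of energy deposition with depth."""
        return 1.0 - math.exp(-depth_um / atten_len_um)

    def short_circuit_current(p_beta_w, depth_um, e_pair_ev=3.6, atten_len_um=5.0):
        """Rough Isc (A): absorbed beta power converted to electron-hole pairs
        at e_pair_ev per pair, with unity collection efficiency assumed.
        Numerically, I = q * P_abs / (e_pair_ev * q) = P_abs / e_pair_ev."""
        p_abs = p_beta_w * absorbed_fraction(depth_um, atten_len_um)
        return p_abs / e_pair_ev
    ```

    The deviation of real open-circuit voltage from such an ideal estimate, noted in the abstract, comes from PN junction defects and source self-absorption that this sketch deliberately ignores.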

  20. Coupled Particle Transport and Pattern Formation in a Nonlinear Leaky-Box Model

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.; El-Nemr, K. W.; Baird, J. K.

    2009-01-01

    Effects of particle-particle coupling on particle characteristics in nonlinear leaky-box type descriptions of the acceleration and transport of energetic particles in space plasmas are examined in the framework of a simple two-particle model based on the Fokker-Planck equation in momentum space. In this model, the two particles are assumed coupled via a common nonlinear source term. In analogy with a prototypical mathematical system of diffusion-driven instability, this work demonstrates that steady-state patterns with strong dependence on the magnetic turbulence but a rather weak one on the coupled particles attributes can emerge in solutions of a nonlinearly coupled leaky-box model. The insight gained from this simple model may be of wider use and significance to nonlinearly coupled leaky-box type descriptions in general.

  1. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.
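
    A mass-fraction rule of mixtures of the kind this record describes can be sketched as follows; the σs values in the example are round illustrative numbers, not the paper's measurements.

    ```python
    def sigma_composite(w_hard, sigma_hard, sigma_soft):
        """Specific magnetization (emu/g) of a hard/soft composite from the
        hard-phase mass fraction w_hard (rule of mixtures; a minimal sketch,
        not necessarily the paper's exact analytical model)."""
        if not 0.0 <= w_hard <= 1.0:
            raise ValueError("mass fraction must lie in [0, 1]")
        return w_hard * sigma_hard + (1.0 - w_hard) * sigma_soft
    ```

    Working with mass fractions sidesteps the practical difficulty, noted in the abstract, of measuring the volume of each phase in a composite.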

  2. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE PAGES

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol; ...

    2017-07-10

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  3. Dynamics of Zika virus outbreaks: an overview of mathematical modeling approaches.

    PubMed

    Wiratsudakul, Anuwat; Suparit, Parinya; Modchang, Charin

    2018-01-01

    The Zika virus was first discovered in 1947. It was neglected until a major outbreak occurred on Yap Island, Micronesia, in 2007. Teratogenic effects resulting in microcephaly in newborn infants are the greatest public health threat. In 2016, the Zika virus epidemic was declared a Public Health Emergency of International Concern (PHEIC). Consequently, mathematical models were constructed to explicitly elucidate related transmission dynamics. In this review article, two steps of journal article searching were performed. First, we attempted to identify mathematical models previously applied to the study of vector-borne diseases using the search terms "dynamics," "mathematical model," "modeling," and "vector-borne" together with the names of vector-borne diseases including chikungunya, dengue, malaria, West Nile, and Zika. Then the identified types of model were further investigated. Second, we narrowed down our survey to focus on only Zika virus research. The terms we searched for were "compartmental," "spatial," "metapopulation," "network," "individual-based," "agent-based" AND "Zika." All relevant studies were included regardless of the year of publication. We have collected research articles that were published before August 2017 based on our search criteria. In this publication survey, we explored the Google Scholar and PubMed databases. We found five basic model architectures previously applied to vector-borne virus studies, particularly in Zika virus simulations. These include compartmental, spatial, metapopulation, network, and individual-based models. We found that Zika models carried out for early epidemics were mostly fit into compartmental structures and were less complicated compared to the more recent ones. Simple models are still commonly used for the timely assessment of epidemics.
Nevertheless, due to the availability of large-scale real-world data and computational power, recently there has been growing interest in more complex modeling frameworks. Mathematical models are employed to explore and predict how an infectious disease spreads in the real world, evaluate the disease importation risk, and assess the effectiveness of intervention strategies. As the trends in modeling of infectious diseases have been shifting towards data-driven approaches, simple and complex models should be exploited differently. Simple models can be produced in a timely fashion to provide an estimation of the possible impacts. In contrast, complex models integrating real-world data require more time to develop but are far more realistic. The preparation of complicated modeling frameworks prior to the outbreaks is recommended, including the case of future Zika epidemic preparation.

  4. Using entropy measures to characterize human locomotion.

    PubMed

    Leverick, Graham; Szturm, Tony; Wu, Christine Q

    2014-12-01

    Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
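    The quantized measures proposed in this record (QDE, QASE) are the authors' own; as a baseline, the classic sample entropy they compare against can be sketched directly. The signals below are synthetic stand-ins for gait time series: a regular signal should score lower than an irregular one.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance < r) also match
    for m + 1 points; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)          # common tolerance choice
    n = len(x)

    def match_count(mm):
        count = 0
        for i in range(n - m):       # same template range for m and m + 1
            for j in range(i + 1, n - m):
                if np.max(np.abs(x[i:i + mm] - x[j:j + mm])) < r:
                    count += 1
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)               # irregular signal
sine = np.sin(np.linspace(0, 8 * np.pi, 300))  # regular signal
se_noise = sample_entropy(noise)
se_sine = sample_entropy(sine)
```

    The regular sine scores near zero while white noise scores well above it, matching the intuition that entropy rises with dynamical irregularity.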

  5. A remote sensing based vegetation classification logic for global land cover analysis

    USGS Publications Warehouse

    Running, Steven W.; Loveland, Thomas R.; Pierce, Lars L.; Nemani, R.R.; Hunt, E. Raymond

    1995-01-01

    This article proposes a simple new logic for classifying global vegetation. The critical features of this classification are that 1) it is based on simple, observable, unambiguous characteristics of vegetation structure that are important to ecosystem biogeochemistry and can be measured in the field for validation, 2) the structural characteristics are remotely sensible so that repeatable and efficient global reclassifications of existing vegetation will be possible, and 3) the defined vegetation classes directly translate into the biophysical parameters required by global climate and biogeochemical models. A first test of this logic for the continental United States is presented based on an existing 1 km AVHRR normalized difference vegetation index database. Procedures for solving critical remote sensing problems needed to implement the classification are discussed. Also, some inferences from this classification to advanced vegetation biophysical variables such as specific leaf area and photosynthetic capacity useful to global biogeochemical modeling are suggested.

  6. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishop, Joseph E.; Brown, Judith Alice

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with a hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.

  7. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE PAGES

    Bishop, Joseph E.; Brown, Judith Alice

    2018-06-15

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with a hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.

  8. How interactions between animal movement and landscape processes modify range dynamics and extinction risk

    EPA Science Inventory

    Range dynamics models now incorporate many of the mechanisms and interactions that drive species distributions. However, connectivity continues to be studied using overly simple distance-based dispersal models with little consideration of how the individual behavior of dispersin...

  9. PROPOSED MODELS FOR ESTIMATING RELEVANT DOSE RESULTING FROM EXPOSURES BY THE GASTROINTESTINAL ROUTE

    EPA Science Inventory

    Simple first-order intestinal absorption commonly used in physiologically-based pharmacokinetic(PBPK) models can be made to fit many clinical administrations but may not provide relevant information to extrapolate to real-world exposure scenarios for risk assessment. Small hydr...

  10. GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL

    EPA Science Inventory

    Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial a...

  11. A Thermal-based Two-Source Energy Balance Model for Estimating Evapotranspiration over Complex Canopies

    USDA-ARS?s Scientific Manuscript database

    Land surface temperature (LST) provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition as well as providing useful information for constraining prognostic land surface models. This presentation describes a robust but relatively simple LS...

  12. Simple model of inhibition of chain-branching combustion processes

    NASA Astrophysics Data System (ADS)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction, combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and shifts the reaction from the chain-branching mode to a thermal mode of flame propagation. As the inhibitor concentration increases, a transition from the chain-branching mode to a straight-chain (non-branching) reaction mode is observed. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor. Heat losses are incorporated into the model via Newtonian cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of the results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase model, detailed kinetic model) with the results obtained using the suggested simple model is presented. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to the thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation with inhibitor addition observed using detailed kinetic models.
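    The competition between branching and inhibition that this record describes can be caricatured with a one-variable radical balance. This is an illustrative toy, not the paper's three-reaction inhibition block: dR/dt = w0 + (kb - kt - ki*c)R, where kb is branching, kt termination, ki*c the inhibitor sink, and w0 a small initiation rate.

```python
# Toy radical-balance sketch of chain-branching inhibition (illustrative
# rate constants, not the paper's kinetic model). When branching (kb)
# outweighs termination (kt) plus inhibition (ki * c) the radical pool grows
# exponentially; enough inhibitor makes the net rate negative and the pool
# settles at the small steady state w0 / (kt + ki*c - kb).
def radical_level(c_inhibitor, kb=5.0, kt=1.0, ki=2.0, w0=1e-3,
                  t_end=5.0, dt=1e-4):
    r = 0.0
    net = kb - kt - ki * c_inhibitor   # net radical production rate
    for _ in range(int(t_end / dt)):   # forward-Euler integration
        r += dt * (w0 + net * r)
    return r

r_uninhibited = radical_level(0.0)   # net rate +4: runaway radical growth
r_inhibited = radical_level(3.0)     # net rate -2: bounded radical pool
```

    The sign change of the net rate is the toy analogue of the transition from the chain-branching mode to the straight-chain (thermal) mode described above.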

  13. Modeling the growth and branching of plants: A simple rod-based model

    NASA Astrophysics Data System (ADS)

    Faruk Senan, Nur Adila; O'Reilly, Oliver M.; Tresierras, Timothy N.

    A rod-based model for plant growth and branching is developed in this paper. Specifically, Euler's theory of the elastica is modified to accommodate growth and remodeling. In addition, branching is characterized using a configuration force and evolution equations are postulated for the flexural stiffness and intrinsic curvature. The theory is illustrated with examples of multiple static equilibria of a branched plant and the remodeling and tip growth of a plant stem under gravitational loading.

  14. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
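    The MOVER idea the record describes can be illustrated on a simplified setting: a single-level lognormal exposure sample rather than the paper's one-way random effects model. The sketch below combines a large-sample normal CI for the log-scale mean with a chi-square CI for the variance term; the chi-square quantile uses the Wilson-Hilferty approximation so the code needs only the standard library. Data and parameters are simulated, not from the paper.

```python
import math
import random
from statistics import NormalDist, mean, variance

def chi2_ppf(p, k):
    """Wilson-Hilferty approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return k * (1 - 2 / (9 * k) + z * math.sqrt(2 / (9 * k))) ** 3

def mover_ci_lognormal_mean(x, conf=0.95):
    """MOVER-style CI for the lognormal mean exp(mu + sigma^2 / 2):
    combine separate CIs for mu and sigma^2 / 2 on the log scale."""
    y = [math.log(v) for v in x]
    n, ybar, s2 = len(y), mean(y), variance(y)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    l1, u1 = ybar - z * math.sqrt(s2 / n), ybar + z * math.sqrt(s2 / n)
    l2 = (n - 1) * s2 / (2 * chi2_ppf(0.5 + conf / 2, n - 1))
    u2 = (n - 1) * s2 / (2 * chi2_ppf(0.5 - conf / 2, n - 1))
    est = ybar + s2 / 2
    half_lo = math.sqrt((ybar - l1) ** 2 + (s2 / 2 - l2) ** 2)
    half_hi = math.sqrt((u1 - ybar) ** 2 + (u2 - s2 / 2) ** 2)
    return math.exp(est - half_lo), math.exp(est), math.exp(est + half_hi)

random.seed(1)
data = [random.lognormvariate(1.0, 0.5) for _ in range(50)]  # simulated exposures
lo, point, hi = mover_ci_lognormal_mean(data)
```

    Extending this to the worker random effect of the one-way model requires variance-component CIs as in the paper; the recovery-of-variance-estimates step shown here is the common core.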

  15. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.
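    The simple linear regression described in this record reduces to two closed-form estimates. As a minimal sketch (in Python rather than the article's R, with made-up data points roughly on the line y = 2x + 1):

```python
# Ordinary least-squares simple linear regression from first principles,
# mirroring what R's lm(y ~ x) computes for a single predictor.
def simple_linear_regression(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx                    # beta1 = Sxy / Sxx
    intercept = my - slope * mx          # beta0 = ybar - beta1 * xbar
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r_squared = 1 - ss_res / ss_tot      # proportion of variance explained
    return slope, intercept, r_squared

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.1, 4.9, 7.2, 9.0, 10.8]           # roughly y = 2x + 1
slope, intercept, r2 = simple_linear_regression(x, y)
```

    The R-squared value is one of the applicability checks the article discusses; residual diagnostics would complete the picture.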

  16. A simple algorithm to estimate the effective regional atmospheric parameters for thermal-inertia mapping

    USGS Publications Warehouse

    Watson, K.; Hummer-Miller, S.

    1981-01-01

    A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined from least-squares method by fitting observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided. ?? 1981.

  17. Center for Parallel Optimization.

    DTIC Science & Technology

    1996-03-19

    A new optimization-based approach to improving generalization in machine learning has been proposed and computationally validated on simple linear models as well as on highly nonlinear systems such as neural networks.

  18. Capillarity Guided Patterning of Microliquids.

    PubMed

    Kang, Myeongwoo; Park, Woohyun; Na, Sangcheol; Paik, Sang-Min; Lee, Hyunjae; Park, Jae Woo; Kim, Ho-Young; Jeon, Noo Li

    2015-06-01

    Soft lithography and other techniques have been developed to investigate biological and chemical phenomena as an alternative to photolithography-based patterning methods that have compatibility problems. Here, a simple approach for nonlithographic patterning of liquids and gels inside microchannels is described. Using a design that incorporates strategically placed microstructures inside the channel, microliquids or gels can be spontaneously trapped and patterned when the channel is drained. The ability to form microscale patterns inside microfluidic channels using simple fluid drain motion offers many advantages. This method is geometrically analyzed based on hydrodynamics and verified with simulation and experiments. Various materials (i.e., water, hydrogels, and other liquids) are successfully patterned with complex shapes that are isolated from each other. Multiple cell types are patterned within the gels. Capillarity guided patterning (CGP) is fast, simple, and robust. It is not limited by pattern shape, size, cell type, and material. In a simple three-step process, a 3D cancer model that mimics cell-cell and cell-extracellular matrix interactions is engineered. The simplicity and robustness of the CGP will be attractive for developing novel in vitro models of organ-on-a-chip and other biological experimental platforms amenable to long-term observation of dynamic events using advanced imaging and analytical techniques. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Life extending control: An interdisciplinary engineering thrust

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Merrill, Walter C.

    1991-01-01

    The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.

  20. Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography

    PubMed Central

    Wang, Yu; Peng, Bo; Jiang, Jingfeng

    2017-01-01

    Ultrasound-based elastography including strain elastography (SE), acoustic radiation force Impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. “ground truth”) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity – one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. 
The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330

  1. Building an open-source simulation platform of acoustic radiation force-based breast elastography

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Peng, Bo; Jiang, Jingfeng

    2017-03-01

    Ultrasound-based elastography including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity—one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. 
In summary, our initial results were consistent with our expectations and what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.

  2. Simple extrapolation method to predict the electronic structure of conjugated polymers from calculations on oligomers

    DOE PAGES

    Larsen, Ross E.

    2016-04-12

    In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of the number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
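    The extrapolation idea can be illustrated with a generic Hückel-style chain model. This is a sketch of the concept, not the paper's FFOE parameterization: the oligomer frontier level is modeled as the lowest eigenvalue of a chain of N coupled fragment orbitals, with on-site energy alpha fixed by the "monomer calculation" and coupling beta by the "dimer calculation" (both values below are invented for illustration).

```python
import numpy as np

alpha_true, beta_true = -5.0, -0.4   # eV, illustrative fragment parameters

def frontier_level(n):
    """Lowest eigenvalue of an N-site tridiagonal chain Hamiltonian."""
    h = alpha_true * np.eye(n) + beta_true * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.linalg.eigvalsh(h)[0]

# "Calculations" on the two smallest oligomers parameterize the model ...
alpha_fit = frontier_level(1)              # monomer: E(1) = alpha
beta_fit = frontier_level(2) - alpha_fit   # dimer:   E(2) = alpha + beta

# ... and the chain eigenvalues E(N) = alpha + 2*beta*cos(pi/(N+1))
# then give the polymer (N -> infinity) limit analytically:
polymer_limit = alpha_fit + 2 * beta_fit
```

    Explicit diagonalization of a long chain converges to the extrapolated limit, which is the consistency check the FFOE scheme relies on.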

  3. Comparing crop growth and carbon budgets simulated across AmeriFlux agricultural sites using the community land model (CLM)

    USDA-ARS?s Scientific Manuscript database

    Improving process-based crop models is needed to achieve high fidelity forecasts of regional energy, water, and carbon exchange. However, most state-of-the-art Land Surface Models (LSMs) assessed in the fifth phase of the Coupled Model Inter-comparison project (CMIP5) simulated crops as simple C3 or...

  4. Malaria transmission rates estimated from serological data.

    PubMed Central

    Burattini, M. N.; Massad, E.; Coutinho, F. A.

    1993-01-01

    A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The transmission rates estimated were applied to a simple compartmental model in order to mimic malaria transmission. The model reproduced the serological and parasite prevalence data well. PMID:8270011
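    A minimal version of the serology-to-transmission idea is the classic catalytic model: with a constant force of infection lam, expected seroprevalence at age a is P(a) = 1 - exp(-lam*a). The sketch below (synthetic, noise-free data with an assumed lam of 0.1 per year; the paper's model is age-dependent and stochastic) recovers lam by a coarse grid search.

```python
import math

# Catalytic-model sketch: recover a constant force of infection from
# age-stratified seroprevalence. Ages and lam_true are illustrative.
ages = [1, 5, 10, 20, 40]
lam_true = 0.1
seroprev = [1 - math.exp(-lam_true * a) for a in ages]

def fit_lambda(ages, prev, grid=None):
    """Grid-search least squares for lam in P(a) = 1 - exp(-lam * a)."""
    grid = grid or [i / 1000 for i in range(1, 501)]  # 0.001 .. 0.5 per year
    def sse(lam):
        return sum((p - (1 - math.exp(-lam * a))) ** 2
                   for a, p in zip(ages, prev))
    return min(grid, key=sse)

lam_hat = fit_lambda(ages, seroprev)
```

    With real serological data the fit would use a likelihood for binomial seropositive counts per age class rather than least squares.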

  5. Development of mathematical models of environmental physiology

    NASA Technical Reports Server (NTRS)

    Stolwijk, J. A. J.; Mitchell, J. W.; Nadel, E. R.

    1971-01-01

    Selected articles concerned with mathematical or simulation models of human thermoregulation are presented. The articles presented include: (1) development and use of simulation models in medicine, (2) model of cardio-vascular adjustments during exercise, (3) effective temperature scale based on simple model of human physiological regulatory response, (4) behavioral approach to thermoregulatory set point during exercise, and (5) importance of skin temperature in sweat regulation.

  6. Generalized estimators of avian abundance from count survey data

    USGS Publications Warehouse

    Royle, J. Andrew

    2004-01-01

    I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitute a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be considered, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
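    The hierarchical structure described here can be sketched for the simple point-count case as a binomial-Poisson mixture: local abundance N_i ~ Poisson(lam) and each visit count y_ij ~ Binomial(N_i, p), with N_i marginalized out of the likelihood. The sketch below simulates data and fits by a coarse grid search; the simulation settings and grid are illustrative, not the estimator used in the paper.

```python
import math
import numpy as np

# Simulate replicated counts at 40 sites, 3 visits each (illustrative values).
rng = np.random.default_rng(42)
n_sites, n_visits = 40, 3
lam_true, p_true = 5.0, 0.5
abundance = rng.poisson(lam_true, n_sites)
counts = rng.binomial(abundance[:, None], p_true, (n_sites, n_visits))

def neg_log_lik(lam, p, counts, n_max=30):
    """Marginal likelihood: sum over latent abundance N at each site."""
    total = 0.0
    for site in counts:
        site_lik = 0.0
        for n in range(int(site.max()), n_max + 1):
            pois = math.exp(-lam) * lam ** n / math.factorial(n)
            binom = 1.0
            for y in site:
                yi = int(y)
                binom *= math.comb(n, yi) * p ** yi * (1 - p) ** (n - yi)
            site_lik += pois * binom
        total -= math.log(site_lik)
    return total

lam_grid = np.arange(1.0, 10.1, 0.5)
p_grid = np.arange(0.1, 0.91, 0.1)
lam_hat, p_hat = min(((l, q) for l in lam_grid for q in p_grid),
                     key=lambda g: neg_log_lik(g[0], g[1], counts))
```

    Because a binomial thinning of a Poisson is again Poisson, the product lam*p is the strongly identified quantity; separating lam from p is what the replicated visits buy.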

  7. Using simple agent-based modeling to inform and enhance neighborhood walkability.

    PubMed

    Badland, Hannah; White, Marcus; Macaulay, Gus; Eagleson, Serryn; Mavoa, Suzanne; Pettit, Christopher; Giles-Corti, Billie

    2013-12-11

    Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory 'what-if' scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. 
These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) 'learning' and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume).
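    The "catchment area versus circular buffer ratio" metric mentioned above can be sketched on a toy street grid: a breadth-first search over walkable cells within a distance budget gives the network catchment, which is then divided by the naive circular buffer of the same radius. The grid layout, start cell, and budget below are made up for illustration and are far simpler than the tool's agent-based model.

```python
from collections import deque

def pedshed_ratio(walkable, start, budget):
    """Network catchment (BFS within budget steps over walkable cells,
    value 1 = walkable) divided by the circular buffer of radius budget."""
    rows, cols = len(walkable), len(walkable[0])
    seen = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if seen[(r, c)] == budget:
            continue                      # distance budget exhausted
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc] \
                    and (nr, nc) not in seen:
                seen[(nr, nc)] = seen[(r, c)] + 1
                queue.append((nr, nc))
    circle = sum(1 for r in range(rows) for c in range(cols)
                 if (r - start[0]) ** 2 + (c - start[1]) ** 2 <= budget ** 2)
    return len(seen) / circle

# 9x9 open grid with an impassable block (0s) near the start cell
grid = [[1] * 9 for _ in range(9)]
for r in range(2, 5):
    grid[r][5] = 0
ratio = pedshed_ratio(grid, (4, 4), 3)
```

    Obstacles pull the ratio below 1, which is exactly the gap between circular-buffer and network-based catchments that motivates the tool.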

  8. The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California

    USGS Publications Warehouse

    van Mantgem, P.J.; Stephenson, N.L.

    2005-01-01

    1 We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2 We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3 Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4 Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. ?? 2005 British Ecological Society.
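    The projection machinery behind this record is compact enough to sketch. The stage-classified (Lefkovitch) matrix below uses made-up vital rates, not the Sierra Nevada estimates: each census step multiplies the stage-structured population vector by the matrix, and the dominant eigenvalue gives the asymptotic growth rate.

```python
import numpy as np

# Size-structured projection sketch with illustrative vital rates:
# columns are current size classes; entries combine stasis, growth to the
# next class, and reproduction by adults.
A = np.array([[0.50, 0.00, 2.00],   # seedlings: stasis + adult fecundity
              [0.30, 0.60, 0.00],   # saplings: recruitment + stasis
              [0.00, 0.20, 0.90]])  # adults: growth + stasis

n0 = np.array([100.0, 50.0, 20.0])  # initial counts per size class

# One projection interval (e.g. a 5-year census step): n(t+1) = A @ n(t)
n1 = A @ n0

# Asymptotic growth rate = dominant eigenvalue of A
lam = max(np.linalg.eigvals(A).real)

# After many steps the realized per-step growth converges to lam
n_t = np.linalg.matrix_power(A, 100) @ n0
ratio = (A @ n_t).sum() / n_t.sum()
```

    The record's caveats apply directly to this sketch: the matrix is time-invariant and density-independent, so growth autocorrelation, disturbance, or temporal variation in vital rates will degrade long-horizon projections.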

  9. A feedback model of figure-ground assignment.

    PubMed

    Domijan, Drazen; Setić, Mia

    2008-05-30

    A computational model is proposed in order to explain how bottom-up and top-down signals are combined into a unified perception of figure and background. The model is based on the interaction between the ventral and the dorsal stream. The dorsal stream computes saliency based on boundary signals provided by the simple and the complex cortical cells. Output from the dorsal stream is projected to the surface network which serves as a blackboard on which the surface representation is formed. The surface network is a recurrent network which segregates different surfaces by assigning different firing rates to them. The figure is labeled by the maximal firing rate. Computer simulations showed that the model correctly assigns figural status to the surface with a smaller size, a greater contrast, convexity, surroundedness, horizontal-vertical orientation and a higher spatial frequency content. The simple gradient of activity in the dorsal stream enables the simulation of the new principles of the lower region and the top-bottom polarity. The model also explains how the exogenous attention and the endogenous attention may reverse the figural assignment. Due to the local excitation in the surface network, neural activity at the cued region will spread over the whole surface representation. Therefore, the model implements the object-based attentional selection.

  10. Analysis of tablet compaction. I. Characterization of mechanical behavior of powder and powder/tooling friction.

    PubMed

    Cunningham, J C; Sinka, I C; Zavaliangos, A

    2004-08-01

    In this first of two articles on the modeling of tablet compaction, the experimental inputs related to the constitutive model of the powder and the powder/tooling friction are determined. The continuum-based analysis of tableting makes use of an elasto-plastic model, which incorporates the elements of yield, plastic flow potential, and hardening, to describe the mechanical behavior of microcrystalline cellulose over the range of densities experienced during tableting. Specifically, a modified Drucker-Prager/cap plasticity model, which includes material parameters such as cohesion, internal friction, and hydrostatic yield pressure that evolve with the internal state variable relative density, was applied. Linear elasticity is assumed with the elastic parameters, Young's modulus, and Poisson's ratio dependent on the relative density. The calibration techniques were developed based on a series of simple mechanical tests including diametrical compression, simple compression, and die compaction using an instrumented die. The friction behavior is measured using an instrumented die and the experimental data are analyzed using the method of differential slices. The constitutive model and frictional properties are essential experimental inputs to the finite element-based model described in the companion article. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93:2022-2039, 2004

  11. A simple microviscometric approach based on Brownian motion tracking.

    PubMed

    Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan

    2015-02-01

    Viscosity, an integral property of a liquid, is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach is a freely available standalone script that processes particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
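
    As a sketch of the kind of analysis such a script performs (assuming 2D tracking, a spherical probe, and free diffusion; the function names and parameters below are illustrative, not the authors' published script), viscosity can be recovered from tracked trajectories via the Stokes-Einstein relation:

```python
import numpy as np

def msd(positions, lag):
    """Mean squared displacement at a given frame lag.

    positions: (N, 2) array of particle coordinates in metres.
    """
    d = positions[lag:] - positions[:-lag]
    return float(np.mean(np.sum(d**2, axis=1)))

def viscosity_from_tracks(positions, dt, diameter, T=298.15):
    """Estimate dynamic viscosity (Pa s) from a 2D Brownian trajectory.

    For free 2D diffusion MSD(tau) = 4*D*tau, and the Stokes-Einstein
    relation gives D = kB*T / (3*pi*eta*d), so eta = kB*T / (3*pi*d*D).
    """
    kB = 1.380649e-23  # Boltzmann constant, J/K
    lags = np.arange(1, min(10, len(positions) // 2))
    taus = lags * dt
    msds = np.array([msd(positions, l) for l in lags])
    D = np.polyfit(taus, msds, 1)[0] / 4.0  # slope of MSD vs tau, over 4
    return kB * T / (3 * np.pi * diameter * D)
```

    The fit uses only short lags, where MSD estimates from a finite track are most reliable.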

  12. A minimally sufficient model for rib proximal-distal patterning based on genetic analysis and agent-based simulations

    PubMed Central

    Mah, In Kyoung

    2017-01-01

    For decades, the mechanism of skeletal patterning along a proximal-distal axis has been an area of intense inquiry. Here, we examine the development of the ribs, simple structures that in most terrestrial vertebrates consist of two skeletal elements—a proximal bone and a distal cartilage portion. While the ribs have been shown to arise from the somites, little is known about how the two segments are specified. During our examination of genetically modified mice, we discovered a series of progressively worsening phenotypes that could not be easily explained. Here, we combine genetic analysis of rib development with agent-based simulations to conclude that proximal-distal patterning and outgrowth could occur based on simple rules. In our model, specification occurs during somite stages due to varying Hedgehog protein levels, while later expansion refines the pattern. This framework is broadly applicable for understanding the mechanisms of skeletal patterning along a proximal-distal axis. PMID:29068314

  13. The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.

    ERIC Educational Resources Information Center

    Gilbert, R.

    1979-01-01

    Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)

  14. Diffusion of Super-Gaussian Profiles

    ERIC Educational Resources Information Center

    Rosenberg, C.-J.; Anderson, D.; Desaix, M.; Johannisson, P.; Lisak, M.

    2007-01-01

    The present analysis describes an analytically simple and systematic approximation procedure for modelling the free diffusive spreading of initially super-Gaussian profiles. The approach is based on a self-similar ansatz for the evolution of the diffusion profile, and the parameter functions involved in the modelling are determined by suitable…
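
    Although the approach above is analytical, the free diffusive spreading of a super-Gaussian profile can also be checked numerically. A minimal sketch, assuming a 1D domain and illustrative parameters, integrates u_t = D u_xx with an explicit FTCS scheme from u0 = exp(-(x/w)^(2m)):

```python
import numpy as np

def diffuse_super_gaussian(m=3, w=1.0, D=1.0, L=20.0, nx=401, t_end=0.5):
    """Evolve u_t = D u_xx from a super-Gaussian u0 = exp(-(x/w)**(2m)).

    Uses the explicit FTCS scheme with a time step satisfying the
    stability condition D*dt/dx**2 <= 0.5.
    """
    x = np.linspace(-L / 2, L / 2, nx)
    dx = x[1] - x[0]
    u = np.exp(-(np.abs(x) / w) ** (2 * m))
    dt = 0.4 * dx**2 / D                  # stable step
    for _ in range(int(np.ceil(t_end / dt))):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return x, u
```

    The initially flat-topped profile broadens and relaxes toward a Gaussian-like shape while its integral is conserved up to negligible boundary leakage.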

  15. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments

    USDA-ARS?s Scientific Manuscript database

    Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...

  16. Modeling the Hydrogen Bond within Molecular Dynamics

    ERIC Educational Resources Information Center

    Lykos, Peter

    2004-01-01

    The structure of a hydrogen bond is elucidated within the framework of molecular dynamics based on the Rahman and Stillinger (R-S) treatment of liquid water. Thus, undergraduates are exposed to the powerful but simple application of classical mechanics to solid objects from a molecular viewpoint.

  17. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  18. Application of geometry based hysteresis modelling in compensation of hysteresis of piezo bender actuator

    NASA Astrophysics Data System (ADS)

    Milecki, Andrzej; Pelic, Marcin

    2016-10-01

    This paper presents the results of applying a new method of modelling piezo bender actuators. A special hysteresis simulation model was developed and is presented. The model is based on a geometrical deformation of the main hysteresis loop. The piezoelectric effect is described and the history of hysteresis modelling is briefly reviewed. First, a simple model for main-loop modelling is proposed. Then, a geometrical description of the non-saturated hysteresis is presented and its modelling method is introduced. The modelling makes use of the functions describing the geometrical shape of the two hysteresis main curves, which can be defined theoretically or obtained by measurement. These main curves are stored in memory and transformed geometrically in order to obtain the minor curves. The model was prepared in the Matlab-Simulink software, but can be easily implemented in any programming language and applied in an on-line controller. In comparison to other known simulation methods, the one presented in this paper is easy to understand and uses simple arithmetical equations, allowing the inverse model of the hysteresis to be obtained quickly. The inverse model was then used to compensate the non-saturated hysteresis of a piezo bender actuator, and those results are also presented in the paper.

  19. System-level modeling of acetone-butanol-ethanol fermentation.

    PubMed

    Liao, Chen; Seo, Seung-Oh; Lu, Ting

    2016-05-01

    Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. On the context-dependent scaling of consumer feeding rates.

    PubMed

    Barrios-O'Neill, Daniel; Kelly, Ruth; Dick, Jaimie T A; Ricciardi, Anthony; MacIsaac, Hugh J; Emmerson, Mark C

    2016-06-01

    The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology. © The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
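
    The feeding interactions (functional responses) discussed here are commonly described by a Holling type II form. A minimal sketch, with illustrative rather than fitted capture rates, shows how habitat-mediated reductions in capture rate lower feeding rates:

```python
def holling_type2(N, a, h):
    """Holling type II per-capita feeding rate.

    N: prey density, a: capture (attack) rate, h: handling time per prey.
    The rate rises with N and saturates at 1/h.
    """
    return a * N / (1.0 + a * h * N)

# Illustrative capture rates for three habitat contexts (not fitted values):
contexts = {"3D water column": 1.0, "simple substrate": 0.5, "complex substrate": 0.2}
rates = {name: holling_type2(50.0, a, h=0.1) for name, a in contexts.items()}
```

    Successive reductions in the capture rate a, as reported for the simple and complex benthic substrates, translate directly into lower feeding rates at any prey density.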

  1. Spatial Pattern of Cell Damage in Tissue from Heavy Ions

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem L.; Huff, Janice L.; Cucinotta, Francis A.

    2007-01-01

    A new Monte Carlo algorithm was developed that can model the passage of heavy ions in tissue, and their action on the cellular matrix, for 2- or 3-dimensional cases. The build-up of secondaries such as projectile fragments, target fragments, other light fragments, and delta-rays was simulated. In one example, cells were modeled as a cell culture monolayer, with the data taken directly from microscopy (2-d cell matrix). A simple model of tissue was given as abstract spheres closely approximating real cell geometries (3-d cell matrix), and a realistic model of tissue based on microscopy images was also proposed. Image segmentation was used to identify cells in an irradiated cell culture monolayer, or in slices of tissue. The cells were then inserted into the model box pixel by pixel. In the case of cell monolayers (2-d), the image size may exceed the modeled box size. Such an image is moved with respect to the box in order to sample as many cells as possible. In the case of the simple tissue (3-d), the tissue box is modeled with periodic boundary conditions, which extrapolate the technique to macroscopic volumes of tissue. For real tissue, specific spatial patterns of cell apoptosis and necrosis are expected. The cell patterns were modeled based on action cross sections for apoptosis and necrosis estimated from BNL and other experimental data.
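
    The periodic boundary conditions used for the 3-d tissue box can be sketched generically as coordinate wrapping plus minimum-image displacements (an illustrative sketch, not the authors' code):

```python
import numpy as np

def wrap(positions, box):
    """Wrap coordinates into the periodic box [0, box) along each axis."""
    return np.mod(positions, box)

def min_image(r1, r2, box):
    """Displacement from r1 to r2 under the minimum-image convention."""
    d = np.asarray(r2, float) - np.asarray(r1, float)
    return d - box * np.round(d / box)
```

    Wrapping keeps every ion track inside the box, while minimum-image displacements ensure distances are measured to the nearest periodic copy of each cell.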

  2. Modeling Speed-Accuracy Tradeoff in Adaptive System for Practicing Estimation

    ERIC Educational Resources Information Center

    Nižnan, Juraj

    2015-01-01

    Estimation is useful in situations where an exact answer is not as important as a quick answer that is good enough. A web-based adaptive system for practicing estimates is currently being developed. We propose a simple model for estimating student's latent skill of estimation. This model combines a continuous measure of correctness and response…

  3. Landscape scale mapping of forest inventory data by nearest neighbor classification

    Treesearch

    Andrew Lister

    2009-01-01

    One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...

  4. Aerothermal modeling program, phase 1

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; Reynolds, R.; Ball, I.; Berry, R.; Johnson, K.; Mongia, H.

    1983-01-01

    Aerothermal submodels used in analytical combustor models are analyzed. The models described include turbulence and scalar transport, gaseous full combustion, spray evaporation/combustion, soot formation and oxidation, and radiation. The computational scheme is discussed in relation to boundary conditions and convergence criteria. Also presented is the data base for benchmark quality test cases and an analysis of simple flows.

  5. Task effects, performance levels, features, configurations, and holistic face processing: A reply to Rossion

    PubMed Central

    Riesenhuber, Maximilian; Wolff, Brian S.

    2009-01-01

    Summary A recent article in Acta Psychologica (“Picture-plane inversion leads to qualitative changes of face perception” by B. Rossion, 2008) criticized several aspects of an earlier paper of ours (Riesenhuber et al., “Face processing in humans is compatible with a simple shape-based model of vision”, Proc Biol Sci, 2004). We here address Rossion’s criticisms and correct some misunderstandings. To frame the discussion, we first review our previously presented computational model of face recognition in cortex (Jiang et al., “Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques”, Neuron, 2006) that provides a concrete biologically plausible computational substrate for holistic coding, namely a neural representation learned for upright faces, in the spirit of the original simple-to-complex hierarchical model of vision by Hubel and Wiesel. We show that Rossion’s and others’ data support the model, and that there is actually a convergence of views on the mechanisms underlying face recognition, in particular regarding holistic processing. PMID:19665104

  6. Prospects for Alpha Particle Heating in JET in the Hot Ion Regime

    NASA Astrophysics Data System (ADS)

    Cordey, J. G.; Keilhacker, M.; Watkins, M. L.

    1987-01-01

    The prospects for alpha particle heating in JET are discussed. A computational model is developed to represent adequately the neutron yield from JET plasmas heated by neutral beam injection. This neutral beam model, augmented by a simple plasma model, is then used to determine the neutron yields and fusion Q-values anticipated for different heating schemes in future operation of JET with tritium. The relative importance of beam-thermal and thermal-thermal reactions is pointed out and the dependence of the results on, for example, plasma density, temperature, energy confinement and purity is shown. Full 1½-D transport code calculations, based on models developed for ohmic, ICRF and NBI heated JET discharges, are used also to provide a power scan for JET operation in tritium in the low density, high ion temperature regime. The results are shown to be in good agreement with the estimates made using the simple plasma model and indicate that, based on present knowledge, a fusion Q-value in the plasma centre above unity should be achieved in JET.

  7. Webcam camera as a detector for a simple lab-on-chip time based approach.

    PubMed

    Wongwilai, Wasin; Lapanantnoppakhun, Somchai; Grudpan, Supara; Grudpan, Kate

    2010-05-15

    A modification of a webcam camera for use as a small and low cost detector was demonstrated with a simple lab-on-chip reactor. Real-time continuous monitoring of the reaction zone could be done. Acid-base neutralization with phenolphthalein indicator was used as a model reaction. The fading of the pink color of the indicator as the acidic solution diffused into the basic solution zone was recorded as the change in the red, green and blue color fractions (%RGB). The change was related to acid concentration. A low cost, portable, semi-automated analysis system was achieved.
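
    The color-channel readout described here can be sketched as averaging a region of interest in each frame and reporting per-channel fractions; the array shapes and function names are illustrative assumptions, not the authors' software:

```python
import numpy as np

def channel_fractions(frame, roi):
    """Percent contribution of R, G, B within a rectangular ROI.

    frame: (H, W, 3) uint8 image; roi: (y0, y1, x0, x1) slice bounds.
    """
    y0, y1, x0, x1 = roi
    patch = frame[y0:y1, x0:x1].astype(float)
    means = patch.reshape(-1, 3).mean(axis=0)   # mean R, G, B in the ROI
    return 100.0 * means / means.sum()          # %R, %G, %B

def monitor(frames, roi):
    """Track %RGB over a sequence of frames (one row per frame)."""
    return np.array([channel_fractions(f, roi) for f in frames])
```

    As the pink indicator fades, the red fraction of the ROI falls toward the neutral value, giving a time-based signal related to acid concentration.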

  8. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but most existing modeling programs have problems; in particular, some are not accurate or are tied to specific CAD formats. To convert CAD models into GDML accurately, a Computer Aided Design (CAD) based modeling method for Geant4 was developed that automatically converts complex CAD geometry models into GDML geometry models. The essence of this method is the translation between CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each with only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM), and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.

  9. Multiwell capillarity-based microfluidic device for the study of 3D tumour tissue-2D endothelium interactions and drug screening in co-culture models.

    PubMed

    Virumbrales-Muñoz, María; Ayuso, José María; Olave, Marta; Monge, Rosa; de Miguel, Diego; Martínez-Lostao, Luis; Le Gac, Séverine; Doblare, Manuel; Ochoa, Ignacio; Fernandez, Luis J

    2017-09-20

    The tumour microenvironment is very complex, and essential in tumour development and drug resistance. The endothelium is critical in the tumour microenvironment: it provides nutrients and oxygen to the tumour and is essential for systemic drug delivery. Therefore, we report a simple, user-friendly microfluidic device for co-culture of a 3D breast tumour model and a 2D endothelium model for cross-talk and drug delivery studies. First, we demonstrated the endothelium was functional, whereas the tumour model exhibited in vivo features, e.g., oxygen gradients and preferential proliferation of cells with better access to nutrients and oxygen. Next, we observed the endothelium structure lost its integrity in the co-culture. Following this, we evaluated two drug formulations of TRAIL (TNF-related apoptosis inducing ligand): soluble and anchored to a LUV (large unilamellar vesicle). Both diffused through the endothelium, LUV-TRAIL being more efficient in killing tumour cells, showing no effect on the integrity of endothelium. Overall, we have developed a simple capillary force-based microfluidic device for 2D and 3D cell co-cultures. Our device allows high-throughput approaches, patterning different cell types and generating gradients without specialised equipment. We anticipate this microfluidic device will facilitate drug screening in a relevant microenvironment thanks to its simple, effective and user-friendly operation.

  10. Evaluation of SimpleTreat 4.0: Simulations of pharmaceutical removal in wastewater treatment plant facilities.

    PubMed

    Lautz, L S; Struijs, J; Nolte, T M; Breure, A M; van der Grinten, E; van de Meent, D; van Zelm, R

    2017-02-01

    In this study, the removal of pharmaceuticals from wastewater as predicted by SimpleTreat 4.0 was evaluated. Field data obtained from literature of 43 pharmaceuticals, measured in 51 different activated sludge WWTPs, were used. Based on reported influent concentrations, the effluent concentrations were calculated with SimpleTreat 4.0 and compared to measured effluent concentrations. The model predicts effluent concentrations mostly within a factor of 10, using the specific WWTP parameters as well as SimpleTreat default parameters, while it systematically underestimates concentrations in secondary sludge. This may be caused by unexpected sorption, resulting from variability in WWTP operating conditions, and/or QSAR applicability domain mismatch and background concentrations prior to measurements. Moreover, variability in detection techniques and sampling methods can cause uncertainty in measured concentration levels. To find possible structural improvements, we also evaluated SimpleTreat 4.0 using several specific datasets with different degrees of uncertainty and variability. This evaluation verified that the most influential parameters for water effluent predictions were biodegradation and the hydraulic retention time. Results showed that model performance is highly dependent on the nature and quality, i.e. degree of uncertainty, of the data. The default values for reactor settings in SimpleTreat result in realistic predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Thorough specification of the neurophysiologic processes underlying behavior and of their manifestation in EEG - demonstration with the go/no-go task.

    PubMed

    Shahaf, Goded; Pratt, Hillel

    2013-01-01

    In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights that emerge from widely accepted assumptions regarding neuronal representation. We show that harnessing even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data: the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function, as well as in a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes it is possible to derive simple yet effective, theory-based EEG markers differentiating normal and ADHD subjects. We conclude by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.

  12. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high–throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
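
    As a toy sketch of the kind of enrichment-weighted additive scoring WFS describes (the paper's actual significance statistics and weighting are more elaborate; the log-odds weighting below is an illustrative stand-in):

```python
import numpy as np

def feature_weights(X_toxic, X_safe, pseudo=0.5):
    """Log-odds enrichment weight per structural feature.

    X_toxic, X_safe: (n_compounds, n_features) binary presence matrices.
    A positive weight marks a feature enriched in toxic compounds.
    """
    ft = X_toxic.sum(axis=0) + pseudo
    fs = X_safe.sum(axis=0) + pseudo
    pt = ft / (len(X_toxic) + 2 * pseudo)
    ps = fs / (len(X_safe) + 2 * pseudo)
    return np.log(pt / (1 - pt)) - np.log(ps / (1 - ps))

def toxicity_score(x, weights):
    """Additive score: sum of weights of the features present in compound x."""
    return float(np.dot(x, weights))
```

    The additive structure is what makes this style of model interpretable: each feature's contribution to a compound's score can be read off directly from its weight.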

  13. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks

    PubMed Central

    Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.

    2011-01-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R_adj^2), followed by position (28 ± 24% of R_adj^2) and speed (11 ± 19% of R_adj^2). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R_adj^2 values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
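
    The lead/lag regression described, with a time constant τ shifting firing relative to kinematics, can be sketched by scanning candidate integer lags and keeping the best ordinary least-squares fit; the function and the signals in the test are synthetic illustrations, not the study's analysis code:

```python
import numpy as np

def best_lag(firing, kinematic, max_lag):
    """Scan integer lags; a positive lag means firing leads the kinematic signal.

    Returns (lag, r2) for the ordinary least-squares fit
    firing[t] ~ b0 + b1 * kinematic[t + lag] with the highest R^2.
    """
    best = (0, -np.inf)
    n = len(firing)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            y, x = firing[:n - lag], kinematic[lag:]
        else:
            y, x = firing[-lag:], kinematic[:n + lag]
        A = np.column_stack([np.ones_like(x), x])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid.var() / y.var()
        if r2 > best[1]:
            best = (lag, r2)
    return best
```

    In this convention the recovered lag plays the role of τ: a positive value means the discharge predicts upcoming kinematics rather than reporting feedback.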

  14. Study of simple land battles using agent-based modeling: Strategy and emergent phenomena

    NASA Astrophysics Data System (ADS)

    Westley, Alexandra; de Meglio, Nicholas; Hager, Rebecca; Mok, Jorge Wu; Shanahan, Linda; Sen, Surajit

    2017-04-01

    In this paper, we expand upon our recent studies of an agent-based model of a battle between an intelligent army and an insurgent army to explore the role of modifying strategy according to the state of the battle (adaptive strategy) on battle outcomes. This model leads to surprising complexity and rich possibilities in battle outcomes, especially in battles between two well-matched sides. We contend that the use of adaptive strategies may be effective in winning battles.

  15. Robust model predictive control for satellite formation keeping with eccentricity/inclination vector separation

    NASA Astrophysics Data System (ADS)

    Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong

    2018-05-01

    This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Collision avoidance can alternatively be accomplished by using eccentricity/inclination vectors and adding a simple goal function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.

  16. Exponential growth kinetics for Polyporus versicolor and Pleurotus ostreatus in submerged culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroad, P.A.; Wilke, C.R.

    1977-04-01

    Simple mathematical models for a batch culture of pellet-forming fungi in submerged culture were tested on growth data for Polyporus versicolor (ATCC 12679) and Pleurotus ostreatus (ATCC 9415). A kinetic model based on a growth rate proportional to the two-thirds power of the cell mass was shown to be satisfactory. A model based on a growth rate directly proportional to the cell mass fitted the data equally well, however, and may be preferable because of mathematical simplicity.
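
    The two growth laws compared here have simple closed forms: with dM/dt = k M^(2/3), the substitution u = M^(1/3) gives du/dt = k/3, so M(t) = (M0^(1/3) + k t/3)^3, while dM/dt = μM gives M(t) = M0 e^(μt). A minimal sketch (rate constants illustrative):

```python
import math

def pellet_mass(M0, k, t):
    """Closed form of dM/dt = k * M**(2/3): cube-root (surface-limited) growth."""
    return (M0 ** (1 / 3) + k * t / 3) ** 3

def exponential_mass(M0, mu, t):
    """Closed form of dM/dt = mu * M: mass-proportional (exponential) growth."""
    return M0 * math.exp(mu * t)
```

    The two-thirds law reflects growth limited to the pellet surface, while the proportional law treats all cell mass as equally active; over a short batch culture both can fit the same data well, as the abstract notes.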

  17. Development and Testing of a Simple Calibration Technique for Long-Term Hydrological Impact Assessment (L-THIA) Model

    NASA Astrophysics Data System (ADS)

    Muthukrishnan, S.; Harbor, J.

    2001-12-01

    Hydrological studies are a significant part of engineering, development, and geological projects undertaken to assess and understand the interactions between hydrology and the environment. Such studies are generally conducted both before a project begins and after it is completed, so that a comprehensive analysis can be made of the project's impact on the local and regional hydrology of the area. A good understanding of the chain of relationships that form the hydro-eco-biological and environmental cycle can be of immense help in maintaining the natural balance as we work toward exploration and exploitation of natural resources as well as urbanization of undeveloped land. Rainfall-runoff modeling techniques have been of great use here for decades, since they provide fast and efficient means of analyzing the vast amounts of data that are gathered. Though process-based, detailed models are better than simple models, the latter are used more often due to their simplicity, ease of use, and the easy availability of the data needed to run them. The Curve Number (CN) method developed by the United States Department of Agriculture (USDA) is one of the most widely used hydrologic modeling tools in the US, and has earned worldwide acceptance as a practical method for evaluating the effects of land use changes on the hydrology of an area. The Long-Term Hydrological Impact Assessment (L-THIA) model is a basic, CN-based, user-oriented model that has gained popularity among watershed planners because of its reliance on readily available data, because it is easy to use (http://www.ecn.purdue.edu/runoff), and because it produces results geared to the general information needs of planners. The L-THIA model was initially developed to study the relative long-term hydrologic impacts of different land use (past/current/future) scenarios, and it has been successful in meeting this goal. 
However, one weakness of L-THIA, as of other models that focus strictly on surface runoff, is that many users are interested in predictions of runoff that match observations of flow in streams and rivers. To make L-THIA more useful for planners and engineers alike, a simple, long-term calibration method based on linear regression of L-THIA-predicted against observed surface runoff has been developed and tested here. The results from Little Eagle Creek (LEC) in Indiana show that such calibrations are successful and valuable. This method can be used to calibrate other simple rainfall-runoff models as well.
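
    The calibration described reduces to an ordinary least-squares line relating predicted to observed runoff. A minimal sketch with synthetic data (not the Little Eagle Creek record):

    ```python
    # Minimal sketch of the calibration idea: ordinary least-squares regression
    # of model-predicted surface runoff against observed runoff, then applying
    # the fitted line to correct new predictions. Data below are synthetic.

    def fit_line(x, y):
        """Return (slope, intercept) of the least-squares line y = a*x + b."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        a = sxy / sxx
        return a, my - a * mx

    def calibrate(predicted, slope, intercept):
        """Map a raw model prediction onto the observed-runoff scale."""
        return slope * predicted + intercept

    if __name__ == "__main__":
        predicted = [10.0, 20.0, 35.0, 50.0, 80.0]  # model runoff, mm (synthetic)
        observed  = [14.0, 25.0, 41.0, 58.0, 90.0]  # gauged runoff, mm (synthetic)
        a, b = fit_line(predicted, observed)
        print(f"slope={a:.3f} intercept={b:.3f} "
              f"calibrated(40)={calibrate(40.0, a, b):.1f}")
    ```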

  18. Predicting forest dieback in Maine, USA: a simple model based on soil frost and drought

    Treesearch

    Allan N.D. Auclair; Warren E. Heilman; Blondel Brinkman

    2010-01-01

    Tree roots of northern hardwoods are shallow rooted, winter active, and minimally frost hardened; dieback is a winter freezing injury to roots incited by frost penetration in the absence of adequate snow cover and exacerbated by drought in summer. High soil water content greatly increases conductivity of frost. We develop a model based on the sum of z-scores of soil...

  19. Data-driven outbreak forecasting with a simple nonlinear growth model

    PubMed Central

    Lega, Joceline; Brown, Heidi E.

    2016-01-01

    Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. PMID:27770752
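
    The "surprisingly simple mathematical property" reportedly exploited here is that, for logistic-like outbreaks, incidence plotted against cumulative cases C falls on an inverted parabola (dC/dt = aC - bC^2), so a least-squares parabola fit yields the final size K = a/b without transmission parameters. A sketch of that idea on synthetic logistic data, not EpiGro itself:

    ```python
    # Sketch of the parabola-fit idea on synthetic logistic data: fit
    # incidence = a*C + b*C**2 (b < 0) by least squares and read off the
    # final outbreak size K = -a/b. Not the EpiGro implementation.

    import math

    def final_size_from_cumulative(cum):
        """Fit incidence vs. cumulative cases; return the implied final size."""
        xs, ys = [], []
        for c0, c1 in zip(cum, cum[1:]):
            xs.append(0.5 * (c0 + c1))   # midpoint cumulative count
            ys.append(c1 - c0)           # daily incidence
        s1 = sum(x * x for x in xs)
        s2 = sum(x ** 3 for x in xs)
        s3 = sum(x ** 4 for x in xs)
        t1 = sum(y * x for x, y in zip(xs, ys))
        t2 = sum(y * x * x for x, y in zip(xs, ys))
        det = s1 * s3 - s2 * s2          # 2x2 normal equations, Cramer's rule
        a = (t1 * s3 - t2 * s2) / det
        b = (t2 * s1 - t1 * s2) / det    # b comes out negative
        return -a / b

    if __name__ == "__main__":
        K, r, t0 = 1000.0, 0.2, 50.0     # synthetic logistic outbreak
        cum = [K / (1.0 + math.exp(-r * (t - t0))) for t in range(101)]
        print(f"estimated final size: {final_size_from_cumulative(cum):.0f}")
    ```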

  20. Factor Analysis for Clustered Observations.

    ERIC Educational Resources Information Center

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)

  1. The attentional drift-diffusion model extends to simple purchasing decisions.

    PubMed

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions.

  2. The Attentional Drift-Diffusion Model Extends to Simple Purchasing Decisions

    PubMed Central

    Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio

    2012-01-01

    How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions. PMID:22707945

  3. Simple-1: Development stage of the data transmission system for a solid propellant mid-power rocket model

    NASA Astrophysics Data System (ADS)

    Yarce, Andrés; Sebastián Rodríguez, Juan; Galvez, Julián; Gómez, Alejandro; García, Manuel J.

    2017-06-01

    This paper presents the development stage of a communication module for a solid propellant mid-power rocket model. The communication module was named Simple-1, and this work considers its design, construction, and testing. A rocket model Estes Ventris Series Pro II® was modified to introduce, on top of the payload, several sensors in a CanSat form factor. The Printed Circuit Board (PCB) was designed and fabricated from Commercial Off The Shelf (COTS) components and assembled in a cylindrical rack structure similar to this small-format satellite concept. The sensor data were processed using an Arduino Mini and transmitted using a radio module to a Software Defined Radio (SDR) HackRF-based platform at the ground station. The Simple-1 was tested using a drone in successive releases, reaching altitudes from 200 to 300 meters. Different kinds of data, in terms of altitude, position, atmospheric pressure, and vehicle temperature, were successfully measured, enabling progress to the next stage of launching and analysis.

  4. Soil mechanics: breaking ground.

    PubMed

    Einav, Itai

    2007-12-15

    In soil mechanics, 'student's models' are classified as simple models that teach us unexplained elements of behaviour; an example is the Cam clay constitutive model of critical state soil mechanics (CSSM). 'Engineer's models' are models that elaborate the theory to fit more behavioural trends; this is usually done by adding fitting parameters to the student's models. Can currently unexplained behavioural trends of soil be explained without adding fitting parameters to CSSM models, by developing alternative student's models based on modern theories? Here I apply an alternative theory to CSSM, called 'breakage mechanics', and develop a simple student's model for sand. Its unique and distinctive feature is the use of an energy balance equation that connects grain size reduction to consumption of energy, which enables us to predict how the grain size distribution (gsd) evolves, an unprecedented capability in constitutive modelling. With only four parameters, the model physically clarifies what CSSM cannot for sand: the dependency of yielding and critical state on the initial gsd and void ratio.

  5. Toward micro-scale spatial modeling of gentrification

    NASA Astrophysics Data System (ADS)

    O'Sullivan, David

    A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.
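
    The rent-gap transition rule can be caricatured in a few lines; the threshold rule, the neighborhood term, and all numbers below are generic illustrations, not O'Sullivan's actual specification:

    ```python
    # Generic caricature of a rent-gap cellular-automaton step: a cell
    # upgrades when the gap between potential rent and capitalized rent
    # exceeds a threshold, with potential rent pulled up by the neighborhood
    # (a spatial externality). Illustrative only, not the paper's rule set.

    def step(potential, capitalized, neighbors, threshold=0.3, lift=0.5):
        """Return the new (capitalized rent, upgraded?) state of one cell.

        potential   -- rent obtainable under 'highest and best use'
        capitalized -- rent under the current use
        neighbors   -- capitalized rents of proximal (adjacent) cells
        """
        nbhd = sum(neighbors) / len(neighbors)
        target = max(potential, nbhd)     # good neighbors raise the target rent
        gap = target - capitalized
        if gap > threshold * target:      # rent gap large enough -> reinvestment
            return capitalized + lift * gap, True
        return capitalized, False

    if __name__ == "__main__":
        rent, upgraded = step(potential=1.0, capitalized=0.5,
                              neighbors=[0.9, 1.1, 0.8])
        print(rent, upgraded)
    ```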

  6. A simple model of circadian rhythms based on dimerization and proteolysis of PER and TIM

    PubMed Central

    Tyson, JJ; Hong, CI; Thron, CD; Novak, B

    1999-01-01

    Many organisms display rhythms of physiology and behavior that are entrained to the 24-h cycle of light and darkness prevailing on Earth. Under constant conditions of illumination and temperature, these internal biological rhythms persist with a period close to 1 day ("circadian"), but it is usually not exactly 24 h. Recent discoveries have uncovered stunning similarities among the molecular circuitries of circadian clocks in mice, fruit flies, and bread molds. A consensus picture is coming into focus around two proteins (called PER and TIM in fruit flies), which dimerize and then inhibit transcription of their own genes. Although this picture seems to confirm a venerable model of circadian rhythms based on time-delayed negative feedback, we suggest that just as crucial to the circadian oscillator is a positive feedback loop based on stabilization of PER upon dimerization. These ideas can be expressed in simple mathematical form (phase plane portraits), and the model accounts naturally for several hallmarks of circadian rhythms, including temperature compensation and the per(L) mutant phenotype. In addition, the model suggests how an endogenous circadian oscillator could have evolved from a more primitive, light-activated switch. PMID:20540926

  7. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, a test with a simple asymptotic distribution has computational advantages over permutation-based tests for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood-based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  8. Multicomponent ensemble models to forecast induced seismicity

    NASA Astrophysics Data System (ADS)

    Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.

    2018-01-01

    In recent years, human-induced seismicity has become a more and more relevant topic due to its economic and social implications. Several models and approaches have been developed to explain underlying physical processes or forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for supporting on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. 
We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.
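
    A minimal sketch of a Bayesian-weighted ensemble of rate forecasts: model weights proportional to the exponentiated past log-likelihood, an ensemble rate as the weighted sum, and a Poisson count assumption for the probability of at least one M ≥ 3 event. All numbers and the exact weighting details are illustrative, not taken from the study:

    ```python
    # Sketch of a Bayesian-weighted ensemble forecast. Each model's weight is
    # proportional to exp(log-likelihood on past data); the ensemble rate is
    # the weighted sum of per-model rates; P(at least one event) follows from
    # a Poisson assumption. Log-likelihoods and rates below are illustrative.

    import math

    def ensemble_rate(log_likelihoods, rates):
        """Combine per-model forecast rates with posterior-style weights."""
        m = max(log_likelihoods)                 # subtract max for stability
        w = [math.exp(ll - m) for ll in log_likelihoods]
        z = sum(w)
        weights = [wi / z for wi in w]
        return sum(wi * ri for wi, ri in zip(weights, rates)), weights

    def prob_at_least_one(rate):
        """P(N >= 1) for a Poisson count with the given expected rate."""
        return 1.0 - math.exp(-rate)

    if __name__ == "__main__":
        lam, w = ensemble_rate([-120.0, -123.0, -121.5], [0.8, 0.2, 0.5])
        print(f"ensemble rate={lam:.3f}  P(M>=3 event)={prob_at_least_one(lam):.3f}")
    ```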

  9. Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2006-11-01

    A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.

  10. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    PubMed

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R(2)) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.

  11. A measurement-based performability model for a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponential and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.

  12. Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method

    NASA Astrophysics Data System (ADS)

    Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.

    2017-08-01

    Stealth technology manages many signatures of a target, and most radar systems use the radar cross section (RCS) for discriminating targets and classifying them with regard to stealth. During a war, a target's RCS has to be very small to make the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects like a cylinder, a truncated cone (frustum), and a circular flat plate is estimated with respect to parameters like size, frequency, and aspect angle. Due to the difficulties in exactly predicting the RCS, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, where each has its own strengths and weaknesses. Therefore, the analysis given in this study is purely based on far-field monostatic RCS measurements in the optical region. Computation is done using the Physical Optics (PO) method for determining the RCS of simple models. In this study, not only the RCS of simple models but also that of missile-shaped and rocket-shaped models obtained from cascaded objects with backscatter has been computed using Matlab simulation. Rectangular plots are obtained for RCS in dBsm versus aspect angle for simple and missile-shaped objects using Matlab simulation. The treatment of RCS in this study is based on narrow band.
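
    For orientation, the standard high-frequency specular peaks against which physical-optics codes are usually checked can be computed directly. These are textbook formulas, not the authors' Matlab code:

    ```python
    # Textbook high-frequency peak (specular) RCS values often used to check
    # physical-optics codes; standard formulas, not the paper's simulation.
    #   flat plate, normal incidence:  sigma = 4*pi*A^2 / lambda^2
    #   cylinder, broadside:           sigma = 2*pi*r*h^2 / lambda

    import math

    def plate_rcs(area: float, wavelength: float) -> float:
        """Peak RCS of a flat plate of area A at normal incidence."""
        return 4.0 * math.pi * area ** 2 / wavelength ** 2

    def cylinder_rcs(radius: float, height: float, wavelength: float) -> float:
        """Peak RCS of a circular cylinder viewed broadside."""
        return 2.0 * math.pi * radius * height ** 2 / wavelength

    def to_dbsm(sigma: float) -> float:
        """Convert RCS in square metres to dBsm."""
        return 10.0 * math.log10(sigma)

    if __name__ == "__main__":
        lam = 0.03                # 10 GHz, metres
        a = math.pi * 0.1 ** 2    # circular plate, 0.1 m radius
        print(f"plate: {to_dbsm(plate_rcs(a, lam)):.1f} dBsm, "
              f"cylinder: {to_dbsm(cylinder_rcs(0.05, 0.5, lam)):.1f} dBsm")
    ```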

  13. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters but about noise parameters as well.

  14. Route Prediction on Tracking Data to Location-Based Services

    NASA Astrophysics Data System (ADS)

    Petróczi, Attila István; Gáspár-Papanek, Csaba

    Wireless networks have become so widespread that it is beneficial to determine the ability of cellular networks to localize users. This property enables the development of location-based services providing useful information. These services can be improved by route prediction, provided simple algorithms are used because of the limited capabilities of mobile stations. This study gives alternative solutions to this problem of route prediction based on a specific graph model. Our models provide the opportunity to reach our destinations with less effort.

  15. Nonlinear Modeling by Assembling Piecewise Linear Models

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate and robust for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
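
    The assembly step can be shown in a one-dimensional toy: local first-order Taylor models blended with normalized Gaussian radial basis weights. The target function, centers, and width are illustrative, not the paper's aeroelastic setup:

    ```python
    # One-dimensional toy of the assembly idea: local Taylor models
    # f_i + g_i*(x - x_i) blended with normalized Gaussian RBF weights.
    # The target function and sampling are illustrative only.

    import math

    def assemble(x, centers, values, grads, width=0.5):
        """Predict f(x) by RBF-weighted combination of local linear models."""
        w = [math.exp(-((x - c) / width) ** 2) for c in centers]
        z = sum(w)
        pred = 0.0
        for wi, ci, fi, gi in zip(w, centers, values, grads):
            pred += (wi / z) * (fi + gi * (x - ci))  # local Taylor expansion
        return pred

    if __name__ == "__main__":
        f = math.sin
        centers = [i * 0.5 for i in range(13)]       # sample states in [0, 6]
        values = [f(c) for c in centers]
        grads = [math.cos(c) for c in centers]       # exact local gradients
        worst = max(abs(assemble(x / 10.0, centers, values, grads) - f(x / 10.0))
                    for x in range(61))
        print(f"max abs error on [0, 6]: {worst:.4f}")
    ```

    Note that if every local model agrees with a globally linear function, the blend reproduces it exactly, since the weights sum to one.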

  16. Development of a bi-equilibrium model for biomass gasification in a downdraft bed reactor.

    PubMed

    Biagini, Enrico; Barontini, Federica; Tognotti, Leonardo

    2016-02-01

    This work proposes a simple and accurate tool for predicting the main parameters of biomass gasification (syngas composition, heating value, flow rate), suitable for process study and system analysis. A multizonal model based on non-stoichiometric equilibrium models and a repartition factor, simulating the bypass of pyrolysis products through the oxidant zone, was developed. The results of tests with different feedstocks (corn cobs, wood pellets, rice husks and vine pruning) in a demonstrative downdraft gasifier (350 kW) were used for validation. The average discrepancy between model and experimental results was up to 8 times smaller than that of the simple equilibrium model. The repartition factor was successfully related to the operating conditions and characteristics of the biomass to simulate different conditions of the gasifier (variation in potentiality, densification and mixing of feedstock) and analyze the model sensitivity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Water's hydrogen bonds in the hydrophobic effect: a simple model.

    PubMed

    Xu, Huafeng; Dill, Ken A

    2005-12-15

    We propose a simple analytical model to account for water's hydrogen bonds in the hydrophobic effect. It is based on computing a mean-field partition function for a water molecule in the first solvation shell around a solute molecule. The model treats the orientational restrictions from hydrogen bonding, and utilizes quantities that can be obtained from bulk water simulations. We illustrate the principles in a 2-dimensional Mercedes-Benz-like model. Our model gives good predictions for the heat capacity of hydrophobic solvation, reproduces the solvation energies and entropies at different temperatures with only one fitting parameter, and accounts for the solute size dependence of the hydrophobic effect. Our model supports the view that water's hydrogen bonding propensity determines the temperature dependence of the hydrophobic effect. It explains the puzzling experimental observation that dissolving a nonpolar solute in hot water has positive entropy.

  18. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    NASA Astrophysics Data System (ADS)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

    In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.

  19. Modeling Simple Driving Tasks with a One-Boundary Diffusion Model

    PubMed Central

    Ratcliff, Roger; Strayer, David

    2014-01-01

    A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
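
    A one-boundary diffusion decision time can be simulated in a few lines: evidence accumulates with a drift rate plus Gaussian noise until it crosses a single boundary, and the first-passage time is the response time. Parameters below are illustrative, not Ratcliff and Strayer's fitted values:

    ```python
    # Minimal Euler-Maruyama simulation of a one-boundary diffusion process:
    # evidence x accumulates with drift v and noise sigma until it crosses the
    # single boundary a. Parameters are illustrative, not the paper's fits.

    import random

    def first_passage_time(v=1.0, a=1.0, sigma=0.5, dt=0.001, rng=random):
        """Simulate one first-passage (response) time."""
        x, t = 0.0, 0.0
        sq = sigma * dt ** 0.5
        while x < a:
            x += v * dt + sq * rng.gauss(0.0, 1.0)
            t += dt
        return t

    if __name__ == "__main__":
        rng = random.Random(1)
        rts = [first_passage_time(rng=rng) for _ in range(2000)]
        mean_rt = sum(rts) / len(rts)
        # For positive drift v, the theoretical mean first-passage time is a/v.
        print(f"mean RT = {mean_rt:.3f}  (theory: 1.000)")
    ```

    The simulated response-time distribution shows the right-skewed shape characteristic of diffusion-model fits to simple tasks.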

  20. From Lévy to Brownian: A Computational Model Based on Biological Fluctuation

    PubMed Central

    Nurzaman, Surya G.; Matsumoto, Yoshio; Nakamura, Yutaka; Shirai, Kazumichi; Koizumi, Satoshi; Ishiguro, Hiroshi

    2011-01-01

    Background Theoretical studies predict that Lévy walks maximize the chance of encountering randomly distributed targets with a low density, but Brownian walks are favorable inside a patch of targets with high density. Recently, experimental data have shown that some animals indeed exhibit Lévy and Brownian walk movement patterns when foraging for food in areas of low and high density, respectively. This paper presents a simple, Gaussian-noise-utilizing computational model that can realize such behavior. Methodology/Principal Findings We extend the Lévy walk model of one of the simplest creatures, Escherichia coli, based on the biological fluctuation framework. We build a simulation of a simple, generic animal to observe whether Lévy or Brownian walks are performed properly depending on the target density, and investigate the emergent behavior in a commonly faced patchy environment where the density alternates. Conclusions/Significance Based on the model, the animal behavior of choosing Lévy or Brownian walk movement patterns according to the target density can be generated, without changing the essence of the stochastic property of the Escherichia coli physiological mechanism as explained by related research. The emergent behavior and its benefits in a patchy environment are also discussed. The model provides a framework for further investigation of the role of internal noise in realizing adaptive and efficient foraging behavior. PMID:21304911
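
    The density-dependent switch can be illustrated generically: heavy-tailed (Lévy-like, power-law) step lengths at low target density, Gaussian (Brownian-like) steps at high density. This is a caricature of the behavior, not the paper's biological-fluctuation mechanism:

    ```python
    # Generic illustration of the density-dependent switch described above:
    # draw Pareto-tailed (Levy-like) step lengths when local target density
    # is low and Gaussian (Brownian-like) steps when it is high. A caricature,
    # not the paper's biological-fluctuation mechanism.

    import random

    def step_length(target_density, rng, mu=2.0, scale=1.0, threshold=0.5):
        """Sample one step length given the local target density in [0, 1]."""
        if target_density < threshold:
            # Pareto tail: P(L > l) = (l/scale)**(1 - mu) for l >= scale
            return scale * rng.random() ** (-1.0 / (mu - 1.0))
        return abs(rng.gauss(0.0, scale))   # Brownian-like step

    if __name__ == "__main__":
        rng = random.Random(7)
        sparse = [step_length(0.1, rng) for _ in range(5000)]
        dense = [step_length(0.9, rng) for _ in range(5000)]
        print(f"max sparse step = {max(sparse):.1f}, "
              f"max dense step = {max(dense):.1f}")
    ```

    The occasional very long steps in the sparse regime are what give Lévy-like search its advantage when targets are scarce.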

  1. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
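
    The special case stated in the abstract can be checked numerically: in a main-terms Poisson working model with only an intercept and a binary treatment indicator, the maximum likelihood estimate of the treatment coefficient equals the log ratio of group means (the marginal log rate ratio). A sketch with synthetic data:

    ```python
    # Numerical check of the stated special case: for the Poisson model
    # log E[Y] = b0 + b1*T with binary treatment T, the MLE of b1 equals
    # log(mean of treated / mean of controls). Newton-Raphson fit; the
    # outcome data below are synthetic.

    import math

    def poisson_treatment_coef(y, t, iters=50):
        """MLE of b1 in log E[Y] = b0 + b1*T via Newton-Raphson."""
        b0, b1 = 0.0, 0.0
        for _ in range(iters):
            mu = [math.exp(b0 + b1 * ti) for ti in t]
            # score vector and negative Hessian of the 2-parameter model
            s0 = sum(yi - mi for yi, mi in zip(y, mu))
            s1 = sum((yi - mi) * ti for yi, mi, ti in zip(y, mu, t))
            h00 = sum(mu)
            h01 = sum(mi * ti for mi, ti in zip(mu, t))
            h11 = sum(mi * ti * ti for mi, ti in zip(mu, t))
            det = h00 * h11 - h01 * h01
            b0 += (h11 * s0 - h01 * s1) / det
            b1 += (h00 * s1 - h01 * s0) / det
        return b1

    if __name__ == "__main__":
        y = [2, 3, 1, 4, 5, 7, 6, 8]     # synthetic outcome counts
        t = [0, 0, 0, 0, 1, 1, 1, 1]     # treatment indicator
        beta1 = poisson_treatment_coef(y, t)
        mean0 = sum(yi for yi, ti in zip(y, t) if ti == 0) / 4
        mean1 = sum(yi for yi, ti in zip(y, t) if ti == 1) / 4
        print(f"beta1 = {beta1:.6f}, log rate ratio = {math.log(mean1/mean0):.6f}")
    ```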

  2. Word of Mouth: An Agent-based Approach to Predictability of Stock Prices

    NASA Astrophysics Data System (ADS)

    Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko

    This paper addresses how communication processes among investors affect stock price formation, especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word-of-mouth model, is introduced for analyzing the problem. This model provides a simple, but sufficiently versatile, description of the informational diffusion process and succeeds in lucidly explaining the predictability of small-sized stocks, which is a stylized fact in financial markets but difficult to resolve with traditional models. Our model also provides a rigorous examination of the under-reaction hypothesis to informational shocks.

  3. Rapid modeling of complex multi-fault ruptures with simplistic models from real-time GPS: Perspectives from the 2016 Mw 7.8 Kaikoura earthquake

    NASA Astrophysics Data System (ADS)

    Crowell, B.; Melgar, D.

    2017-12-01

    The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles and exhibiting intricate surface deformation patterns. The complexity of this event has motivated multidisciplinary geophysical studies of the underlying source physics to better inform future earthquake hazard models. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time while still satisfying the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first compute simple point-source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions based on these point sources, as well as on simple finite faults determined from source scaling studies, and validate against true recordings of peak ground acceleration and velocity. Second, we perform a slip inversion based upon the CMT fault orientations and forward-model near-field maximum expected tsunami wave heights for comparison against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of the disagreement attributable to local site effects rather than earthquake source complexity. Similarly, the near-field tsunami maximum amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.

  4. Figure-Ground Segmentation Using Factor Graphs

    PubMed Central

    Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr

    2009-01-01

    Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach. PMID:20160994

  5. Regionalization of response routine parameters

    NASA Astrophysics Data System (ADS)

    Tøfte, Lena S.; Sultan, Yisak A.

    2013-04-01

    When area-distributed hydrological models are to be calibrated or updated, having fewer calibration parameters is a considerable advantage. Building on the work of Kirchner, among others, we have developed a simple non-threshold response model for drainage in natural catchments, to be used in the gridded hydrological model ENKI. The new response model takes only the hydrograph into account; it has one state and two parameters, and is adapted to catchments dominated by terrain drainage. The method is based on the assumption that in catchments where precipitation, evaporation and snowmelt are negligible, discharge is entirely determined by the amount of stored water. The catchment can then be characterized as a simple first-order nonlinear dynamical system, whose governing equations can be found directly from measured streamflow fluctuations. This means that the catchment response can be modelled from hydrograph data alone, by leaving out all periods with rain, snowmelt or evaporation and fitting the remaining series to a two- or three-parameter equation. A large number of discharge series from catchments in different regions of Norway are analyzed, and parameters are found for all the series. By combining the computed parameters with known catchment characteristics, we attempt to regionalize the parameters, so that the response-routine parameters can easily be found for ungauged catchments as well, from maps or databases.
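
The recession-analysis idea described above, inferring a storage-discharge relation directly from rainless-period hydrograph data, can be sketched in a few lines. The power-law form, rate constants and synthetic series below are invented for illustration and are not the actual ENKI routine: a recession is generated from a hypothetical two-parameter law dQ/dt = -k·Q^b, and the parameters are then recovered by regressing log(-dQ/dt) on log(Q).

```python
import math

# Synthetic rainless-period recession from a hypothetical storage-discharge law:
#   Q[i+1] = Q[i] - k * Q[i]**b * dt   (explicit Euler, invented parameters)
k_true, b_true, dt = 0.05, 1.5, 1.0
Q = [10.0]
for _ in range(200):
    Q.append(Q[-1] - k_true * Q[-1] ** b_true * dt)

# Recover the parameters: linear regression of log(-dQ/dt) on log(Q)
xs = [math.log(Q[i]) for i in range(len(Q) - 1)]
ys = [math.log((Q[i] - Q[i + 1]) / dt) for i in range(len(Q) - 1)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k_hat = math.exp(my - b_hat * mx)
print("b =", round(b_hat, 3), " k =", round(k_hat, 4))
```

Because the synthetic series is generated by the same discrete law, the regression recovers k and b essentially exactly; with real hydrograph data the same fit yields the two response-routine parameters per catchment.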

  6. Monte Carlo simulations of lattice models for single polymer systems

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-10-01

    Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ≈ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains with stiffness controlled by a bending potential. The persistence lengths of the chains, extracted from the orientational correlations, are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed, and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
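
As a minimal illustration of the excluded-volume-free baseline mentioned in this record, the sketch below simulates plain random walks on the simple cubic lattice and checks the exact ideal-chain result ⟨R²⟩ = N. It does not implement PERM, self-avoidance, or the bond fluctuation model; the chain length and trial count are arbitrary choices.

```python
import random

def random_walk_r2(n_steps, rng):
    """End-to-end squared distance of a simple-cubic-lattice random walk."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    x = y = z = 0
    for _ in range(n_steps):
        dx, dy, dz = rng.choice(steps)   # each of the 6 lattice directions equally likely
        x += dx; y += dy; z += dz
    return x * x + y * y + z * z

rng = random.Random(42)
N, trials = 100, 4000
mean_r2 = sum(random_walk_r2(N, rng) for _ in range(trials)) / trials
print("mean R^2 / N =", round(mean_r2 / N, 3))   # ideal chain: exactly 1 in expectation
```

For a self-avoiding walk the same estimator instead scales as ⟨R²⟩ ~ N^(2ν) with ν ≈ 0.588, which is the kind of scaling-law check the record describes at much larger N.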

  7. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning

    PubMed Central

    Konovalov, Arkady; Krajbich, Ian

    2016-01-01

    Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383

  8. Nuclear Quadrupole Moments and Nuclear Shell Structure

    DOE R&D Accomplishments Database

    Townes, C. H.; Foley, H. M.; Low, W.

    1950-06-23

    Describes a simple model, based on nuclear shell considerations, which leads to the proper behavior of known nuclear quadrupole moments, although predictions of the magnitudes of some quadrupole moments are seriously in error.

  9. Simple potential model for interaction of dark particles with massive bodies

    NASA Astrophysics Data System (ADS)

    Takibayev, Nurgali

    2018-01-01

    A simple model for the interaction of dark particles with matter, based on resonance behavior in a three-body system, is proposed. The model describes a resonant amplification of the effective interaction between two massive bodies at large distances between them. The phenomenon is explained by the catalytic action of dark particles rescattering on a system of two heavy bodies, understood here as large stellar objects. Resonant amplification of the effective interaction between the two heavy bodies imitates an increase in their mass while their true gravitational mass remains unchanged. Such increased interaction leads to more pronounced gravitational lensing of passing light. It is shown that the effective interaction between the heavy bodies changes at larger distances and can turn repulsive.

  10. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  11. Explaining Moral Behavior.

    PubMed

    Osman, Magda; Wiegmann, Alex

    2017-03-01

    In this review we make a simple theoretical argument: for theory development, computational modeling, and general frameworks for understanding moral psychology, researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models in moral psychology, which tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument we show that a simple value-based decision model can capture a range of core moral behaviors. Crucially, we propose that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.

  12. Modeling Impact of Urbanization in US Cities Using Simple Biosphere Model SiB2

    NASA Technical Reports Server (NTRS)

    Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert

    2016-01-01

    We combine Landsat- and the Moderate Resolution Imaging Spectroradiometer (MODIS)-based products, as well as climate drivers from Phase 2 of the North American Land Data Assimilation System (NLDAS-2), in a Simple Biosphere land surface model (SiB2) to assess the impact of urbanization in the continental USA (excluding Alaska and Hawaii). More than 300 cities and their surrounding suburban and rural areas are defined in this study to characterize the impact of urbanization on surface climate, including the surface energy, carbon budget, and water balance. These analyses reveal an uneven impact of urbanization across the continent that should inform policy options for improving urban growth, including heat mitigation and energy use, carbon sequestration, and flood prevention.

  13. A simple analytical model for dynamics of time-varying target leverage ratios

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Hui, C. H.

    2012-03-01

    In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Itô stochastic differential equations governing the leverage ratios of an ensemble of firms, via the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.

  14. Language Management in the Czech Republic

    ERIC Educational Resources Information Center

    Neustupny, J. V.; Nekvapil, Jiri

    2003-01-01

    This monograph, based on the Language Management model, provides information on both the "simple" (discourse-based) and "organised" modes of attention to language problems in the Czech Republic. This includes but is not limited to the language policy of the State. This approach does not satisfy itself with discussing problems…

  15. Simultaneous pre-concentration and separation on simple paper-based analytical device for protein analysis.

    PubMed

    Niu, Ji-Cheng; Zhou, Ting; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Yang, Fu-Quan; Wu, Zhi-Yong

    2018-02-01

    In this work, fast isoelectric focusing (IEF) was successfully implemented on an open paper fluidic channel for simultaneous concentration and separation of proteins from a complex matrix. With this simple device, IEF can be finished in 10 min with a resolution of 0.03 pH units and a concentration factor of 10, as estimated with colored model proteins by smartphone-based colorimetric detection. Fast detection of albumin from human serum and glycated hemoglobin (HbA1c) from blood cells was demonstrated. In addition, off-line identification of the model proteins from the IEF fractions with matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was also shown. This PAD-based IEF is potentially useful for point-of-care testing (POCT) or biomarker analysis as a cost-effective sample pretreatment method.

  16. Simple protocols for oblivious transfer and secure identification in the noisy-quantum-storage model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaffner, Christian

    2010-09-15

    We present simple protocols for oblivious transfer and password-based identification which are secure against general attacks in the noisy-quantum-storage model as defined in R. Koenig, S. Wehner, and J. Wullschleger [e-print arXiv:0906.1030]. We argue that a technical tool from Koenig et al. suffices to prove the security of the known protocols. Whereas the more involved protocol for oblivious transfer from Koenig et al. requires less noise in storage to achieve security, our "canonical" protocols have the advantage of being simpler to implement, and their security error is easier to control. Therefore, our protocols yield higher OT rates for many realistic noise parameters. Furthermore, a proof of security of a direct protocol for password-based identification against general noisy-quantum-storage attacks is given.

  17. Deciphering mRNA Sequence Determinants of Protein Production Rate

    NASA Astrophysics Data System (ADS)

    Szavits-Nossan, Juraj; Ciandrini, Luca; Romano, M. Carmen

    2018-03-01

    One of the greatest challenges in biophysical models of translation is to identify coding sequence features that affect the rate of translation and therefore the overall protein production in the cell. We propose an analytic method to solve a translation model based on the inhomogeneous totally asymmetric simple exclusion process, which allows us to unveil simple design principles of nucleotide sequences determining protein production rates. Our solution shows an excellent agreement when compared to numerical genome-wide simulations of S. cerevisiae transcript sequences and predicts that the first 10 codons, which is the ribosome footprint length on the mRNA, together with the value of the initiation rate, are the main determinants of protein production rate under physiological conditions. Finally, we interpret the obtained analytic results based on the evolutionary role of the codons' choice for regulating translation rates and ribosome densities.
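
The underlying lattice model, the totally asymmetric simple exclusion process (TASEP) with open boundaries, is easy to simulate directly. The sketch below is a deliberately reduced toy: the hop rate is homogeneous rather than codon-dependent, ribosomes are point particles rather than 10-site footprints, and all rates are invented. It measures the protein production rate as the exit flux and shows that, in the initiation-limited regime the record describes, that flux is controlled by the initiation rate.

```python
import random

def tasep_current(L, alpha, beta, hop, steps, rng):
    """Random-sequential Monte Carlo for an open-boundary TASEP.
    alpha: initiation rate, beta: termination rate, hop: internal hop rate."""
    lattice = [0] * L
    exits = 0
    for _ in range(steps):
        i = rng.randrange(-1, L)                      # pick a random move attempt
        if i == -1:                                   # initiation at site 0
            if not lattice[0] and rng.random() < alpha:
                lattice[0] = 1
        elif i == L - 1:                              # termination at the last site
            if lattice[i] and rng.random() < beta:
                lattice[i] = 0
                exits += 1
        elif lattice[i] and not lattice[i + 1] and rng.random() < hop:
            lattice[i], lattice[i + 1] = 0, 1         # forward hop (exclusion)
    return exits / (steps / (L + 1))                  # exit flux per sweep

rng = random.Random(1)
j_fast = tasep_current(L=60, alpha=0.10, beta=1.0, hop=1.0, steps=300000, rng=rng)
j_slow = tasep_current(L=60, alpha=0.02, beta=1.0, hop=1.0, steps=300000, rng=rng)
print("flux at alpha=0.10:", round(j_fast, 3), " flux at alpha=0.02:", round(j_slow, 3))
```

Halving or quartering the initiation rate lowers the steady-state flux accordingly, mirroring the paper's conclusion that the initiation rate (together with the first codons) is the main determinant of protein production rate.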

  18. A polarizable dipole-dipole interaction model for evaluation of the interaction energies for N-H···O=C and C-H···O=C hydrogen-bonded complexes.

    PubMed

    Li, Shu-Shi; Huang, Cui-Ying; Hao, Jiao-Jiao; Wang, Chang-Sheng

    2014-03-05

    In this article, a polarizable dipole-dipole interaction model is established to estimate the equilibrium hydrogen bond distances and the interaction energies for hydrogen-bonded complexes containing peptide amides and nucleic acid bases. We regard the chemical bonds N-H, C=O, and C-H as bond dipoles. The magnitude of the bond dipole moment varies according to its environment. We apply this polarizable dipole-dipole interaction model to a series of hydrogen-bonded complexes containing the N-H···O=C and C-H···O=C hydrogen bonds, such as simple amide-amide dimers, base-base dimers, peptide-base dimers, and β-sheet models. We find that a simple two-term function, only containing the permanent dipole-dipole interactions and the van der Waals interactions, can produce the equilibrium hydrogen bond distances compared favorably with those produced by the MP2/6-31G(d) method, whereas the high-quality counterpoise-corrected (CP-corrected) MP2/aug-cc-pVTZ interaction energies for the hydrogen-bonded complexes can be well-reproduced by a four-term function which involves the permanent dipole-dipole interactions, the van der Waals interactions, the polarization contributions, and a corrected term. Based on the calculation results obtained from this polarizable dipole-dipole interaction model, the natures of the hydrogen bonding interactions in these hydrogen-bonded complexes are further discussed. Copyright © 2013 Wiley Periodicals, Inc.
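
The permanent dipole-dipole term of such a model can be written down directly. The sketch below evaluates the standard point-dipole interaction energy in reduced units (1/(4πε₀) = 1); the dipole magnitudes and geometry are placeholders rather than the paper's fitted bond-dipole parameters, and the van der Waals, polarization, and correction terms of the four-term function are omitted.

```python
import math

def dipole_dipole_energy(m1, m2, r_vec):
    """Point-dipole interaction energy in units where 1/(4*pi*eps0) = 1:
    E = (m1.m2 - 3 (m1.rhat)(m2.rhat)) / r**3"""
    r = math.sqrt(sum(c * c for c in r_vec))
    rhat = [c / r for c in r_vec]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(m1, m2) - 3.0 * dot(m1, rhat) * dot(m2, rhat)) / r ** 3

# Head-to-tail alignment (schematically, an N-H...O=C arrangement) is attractive,
# side-by-side parallel alignment is repulsive:
print(dipole_dipole_energy([1, 0, 0], [1, 0, 0], [2, 0, 0]))   # collinear, attractive
print(dipole_dipole_energy([1, 0, 0], [1, 0, 0], [0, 2, 0]))   # side-by-side, repulsive
```

In the paper's polarizable version the bond-dipole magnitudes themselves vary with the environment; here they are fixed inputs.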

  19. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
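
The reward structure described here, a semi-Markov process whose states carry service- and error-based reward rates, can be sketched generically. The transition matrix, holding times and reward rates below are hypothetical, not the measured multiprocessor data; note that non-exponential holding times enter this steady-state calculation only through their means.

```python
def performability(P, mean_hold, reward, iters=2000):
    """Steady-state expected reward rate of a semi-Markov process.
    P: embedded-chain transition matrix; mean_hold[i]: mean holding time in
    state i (need not be exponential); reward[i]: reward rate in state i."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):                             # power iteration on the embedded chain
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    w = [pi[i] * mean_hold[i] for i in range(n)]       # time-stationary weights
    total = sum(w)
    return sum(w[i] / total * reward[i] for i in range(n))

# Hypothetical 3-state system: normal operation, degraded, error recovery
P = [[0.0, 0.7, 0.3],
     [0.8, 0.0, 0.2],
     [1.0, 0.0, 0.0]]
hold = [10.0, 2.0, 0.5]        # mean holding times (arbitrary units)
rate = [1.0, 0.5, 0.0]         # service-rate-based reward while in each state
print("expected reward rate:", round(performability(P, hold, rate), 3))
```

The long-run reward rate weights each state's reward by the fraction of time spent there, so a short but frequent error state can still dominate the performability cost.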

  20. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1987-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.

  1. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  2. V1 orientation plasticity is explained by broadly tuned feedforward inputs and intracortical sharpening.

    PubMed

    Teich, Andrew F; Qian, Ning

    2010-03-01

    Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that the plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.

  3. Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.

    PubMed

    Stierstorfer, Karl

    2018-01-01

    The aim is to find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low-flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities for types of events, such as a single count in the central pixel, a double count in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived from a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material, and a simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact, which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low-flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
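
A drastically reduced, one-dimensional stand-in for the charge-cloud stage can illustrate how such event-type probabilities arise. All parameters below (pixel pitch, cloud width, threshold) are invented, and the real model is two-dimensional and driven by MOCASSIM energy depositions; here a Gaussian cloud of fixed width is split between the central pixel and its right neighbor, and a count is registered wherever the collected fraction exceeds the threshold.

```python
import math
import random

def event_probabilities(pixel=1.0, sigma=0.2, threshold=0.3, n=20000, seed=7):
    """Toy 1-D charge-sharing Monte Carlo: a photon lands uniformly inside the
    central pixel, deposits a Gaussian charge cloud of fixed width sigma, and the
    charge fractions in the central pixel and its right neighbor decide whether
    the event registers as a single or a double count."""
    rng = random.Random(seed)

    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    single = double = 0
    for _ in range(n):
        x = rng.uniform(0.0, pixel)                       # impact position
        frac_center = phi((pixel - x) / sigma) - phi(-x / sigma)
        frac_right = 1.0 - phi((pixel - x) / sigma)       # spill over the boundary
        hits = (frac_center > threshold) + (frac_right > threshold)
        if hits == 2:
            double += 1
        elif hits == 1:
            single += 1
    return single / n, double / n

p_single, p_double = event_probabilities()
print("P(single) =", round(p_single, 3), " P(double) =", round(p_double, 3))
```

Doubles occur only when the photon lands close enough to the pixel boundary for the spilled fraction to cross the threshold, which is why the double-count probability grows with cloud width and shrinks with threshold.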

  4. Implementing a GPU-based numerical algorithm for modelling dynamics of a high-speed train

    NASA Astrophysics Data System (ADS)

    Sytov, E. S.; Bratus, A. S.; Yurchenko, D.

    2018-04-01

    This paper discusses the initiative of implementing a GPU-based numerical algorithm for studying various phenomena associated with dynamics of a high-speed railway transport. The proposed numerical algorithm for calculating a critical speed of the bogie is based on the first Lyapunov number. Numerical algorithm is validated by analytical results, derived for a simple model. A dynamic model of a carriage connected to a new dual-wheelset flexible bogie is studied for linear and dry friction damping. Numerical results obtained by CPU, MPU and GPU approaches are compared and appropriateness of these methods is discussed.

  5. Coupled electromagnetic-thermodynamic simulations of microwave heating problems using the FDTD algorithm.

    PubMed

    Kopyt, Paweł; Celuch, Małgorzata

    2007-01-01

    A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical in microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver, while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against the original experimental data as well as the measurement results available in the literature.

  6. Electric Conduction in Semiconductors: A Pedagogical Model Based on the Monte Carlo Method

    ERIC Educational Resources Information Center

    Capizzo, M. C.; Sperandeo-Mineo, R. M.; Zarcone, M.

    2008-01-01

    We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier…

  7. Development of a Landforms Model for Puerto Rico and its Application for Land Cover Change Analysis

    Treesearch

    Sebastian Martinuzzi; William A. Gould; Olga M. Ramos Gonzalez; Brook E. Edwards

    2007-01-01

    Comprehensive analysis of land morphology is essential to supporting a wide range of environmental studies. We developed a landforms model that identifies eleven landform units for Puerto Rico based on parameters of land position and slope. The model is capable of extracting operational information in a simple way and is adaptable to different environments and objectives...

  8. Modeling of breath methane concentration profiles during exercise on an ergometer*

    PubMed Central

    Szabó, Anna; Unterkofler, Karl; Mochalski, Pawel; Jandacka, Martin; Ruzsanyi, Vera; Szabó, Gábor; Mohácsi, Árpád; Teschl, Susanne; Teschl, Gerald; King, Julian

    2016-01-01

    We develop a simple three compartment model based on mass balance equations which quantitatively describes the dynamics of breath methane concentration profiles during exercise on an ergometer. With the help of this model it is possible to estimate the endogenous production rate of methane in the large intestine by measuring breath gas concentrations of methane. PMID:26828421

  9. BehavePlus fire modeling system, version 5.0: Design and Features

    Treesearch

    Faith Ann Heinsch; Patricia L. Andrews

    2010-01-01

    The BehavePlus fire modeling system is a computer program that is based on mathematical models that describe wildland fire behavior and effects and the fire environment. It is a flexible system that produces tables, graphs, and simple diagrams. It can be used for a host of fire management applications, including projecting the behavior of an ongoing fire, planning...

  10. Calculating the surface tension of binary solutions of simple fluids of comparable size

    NASA Astrophysics Data System (ADS)

    Zaitseva, E. S.; Tovbin, Yu. K.

    2017-11-01

    A molecular theory based on the lattice gas model (LGM) is used to calculate the surface tension of one- and two-component planar vapor-liquid interfaces of simple fluids. Interaction between nearest neighbors is considered in the calculations. LGM is applied as a tool of interpolation: the parameters of the model are corrected using experimental surface tension data. It is found that the average accuracy of describing the surface tension of pure substances (Ar, N2, O2, CH4) and their mixtures (Ar-O2, Ar-N2, Ar-CH4, N2-CH4) does not exceed 2%.

  11. A simple hydrodynamic model of tornado-like vortices

    NASA Astrophysics Data System (ADS)

    Kurgansky, M. V.

    2015-05-01

    Based on similarity arguments, a simple fluid dynamic model of tornado-like vortices is offered that, accounting for "vortex breakdown" at a certain height above the ground, relates the maximal azimuthal velocity in the vortex, reached near the ground surface, to the convective available potential energy (CAPE) stored in the environmental atmosphere under pre-tornado conditions. The relative proportion of the helicity (kinetic energy) destruction (dissipation) in the "vortex breakdown" zone and, accordingly, within the surface boundary layer beneath the vortex is evaluated. These considerations form the basis of a dynamic-statistical analysis of the relationship between tornado intensity and the CAPE budget in the surrounding atmosphere.
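
The CAPE-velocity relation underlying such analyses starts from the standard parcel-theory bound v = √(2·CAPE). The sketch below evaluates only that textbook scaling; the paper's model then modifies it with vortex-breakdown and boundary-layer dissipation factors, which are not reproduced here, and the CAPE values are arbitrary examples.

```python
import math

def thermodynamic_speed_limit(cape_j_per_kg):
    """Parcel-theory upper bound on vertical (and, by the model's scaling
    arguments, azimuthal) velocity: v = sqrt(2 * CAPE), CAPE in J/kg, v in m/s."""
    return math.sqrt(2.0 * cape_j_per_kg)

for cape in (500.0, 2000.0, 4500.0):   # weakly to strongly unstable environments
    print("CAPE =", cape, "J/kg  ->  v_max =", round(thermodynamic_speed_limit(cape), 1), "m/s")
```

Because v grows only as the square root of CAPE, even a ninefold increase in stored energy (500 to 4500 J/kg) triples rather than nine-folds the velocity bound.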

  12. Cosmic microwave background radiation anisotropies in brane worlds.

    PubMed

    Koyama, Kazuya

    2003-11-28

    We propose a new formulation to calculate the cosmic microwave background (CMB) spectrum in the Randall-Sundrum two-brane model based on recent progress in solving the bulk geometry using a low energy approximation. The evolution of the anisotropic stress imprinted on the brane by the 5D Weyl tensor is calculated. An impact of the dark radiation perturbation on the CMB spectrum is investigated in a simple model assuming an initially scale-invariant adiabatic perturbation. The dark radiation perturbation induces isocurvature perturbations, but the resultant spectrum can be quite different from the prediction of simple mixtures of adiabatic and isocurvature perturbations due to Weyl anisotropic stress.

  13. Fitting Neuron Models to Spike Trains

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925

  14. State-and-transition prototype model of riparian vegetation downstream of Glen Canyon Dam, Arizona

    USGS Publications Warehouse

    Ralston, Barbara E.; Starfield, Anthony M.; Black, Ronald S.; Van Lonkhuyzen, Robert A.

    2014-01-01

    Facing an altered riparian plant community dominated by nonnative species, resource managers are increasingly interested in understanding how to manage and promote healthy riparian habitats in which native species dominate. For regulated rivers, managing flows is one tool resource managers consider to achieve these goals. Among many factors that can influence riparian community composition, hydrology is a primary forcing variable. Frame-based models, used successfully in grassland systems, provide an opportunity for stakeholders concerned with riparian systems to evaluate potential riparian vegetation responses to alternative flows. Frame-based, state-and-transition models of riparian vegetation for reattachment bars, separation bars, and the channel margin found on the Colorado River downstream of Glen Canyon Dam were constructed using information from the literature. Frame-based models can be simple spreadsheet models (created in Microsoft® Excel) or developed further with programming languages (for example, C-sharp). The models described here include seven community states and five dam operations that cause transitions between states. Each model divides operations into growing (April–September) and non-growing seasons (October–March) and incorporates upper and lower bar models, using stage elevation as a division. The inputs (operations) can be used by stakeholders to evaluate flows that may promote dynamic riparian vegetation states, or identify those flow options that may promote less desirable states (for example, Tamarisk [Tamarix sp.] temporarily flooded shrubland). This prototype model, although simple, can still elicit discussion about operational options and vegetation response.
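The frame-based approach described above (community states plus operation-driven transitions, split by season) can be sketched as a simple lookup table. The states and transitions below are illustrative placeholders, not the seven states and five operations of the actual USGS model.

```python
# Minimal sketch of a frame-based state-and-transition model. Keys are
# (current_state, dam_operation, season); values are the next state.
# All names here are hypothetical examples.
TRANSITIONS = {
    ("barren", "sustained_high_flow", "growing"): "marsh",
    ("marsh", "sustained_low_flow", "growing"): "tamarisk_shrubland",
    ("tamarisk_shrubland", "flood_pulse", "non_growing"): "barren",
}

def step(state, operation, season):
    """Advance one time step; unmatched combinations leave the state unchanged."""
    return TRANSITIONS.get((state, operation, season), state)

def run(initial, schedule):
    """Apply a sequence of (operation, season) inputs and record the trajectory."""
    states = [initial]
    for op, season in schedule:
        states.append(step(states[-1], op, season))
    return states

print(run("barren", [("sustained_high_flow", "growing"),
                     ("sustained_low_flow", "growing")]))
```

Stakeholder flow alternatives then become different `schedule` inputs whose trajectories can be compared.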

  15. Performance Considerations for the SIMPL Single Photon, Polarimetric, Two-Color Laser Altimeter as Applied to Measurements of Forest Canopy Structure and Composition

    NASA Technical Reports Server (NTRS)

    Dabney, Philip W.; Harding, David J.; Valett, Susan R.; Vasilyev, Aleksey A.; Yu, Anthony W.

    2012-01-01

The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is a multi-beam, micropulse airborne laser altimeter that acquires active and passive polarimetric optical remote sensing measurements at visible and near-infrared wavelengths. SIMPL was developed to demonstrate advanced measurement approaches of potential benefit for improved, more efficient spaceflight laser altimeter missions. SIMPL data have been acquired for a wide diversity of forest types in the summers of 2010 and 2011 in order to assess the potential of its novel capabilities for characterization of vegetation structure and composition. On each of its four beams SIMPL provides highly resolved measurements of forest canopy structure by detecting single photons with 15 cm ranging precision using a narrow-beam system operating at a laser repetition rate of 11 kHz. Associated with that ranging data, SIMPL provides eight amplitude parameters per beam, unlike the single amplitude provided by typical laser altimeters. Those eight parameters are the received energy parallel and perpendicular to the plane-polarized transmit pulse at 532 nm (green) and 1064 nm (near IR), for both the active laser backscatter retro-reflectance and the passive solar bi-directional reflectance. This poster presentation will cover the instrument architecture and highlight the performance of the SIMPL instrument with examples taken from measurements for several sites with distinct canopy structures and compositions. Specific performance areas such as probability of detection, afterpulsing, and dead time will be highlighted and addressed, along with examples of their impact on the measurements and how they limit the ability to accurately model and recover the canopy properties. To assess the sensitivity of SIMPL's measurements to canopy properties an instrument model has been implemented in the FLIGHT radiative transfer code, based on Monte Carlo simulation of photon transport. 
SIMPL data collected in 2010 over the Smithsonian Environmental Research Center, MD are currently being modelled and compared to other remote sensing and in situ data sets. Results on the adaptation of FLIGHT to model micropulse, single-photon ranging measurements are presented elsewhere at this conference. NASA's ICESat-2 spaceflight mission, scheduled for launch in 2016, will utilize a multi-beam, micropulse, single-photon ranging measurement approach (although non-polarimetric and only at 532 nm). Insights gained from the analysis and modelling of SIMPL data will help guide preparations for that mission, including development of calibration/validation plans and algorithms for the estimation of forest biophysical parameters.

  16. Modeling Methods

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
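The calibration idea in the last paragraph, adjusting recharge until simulated heads agree with measured heads and taking the best-fit value as the recharge estimate, can be sketched with a toy forward model. The linear head function below is a hypothetical stand-in for a real groundwater-flow simulation.

```python
def simulated_head(recharge, conductance=0.5, base_head=100.0):
    """Toy forward model: head rises linearly with recharge (illustrative only)."""
    return base_head + recharge / conductance

def calibrate_recharge(measured_head, candidates):
    """Pick the candidate recharge whose simulated head best fits the observation."""
    return min(candidates, key=lambda r: abs(simulated_head(r) - measured_head))

candidates = [i * 0.01 for i in range(101)]   # trial recharge rates, 0.00..1.00
best = calibrate_recharge(measured_head=101.2, candidates=candidates)
print(round(best, 2))
```

A real calibration would fit many heads at once and adjust several parameters jointly, but the best-fit selection step is the same in spirit.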

  17. Inference of mantle viscosity for depth resolutions of GIA observations

    NASA Astrophysics Data System (ADS)

    Nakada, Masao; Okuno, Jun'ichi

    2016-11-01

    Inference of the mantle viscosity from observations for glacial isostatic adjustment (GIA) process has usually been conducted through the analyses based on the simple three-layer viscosity model characterized by lithospheric thickness, upper- and lower-mantle viscosities. Here, we examine the viscosity structures for the simple three-layer viscosity model and also for the two-layer lower-mantle viscosity model defined by viscosities of η670,D (670-D km depth) and ηD,2891 (D-2891 km depth) with D-values of 1191, 1691 and 2191 km. The upper-mantle rheological parameters for the two-layer lower-mantle viscosity model are the same as those for the simple three-layer one. For the simple three-layer viscosity model, rate of change of degree-two zonal harmonics of geopotential due to GIA process (GIA-induced J˙2) of -(6.0-6.5) × 10-11 yr-1 provides two permissible viscosity solutions for the lower mantle, (7-20) × 1021 and (5-9) × 1022 Pa s, and the analyses with observational constraints of the J˙2 and Last Glacial Maximum (LGM) sea levels at Barbados and Bonaparte Gulf indicate (5-9) × 1022 Pa s for the lower mantle. However, the analyses for the J˙2 based on the two-layer lower-mantle viscosity model only require a viscosity layer higher than (5-10) × 1021 Pa s for a depth above the core-mantle boundary (CMB), in which the value of (5-10) × 1021 Pa s corresponds to the solution of (7-20) × 1021 Pa s for the simple three-layer one. Moreover, the analyses with the J˙2 and LGM sea level constraints for the two-layer lower-mantle viscosity model indicate two viscosity solutions: η670,1191 > 3 × 1021 and η1191,2891 ˜ (5-10) × 1022 Pa s, and η670,1691 > 1022 and η1691,2891 ˜ (5-10) × 1022 Pa s. The inferred upper-mantle viscosity for such solutions is (1-4) × 1020 Pa s similar to the estimate for the simple three-layer viscosity model. 
That is, these analyses require a high viscosity layer of (5-10) × 1022 Pa s at least in the deep mantle, and suggest that the GIA-based lower-mantle viscosity structure should be treated carefully in discussing the mantle dynamics related to the viscosity jump at ˜670 km depth. We also preliminarily put additional constraints on these viscosity solutions by examining typical relative sea level (RSL) changes used to infer the lower-mantle viscosity. The viscosity solution inferred from the far-field RSL changes in the Australian region is consistent with those for the J˙2 and LGM sea levels, and the analyses for RSL changes at Southport and Bermuda in the intermediate region for the North American ice sheets suggest the solution of η670,D > 1022, ηD,2891 ˜ (5-10) × 1022 Pa s (D = 1191 or 1691 km) and upper-mantle viscosity higher than 6 × 1020 Pa s.

  18. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
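The GLA idea, replacing a constant scaling factor with a linearly varying one matched to the refined model at a reference point, can be sketched as follows. The crude and refined "models" here are illustrative stand-ins, not the beam example from the paper.

```python
def gla(f_crude, f_refined, x0, h=1e-5):
    """Global-local approximation sketch: approximate f_refined by
    s(x) * f_crude(x), where s(x) = s0 + s1*(x - x0) matches the ratio
    of the two models and its (finite-difference) slope at x0."""
    s0 = f_refined(x0) / f_crude(x0)
    s1 = (f_refined(x0 + h) / f_crude(x0 + h) - s0) / h
    return lambda x: (s0 + s1 * (x - x0)) * f_crude(x)

crude = lambda x: x * x               # cheap model (hypothetical)
refined = lambda x: 1.1 * x * x + x   # "expensive" model (hypothetical)
approx = gla(crude, refined, x0=2.0)

# Exact at x0 by construction; close nearby, degrading further away.
print(abs(approx(2.0) - refined(2.0)) < 1e-6, round(approx(2.5), 3))
```

A constant scaling factor would use only `s0`; the linear term `s1` is what extends the range of usefulness of the correction.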

  19. A Computational Efficient Physics Based Methodology for Modeling Ceramic Matrix Composites (Preprint)

    DTIC Science & Technology

    2011-11-01

elastic range, and with some simple forms of progressing damage. However, a general physics-based methodology to assess the initial and lifetime... damage evolution in the RVE for all possible load histories. Microstructural data on initial configuration and damage progression in CMCs were...the damaged elements will have changed, hence, a progressive damage model. The crack opening for each crack type in each element is stored as a

  20. Simulated crop yield in response to changes in climate and agricultural practices: results from a simple process based model

    NASA Astrophysics Data System (ADS)

    Caldararu, S.; Smith, M. J.; Purves, D.; Emmott, S.

    2013-12-01

    Global agriculture will, in the future, be faced with two main challenges: climate change and an increase in global food demand driven by an increase in population and changes in consumption habits. To be able to predict both the impacts of changes in climate on crop yields and the changes in agricultural practices necessary to respond to such impacts we currently need to improve our understanding of crop responses to climate and the predictive capability of our models. Ideally, what we would have at our disposal is a modelling tool which, given certain climatic conditions and agricultural practices, can predict the growth pattern and final yield of any of the major crops across the globe. We present a simple, process-based crop growth model based on the assumption that plants allocate above- and below-ground biomass to maintain overall carbon optimality and that, to maintain this optimality, the reproductive stage begins at peak nitrogen uptake. The model includes responses to available light, water, temperature and carbon dioxide concentration as well as nitrogen fertilisation and irrigation. The model is data constrained at two sites, the Yaqui Valley, Mexico for wheat and the Southern Great Plains flux site for maize and soybean, using a robust combination of space-based vegetation data (including data from the MODIS and Landsat TM and ETM+ instruments), as well as ground-based biomass and yield measurements. We show a number of climate response scenarios, including increases in temperature and carbon dioxide concentrations as well as responses to irrigation and fertiliser application.

  1. An Experimental Realization of a Chaos-Based Secure Communication Using Arduino Microcontrollers.

    PubMed

    Zapateiro De la Hoz, Mauricio; Acho, Leonardo; Vidal, Yolanda

    2015-01-01

Security and secrecy are among the important concerns in the communications world. In recent years, several encryption techniques have been proposed in order to improve the secrecy of transmitted information. Chaos-based encryption techniques are widely studied for this purpose because of the highly unpredictable and random-looking nature of chaotic signals. In this paper we propose a digital communication system that uses the logistic map, a mathematically simple model that is chaotic under certain conditions. The input message signal is modulated using a simple Delta modulator and encrypted using a logistic map. The key signal is also encrypted using the same logistic map with different initial conditions. On the receiver side, the binary-coded message is decrypted using the encrypted key signal that is sent through one of the communication channels. The proposed scheme is experimentally tested using Arduino shields, which are simple yet powerful development kits that allow the implementation of the communication system for testing purposes.
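The logistic-map encryption described above can be sketched as a toy bit-stream cipher: iterate the map, quantise the state to key bits, and XOR with the message. The parameter values and the 1-bit quantisation rule below are illustrative assumptions, not the paper's exact design.

```python
def logistic_keystream(x0, n, r=3.99):
    """Generate n key bits from the logistic map x_{k+1} = r*x*(1-x),
    which is chaotic for r near 4. The threshold quantiser is a crude
    illustrative choice."""
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def xor_bits(message_bits, key_bits):
    """Stream-cipher core: XOR each message bit with a key bit."""
    return [m ^ k for m, k in zip(message_bits, key_bits)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
key = logistic_keystream(x0=0.123456, n=len(msg))
cipher = xor_bits(msg, key)
# Decryption with the same initial condition regenerates the keystream
# and recovers the message exactly.
print(xor_bits(cipher, logistic_keystream(0.123456, len(msg))) == msg)  # True
```

Sensitivity to the initial condition `x0` is what plays the role of the secret key.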

  2. Segmentation by fusion of histogram-based k-means clusters in different color spaces.

    PubMed

    Mignotte, Max

    2008-05-01

This paper presents a new, simple, and efficient segmentation approach based on a fusion procedure which aims at combining several segmentation maps, associated with simpler partition models, in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same simple (K-means-based) clustering technique applied to an input image expressed in different color spaces. Our fusion strategy combines these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all these initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied to the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.

  3. A simple antireflection overcoat for opaque coatings in the submillimeter region

    NASA Technical Reports Server (NTRS)

    Smith, S. M.

    1986-01-01

    An antireflection overcoat for opaque baffle coatings in the far infrared (FIR)/submillimeter region was made from a simple Teflon spray-on lubricant. The Teflon overcoat reduced the specular reflectance of four different opaque coatings by nearly a factor of two. Analysis, based on the interference term of a reflecting-layer model, indicates that in the submillimeter region the reduced reflectance depends primarily on the refractive index of the overcoat and very little on its thickness.

  4. An efficient current-based logic cell model for crosstalk delay analysis

    NASA Astrophysics Data System (ADS)

    Nazarian, Shahin; Das, Debasish

    2013-04-01

Logic cell modelling is an important component in the analysis and design of CMOS integrated circuits, mostly due to the nonlinear behaviour of CMOS cells with respect to the voltage signals at their input and output pins. A current-based model for CMOS logic cells is presented, which can be used for effective crosstalk noise and delta delay analysis in CMOS VLSI circuits. Existing current source models are expensive and require a new set of Spice-based characterisations, which is not compatible with typical EDA tools. In this article we present Imodel, a simple nonlinear logic cell model that can be derived from typical cell libraries such as NLDM, with accuracy much higher than NLDM-based cell delay models. In fact, our experiments show an average error of 3% compared to Spice. This level of accuracy comes with a maximum runtime penalty of 19% compared to NLDM-based cell delay models on medium-sized industrial designs.

  5. Transformation through Expeditionary Change Using Online Learning and Competence-Building Technologies

    ERIC Educational Resources Information Center

    Norris, Donald M.; Lefrere, Paul

    2011-01-01

    This paper presents a patterns-based model of the evolution of learning and competence-building technologies, grounded in examples of current practice. The model imagines five simple stages in how institutions use "expeditionary change" to innovate more nimbly. It builds upon three assertions. First, the pervasiveness of web-based…

  6. Constrained range expansion and climate change assessments

    Treesearch

    Yohay Carmel; Curtis H. Flather

    2006-01-01

    Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...

  7. A Cash Management Model.

    ERIC Educational Resources Information Center

    Boyles, William W.

    1975-01-01

In 1973, Ronald G. Lykins presented a model for cash management and analysed its benefits for Ohio University. This paper attempts to expand on that method by answering questions raised by the Lykins method through a series of simple algebraic formulas. Both methods are based on two premises: (1) all cash over which the business…

  8. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    DTIC Science & Technology

    2015-01-05

rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis...legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes

  9. Analytical and multibody modeling for the power analysis of standing jumps.

    PubMed

    Palmieri, G; Callegari, M; Fioretti, S

    2015-01-01

Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity at three specified instants of the jump. The second method is based on a multibody model that simulates the jumps by processing the data obtained by a three-dimensional (3D) motion capture system and the dynamometric measurements obtained by the force platforms. The multibody model is developed with OpenSim, an open-source software package which provides tools for the kinematic and dynamic analyses of 3D human body models. The study is focused on two of the typical tests used to evaluate the muscular activity of the lower limbs: the counter-movement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of the total mechanical work and the mean power exerted in standing jumps.
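The analytical method's inputs (centre-of-gravity position at a few instants) map onto elementary work-energy relations. The sketch below is basic mechanics under stated assumptions (vertical motion only, ballistic flight after take-off), not the paper's exact formulation; all numbers are hypothetical.

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_work_power(mass, cog_start, cog_takeoff, cog_apex, push_time):
    """Estimate mechanical work and mean power of a vertical jump from
    CoG heights (metres) at push-off start, take-off, and apex, plus the
    push-off duration (seconds)."""
    # Take-off speed from the ballistic rise after take-off: v^2 = 2*g*h.
    rise = cog_apex - cog_takeoff
    v_takeoff = (2.0 * G * rise) ** 0.5
    # Work during push-off = potential + kinetic energy gained.
    work = mass * G * (cog_takeoff - cog_start) + 0.5 * mass * v_takeoff ** 2
    return work, work / push_time

work, mean_power = jump_work_power(mass=70.0, cog_start=0.9,
                                   cog_takeoff=1.2, cog_apex=1.5,
                                   push_time=0.3)
print(round(work, 1), round(mean_power, 1))
```

A standing long jump would add a horizontal kinetic-energy term, but the accounting is the same.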

  10. A Simple Model for the Cloud Adjacency Effect and the Apparent Bluing of Aerosols Near Clouds

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Wen, Guoyong; Coakley, James A., Jr.; Remer, Lorraine A.; Loeb,Norman G.; Cahalan, Robert F.

    2008-01-01

    In determining aerosol-cloud interactions, the properties of aerosols must be characterized in the vicinity of clouds. Numerous studies based on satellite observations have reported that aerosol optical depths increase with increasing cloud cover. Part of the increase comes from the humidification and consequent growth of aerosol particles in the moist cloud environment, but part comes from 3D cloud-radiative transfer effects on the retrieved aerosol properties. Often, discerning whether the observed increases in aerosol optical depths are artifacts or real proves difficult. The paper provides a simple model that quantifies the enhanced illumination of cloud-free columns in the vicinity of clouds that are used in the aerosol retrievals. This model is based on the assumption that the enhancement in the cloud-free column radiance comes from enhanced Rayleigh scattering that results from the presence of the nearby clouds. The enhancement in Rayleigh scattering is estimated using a stochastic cloud model to obtain the radiative flux reflected by broken clouds and comparing this flux with that obtained with the molecules in the atmosphere causing extinction, but no scattering.

  11. Prediction and measurements of vibrations from a railway track lying on a peaty ground

    NASA Astrophysics Data System (ADS)

    Picoux, B.; Rotinat, R.; Regoin, J. P.; Le Houédec, D.

    2003-10-01

This paper introduces a two-dimensional model for the response of the ground surface to vibrations generated by railway traffic. A semi-analytical wave propagation model is introduced which is subjected to a set of harmonic moving loads and based on a calculation method for the dynamic stiffness matrix of the ground. In order to model a complete railway system, the effect of a simple track model is taken into account, including rails, sleepers and ballast, especially designed for the study of low vibration frequencies. Priority has been given to a simple formulation based on the principle of spatial Fourier transforms, compatible with good numerical efficiency while providing quick solutions. In addition, in situ measurements for a soft soil near a railway track were carried out and will be used to validate the numerical implementation. The numerical and experimental results constitute a significant body of useful data to, on the one hand, characterize the response of the track environment and, on the other hand, assess the influence of train speed and weight on the behaviour of the structure.

  12. Recognizing simple polyhedron from a perspective drawing

    NASA Astrophysics Data System (ADS)

    Zhang, Guimei; Chu, Jun; Miao, Jun

    2009-10-01

Existing methods cannot recognize simple polyhedra. In this paper, three problems are investigated. First, a method for recognizing triangles and quadrilaterals is introduced based on geometry and angle constraints. Then an Attribute Relation Graph (ARG) is employed to describe the simple polyhedron and the line drawing. Last, a new method is presented to recognize a simple polyhedron from a line drawing. The method filters the candidate database before matching the line drawing to a model, thus greatly improving recognition efficiency. We introduce geometrical and topological characteristics to describe each node of the ARG, so the algorithm can not only recognize polyhedra of different shapes but also distinguish between polyhedra of the same shape but different sizes and proportions. Computer simulations preliminarily demonstrate the effectiveness of the method.

  13. Locating the quantum critical point of the Bose-Hubbard model through singularities of simple observables.

    PubMed

    Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub

    2016-12-02

    We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.

  14. A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.

    1994-01-01

A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. The model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous, but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
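The break-point behaviour described above (coefficients continuous across Mach break points, derivatives not) is characteristic of piecewise-linear tables. A minimal sketch with made-up coefficient values; only the interpolation scheme, not the numbers, reflects the abstract:

```python
import bisect

# Hypothetical lift-curve slope vs. Mach anchors (illustrative values).
MACH_BREAKS = [0.3, 1.0, 1.4, 4.0, 6.0]
CL_ALPHA =    [3.5, 4.8, 4.2, 2.5, 2.0]

def cl_alpha(mach):
    """Piecewise-linear interpolation, clamped outside the table.
    C0-continuous at the break points, but the slope jumps there."""
    if mach <= MACH_BREAKS[0]:
        return CL_ALPHA[0]
    if mach >= MACH_BREAKS[-1]:
        return CL_ALPHA[-1]
    i = bisect.bisect_right(MACH_BREAKS, mach) - 1
    t = (mach - MACH_BREAKS[i]) / (MACH_BREAKS[i + 1] - MACH_BREAKS[i])
    return CL_ALPHA[i] + t * (CL_ALPHA[i + 1] - CL_ALPHA[i])

# The coefficient is continuous across the Mach 1.4 break point.
print(abs(cl_alpha(1.4 - 1e-9) - cl_alpha(1.4 + 1e-9)) < 1e-6)  # True
```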

  15. Validation of a partial coherence interferometry method for estimating retinal shape

    PubMed Central

    Verkicharla, Pavan K.; Suheimat, Marwan; Pope, James M.; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L.; Atchison, David A.

    2015-01-01

To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shapes along the horizontal and vertical meridians, respectively. Stages 2 and 3 gave slightly different retinal co-ordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data. PMID:26417496

  16. Validation of a partial coherence interferometry method for estimating retinal shape.

    PubMed

    Verkicharla, Pavan K; Suheimat, Marwan; Pope, James M; Sepehrband, Farshid; Mathur, Ankit; Schmid, Katrina L; Atchison, David A

    2015-09-01

To validate a simple partial coherence interferometry (PCI) based retinal shape method, estimates of retinal shape were determined in 60 young adults using off-axis PCI, with three stages of modeling using variants of the Le Grand model eye, and magnetic resonance imaging (MRI). Stages 1 and 2 involved a basic model eye without and with surface ray deviation, respectively, and Stage 3 used a model with individual ocular biometry and ray deviation at surfaces. Considering the theoretical uncertainty of MRI (12-14%), the results of the study indicate good agreement between MRI and all three stages of PCI modeling, with <4% and <7% differences in retinal shapes along the horizontal and vertical meridians, respectively. Stages 2 and 3 gave slightly different retinal co-ordinates than Stage 1, and we recommend the intermediate Stage 2 as providing a simple and valid method of determining retinal shape from PCI data.

  17. Development and characterisation of a novel three-dimensional inter-kingdom wound biofilm model.

    PubMed

    Townsend, Eleanor M; Sherry, Leighann; Rajendran, Ranjith; Hansom, Donald; Butcher, John; Mackay, William G; Williams, Craig; Ramage, Gordon

    2016-11-01

Chronic diabetic foot ulcers are frequently colonised and infected by polymicrobial biofilms that ultimately prevent healing. This study aimed to create a novel in vitro inter-kingdom wound biofilm model on complex hydrogel-based cellulose substrata to test commonly used topical wound treatments. Inter-kingdom triadic biofilms composed of Candida albicans, Pseudomonas aeruginosa, and Staphylococcus aureus were shown to be quantitatively greater in this model compared to a simple substratum when assessed by conventional culture, a metabolic dye, and live/dead qPCR. These biofilms were both structurally complex and compositionally dynamic in response to topical therapy: when treated with either chlorhexidine or povidone iodine, principal component analysis revealed that the 3-D cellulose model was minimally impacted compared to the simple substratum model. This study highlights the importance of the biofilm substratum and the inclusion of relevant polymicrobial and inter-kingdom components, as these impact the penetration and efficacy of topical antiseptics.

  18. Estimating the Soil Temperature Profile from a Single Depth Observation: A Simple Empirical Heatflow Solution

    NASA Technical Reports Server (NTRS)

    Holmes, Thomas; Owe, Manfred; deJeu, Richard

    2007-01-01

    Two data sets of experimental field observations spanning a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux cannot reproduce the extreme temperature variations that occur near the surface. This paper proposes a simple approach for modeling surface soil temperature profiles from a single depth observation. The approach consists of two parts: (1) modeling an instantaneous ground heat flux profile based on net radiation and the ground heat flux at 5 cm depth; and (2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near-surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.
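    The two-part scheme described above can be sketched as follows. This is a minimal illustration only: the linear flux interpolation, the thermal conductivity, and all parameter values are assumptions for demonstration, not the authors' calibrated model.

    ```python
    import numpy as np

    def flux_profile(z, G0, G5, z_ref=0.05):
        """Ground heat flux [W/m^2], interpolated linearly from an assumed
        surface value G0 (tied to net radiation) to the measured flux G5
        at z_ref = 5 cm, held constant below."""
        return G0 + (G5 - G0) * np.clip(z / z_ref, 0.0, 1.0)

    def temperature_profile(depths, T_obs, z_obs, G0, G5, k=1.0):
        """Extrapolate one observation T_obs at depth z_obs [m] to other
        depths by integrating Fourier's law, dT/dz = -G(z)/k
        (k: thermal conductivity, W/m/K)."""
        out = []
        for zi in depths:
            zs = np.linspace(z_obs, zi, 201)
            g = flux_profile(zs, G0, G5) / k
            integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(zs))
            out.append(T_obs - integral)
        return out

    # Daytime example: strong flux into the soil, observation at 2.5 cm.
    T = temperature_profile([0.0, 0.025, 0.05], T_obs=20.0, z_obs=0.025,
                            G0=150.0, G5=50.0)
    ```

    With a positive (downward) daytime flux the reconstructed profile is warmest at the surface and cools with depth, which is the qualitative behavior the constant-flux methods fail to capture near the surface.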

  19. A cooperation and competition based simple cell receptive field model and study of feed-forward linear and nonlinear contributions to orientation selectivity.

    PubMed

    Bhaumik, Basabi; Mathur, Mona

    2003-01-01

    We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by diffusive cooperation and resource-limited competition that guide axonal growth and retraction in the geniculocortical pathway. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated into a three-layer visual pathway model consisting of retina, LGN, and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean HWHH (half width at half the height of maximum response) of simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in tuning due to nonlinear spiking mechanisms that include the effects of threshold voltage and the synaptic scaling factor.

  20. Stability of Retained Austenite in High-Al, Low-Si TRIP-Assisted Steels Processed via Continuous Galvanizing Heat Treatments

    NASA Astrophysics Data System (ADS)

    McDermid, J. R.; Zurob, H. S.; Bian, Y.

    2011-12-01

    Two galvanizable high-Al, low-Si transformation-induced plasticity (TRIP)-assisted steels were subjected to isothermal bainitic transformation (IBT) temperatures compatible with the continuous galvanizing (CGL) process, and the kinetics of the retained austenite (RA) to martensite transformation during room-temperature deformation were studied as a function of heat treatment parameters. It was determined that there was a direct relationship between the rate of strain-induced transformation and optimal mechanical properties, with more gradual transformation rates being favored. The RA to martensite transformation kinetics were successfully modeled using two methodologies: (1) the strain-based model of Olson and Cohen and (2) a simple relationship with the normalized flow stress, (σ_flow - σ_YS)/σ_YS. For the strain-based model, it was determined that the model parameters were a strong function of strain and alloy thermal processing history and a weak function of alloy chemistry. It was verified that the strain-based model in the present work agrees well with those derived by previous workers using TRIP-assisted steels of similar composition. It was further determined that the RA to martensite transformation kinetics for all alloys and heat treatments could be described using a simple model versus the normalized flow stress, indicating that the RA to martensite transformation is stress-induced rather than strain-induced for temperatures above M_s^σ.
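    The Olson-Cohen strain-based kinetics referenced above have a standard closed form, f = 1 - exp(-β(1 - exp(-αε))^n), where ε is plastic strain, α governs shear-band formation, β governs nucleation at band intersections, and n ≈ 2. A sketch with illustrative parameter values (not the paper's fitted values):

    ```python
    import math

    def olson_cohen(eps, alpha=6.0, beta=4.0, n=2.0):
        """Fraction of austenite transformed to martensite at true plastic
        strain eps, per the Olson-Cohen sigmoidal kinetics."""
        return 1.0 - math.exp(-beta * (1.0 - math.exp(-alpha * eps)) ** n)

    # Transformed fraction at 0%, 5%, ..., 30% strain.
    fractions = [olson_cohen(e / 100) for e in range(0, 31, 5)]
    ```

    The curve is sigmoidal: slow at small strains (few shear-band intersections), fastest at intermediate strains, and saturating below 1 at large strains.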

  1. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    PubMed

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define in a GIS environment thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that processing times are satisfactory and that the definition of sources and receptors and the retrieval of output are straightforward in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
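    For orientation, a toy version of the area-source superposition that Gaussian models of this kind perform can be sketched as follows. The dispersion coefficients and all numbers are illustrative assumptions; AERMOD and CAREA use far more elaborate formulations.

    ```python
    import numpy as np

    def point_source_conc(x, y, z, Q, u, H=0.0):
        """Gaussian-plume concentration from one point source at the origin.
        x: downwind, y: crosswind, z: receptor height [m]; Q [g/s]; u [m/s].
        Includes the standard ground-reflection term."""
        if x <= 0:
            return 0.0
        sig_y = 0.08 * x ** 0.9          # illustrative power-law coefficients
        sig_z = 0.06 * x ** 0.8
        cross = np.exp(-y**2 / (2 * sig_y**2))
        vert = (np.exp(-(z - H)**2 / (2 * sig_z**2))
                + np.exp(-(z + H)**2 / (2 * sig_z**2)))
        return Q / (2 * np.pi * u * sig_y * sig_z) * cross * vert

    def area_source_conc(x, y, z, Q_total, u, sources):
        """Approximate an area source by point sources sharing Q_total;
        sources: list of (sx, sy) positions inside the source polygon."""
        q = Q_total / len(sources)
        return sum(point_source_conc(x - sx, y - sy, z, q, u)
                   for sx, sy in sources)

    grid = [(sx, sy) for sx in (0.0, 10.0) for sy in (-5.0, 5.0)]
    c = area_source_conc(200.0, 0.0, 1.5, Q_total=1.0, u=3.0, sources=grid)
    ```

    Discretizing the polygon into many such points is what makes arbitrary shapes (including holes) tractable, at the cost of run time proportional to sources times receptors.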

  2. Intrinsic dimensionality predicts the saliency of natural dynamic scenes.

    PubMed

    Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt

    2012-06-01

    Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.

  3. Edge Detection Based On the Characteristic of Primary Visual Cortex Cells

    NASA Astrophysics Data System (ADS)

    Zhu, M. M.; Xu, Y. L.; Ma, H. Q.

    2018-01-01

    To address the difficulty of balancing edge-detection accuracy against noise robustness, and drawing on the dynamic and static perception of primary visual cortex (V1) cells, a V1 cell model is established to perform edge detection. A spatiotemporal filter is adopted to simulate the receptive field of V1 simple cells, and the model V1 cell is obtained by integrating the responses of simple cells through half-wave rectification and normalization. The edges of natural images are then detected using the static perception of V1 cells. The simulation results show that the V1 model can broadly fit the biological data and generalizes across biological observations. Moreover, compared with other edge-detection operators, the proposed model is more effective and more robust.
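    The static part of the pipeline (oriented simple-cell filtering, half-wave rectification, normalization) can be sketched with Gabor-like kernels. Kernel sizes, the number of orientations, and the divisive-normalization form are illustrative assumptions, not the paper's exact model.

    ```python
    import numpy as np

    def gabor_odd(size=9, theta=0.0, lam=4.0, sigma=2.0):
        """Odd-symmetric Gabor-like kernel mimicking a V1 simple-cell RF."""
        r = np.arange(size) - size // 2
        xx, yy = np.meshgrid(r, r)
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) \
            * np.sin(2 * np.pi * xr / lam)

    def v1_edge_map(img, n_orient=4, eps=1e-6):
        """Filter at several orientations, half-wave rectify, then
        divisively normalize across orientations."""
        h, w = img.shape
        resp = []
        for kk in range(n_orient):
            ker = gabor_odd(theta=np.pi * kk / n_orient)
            s = ker.shape[0] // 2
            out = np.zeros_like(img, dtype=float)
            for i in range(s, h - s):
                for j in range(s, w - s):
                    patch = img[i - s:i + s + 1, j - s:j + s + 1]
                    out[i, j] = np.sum(patch * ker)
            resp.append(np.maximum(out, 0.0))        # half-wave rectification
        resp = np.stack(resp)
        return resp.max(axis=0) / (eps + resp.sum(axis=0))  # normalization

    img = np.zeros((32, 32))
    img[:, 16:] = 1.0                                # vertical step edge
    edges = v1_edge_map(img)
    ```

    On the synthetic step image, the response concentrates along the vertical edge and stays at zero in the flat regions.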

  4. The effect of resistance level and stability demands on recruitment patterns and internal loading of spine in dynamic flexion and extension using a simple trunk model.

    PubMed

    Zeinali-Davarani, Shahrokh; Shirazi-Adl, Aboulfazl; Dariush, Behzad; Hemami, Hooshang; Parnianpour, Mohamad

    2011-07-01

    The effects of external resistance on the recruitment of trunk muscles in sagittal movements, and the coactivation mechanism that maintains spinal stability, were investigated using a simple computational model of iso-resistive spine sagittal movements. Neural excitation of muscles was obtained with an inverse dynamics approach combined with a stability-based optimisation. Trunk flexion and extension movements between 60° flexion and the upright posture against various resistance levels were simulated. Incorporating the stability constraint in the optimisation algorithm required higher antagonistic activities at all resistance levels, mostly near the upright position. Extension movements showed higher coactivation with higher resistance, whereas flexion movements demonstrated lower coactivation, indicating a greater stability demand in backward extension movements against higher resistance in the neighbourhood of the upright posture. Optimal extension profiles based on minimum jerk, work, and power had distinct kinematic profiles, which led to recruitment patterns with different timing and amplitude of activation.

  5. Parallel constraint satisfaction in memory-based decisions.

    PubMed

    Glöckner, Andreas; Hodges, Sara D

    2011-01-01

    Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model of decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take-the-best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration, and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions, we observed a predominant usage of compensatory strategies under all time-pressure conditions, even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for the use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
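    A generic PCS-style network for a binary choice can be sketched as below: cue nodes feed two mutually inhibiting option nodes, and activations are updated iteratively until they settle. The update rule, weights, and constants are illustrative assumptions, not Glöckner and Betsch's exact parameterization.

    ```python
    import numpy as np

    def pcs_choose(cue_weights, inhibition=-0.2, decay=0.5,
                   step=0.1, tol=1e-8, max_iter=10_000):
        """cue_weights: (n_cues, 2); entry [c, o] is the support cue c
        lends to option o. Returns (chosen option, iterations to settle)."""
        W = np.asarray(cue_weights, dtype=float)
        a = np.zeros(2)                       # option-node activations
        cues = np.ones(W.shape[0])            # cue nodes clamped on
        for it in range(max_iter):
            # cue input plus mutual inhibition between the two options
            net = cues @ W + inhibition * a[::-1]
            new = np.clip(a * (1 - decay) + step * net, -1.0, 1.0)
            if np.max(np.abs(new - a)) < tol:
                break
            a = new
        return int(np.argmax(a)), it + 1

    choice, n_iter = pcs_choose([[0.8, 0.2],   # cue 1 strongly favors option 0
                                 [0.5, 0.4],
                                 [0.1, 0.6]])
    ```

    The number of iterations to settle is the model's natural analogue of decision time; more inconsistent cue patterns take longer to stabilize, matching the prediction tested in the studies.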

  6. Model systems for defining initiation, promotion, and progression of skin neoplasms.

    PubMed

    Boutwell, R K

    1989-01-01

    A number of items that must be considered in designing and choosing a suitable model for initiation and promotion testing have been described. Although these items may seem complex, tests for initiation and promotion are, in reality, quite simple and provide a rational approach to carcinogen testing. Several tests have been described here and Eastin, elsewhere in this book, describes the validation of a simple and highly recommended test. The processes involved in initiation and promotion are qualitatively different. The criteria for concern about possible human hazards as well as for regulation of initiators and promoters should be based on these qualitative differences. Realistic appraisal of the risk must be based on the level and nature of the potential hazard. In particular, it must be recognized that promoting action is reversible, that a threshold exists, and that promotion is readily inhibited. Therefore, animal tests that differentiate between potential initiators and promoters are essential to enable a logical assessment of human risk and the implementation of appropriate protective measures based on scientific facts.

  7. Influence of dissolved organic carbon content on modelling natural organic matter acid-base properties.

    PubMed

    Garnier, Cédric; Mounier, Stéphane; Benaïm, Jean Yves

    2004-10-01

    Natural organic matter (NOM) behaviour towards protons is an important parameter for understanding NOM fate in the environment. Moreover, it is necessary to determine NOM acid-base properties before investigating trace metal complexation by natural organic matter. This work focuses on the possibility of determining these acid-base properties by accurate and simple titrations, even at low organic matter concentrations. Experiments were conducted on concentrated and diluted solutions of humic and fulvic acids extracted from the Laurentian River, on concentrated and diluted model solutions of well-known simple molecules (acetic and phenolic acids), and on natural samples from the Seine river (France) that were not pre-concentrated. Titration experiments were modelled by a 6-acidic-site discrete model, except for the model solutions. The modelling software used, called PROSECE (Programme d'Optimisation et de SpEciation Chimique dans l'Environnement), was developed in our laboratory and is based on mass-balance equilibrium resolution. The results obtained on extracted organic matter and model solutions point out a threshold value for a confident determination of the studied organic matter's acid-base properties. They also show an aberrant decrease of the carboxylic/phenolic ratio with increasing sample dilution. This shift is neither due to any conformational effect, since it is also observed in model solutions, nor to ionic strength variations, which were controlled during all experiments. Instead, it could be the result of an electrode malfunction occurring at basic pH values, whose effect is amplified at low total concentrations of acidic sites. So, in our conditions, the limit for a correct modelling of NOM acid-base properties is defined as 0.04 meq of total analysed acidic site concentration. As for the analysed natural samples, their high acidic site content makes it possible to model their behaviour despite the low organic carbon concentration.

  8. DbMap: improving database interoperability issues in medical software using a simple, Java-Xml based solution.

    PubMed Central

    Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.

    2000-01-01

    In medical software development, the use of databases plays a central role. However, most of the databases have heterogeneous encoding and data models. To deal with these variations in the application code directly is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature, which will be presented. We present a simple solution, based on a Java library, and a central Metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being the simplicity in maintenance. PMID:11079915

  9. A simple method for EEG guided transcranial electrical stimulation without models.

    PubMed

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG: (1) selects locations to be used for stimulation; (2) determines the current applied to each electrode. Each step is performed based solely on the EEG, with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques: (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
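    One plausible reading of the voltage-to-current flavor of such a recipe is sketched below: keep the k electrodes with the largest EEG amplitudes, set each stimulation current proportional to its EEG voltage, and rescale so the currents sum to zero (a physical requirement for tES) at a fixed total injected current. The top-k selection, zero-mean rescaling, and 2 mA total are illustrative assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    def eeg_to_tes_currents(voltages, k=4, total_ma=2.0):
        """Map per-electrode EEG voltages to tES currents [mA]:
        currents are nonzero only at the k strongest electrodes,
        proportional to voltage, zero-sum, with the positive currents
        summing to total_ma."""
        v = np.asarray(voltages, dtype=float)
        idx = np.argsort(-np.abs(v))[:k]        # k strongest electrodes
        i = np.zeros_like(v)
        i[idx] = v[idx] - v[idx].mean()         # enforce zero net current
        scale = total_ma / np.abs(i).sum() * 2  # positives sum to total_ma
        return i * scale

    currents = eeg_to_tes_currents([4.0, -3.0, 0.5, -0.2, 2.5, -1.0], k=4)
    ```

    The appeal of such linear rules is that they need only the EEG topography itself: no MRI, no head model, and no source localization.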

  10. A simple method for EEG guided transcranial electrical stimulation without models

    NASA Astrophysics Data System (ADS)

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG: (1) selects locations to be used for stimulation; (2) determines the current applied to each electrode. Each step is performed based solely on the EEG, with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a ‘gold standard’ numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques: (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
Significance. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.

  11. Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?

    USGS Publications Warehouse

    Geier, J.; Voss, C.I.; Dverstorp, B.

    2002-01-01

    A simple evaluation can be used to characterize the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealized models of pore geometry. Application to an intensively characterized site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geological barrier performance. Comparison with seven other less intensively characterized crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterized by state-of-the-art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.

  12. Conceptual uncertainty in crystalline bedrock: Is simple evaluation the only practical approach?

    USGS Publications Warehouse

    Geier, J.; Voss, C.I.; Dverstorp, B.

    2002-01-01

    A simple evaluation can be used to characterise the capacity of crystalline bedrock to act as a barrier to releases of radionuclides from a nuclear waste repository. Physically plausible bounds on groundwater flow and an effective transport-resistance parameter are estimated based on fundamental principles and idealised models of pore geometry. Application to an intensively characterised site in Sweden shows that, due to high spatial variability and uncertainty regarding properties of transport paths, the uncertainty associated with the geological barrier is too high to allow meaningful discrimination between good and poor performance. Application of more complex (stochastic-continuum and discrete-fracture-network) models does not yield a significant improvement in the resolution of geologic-barrier performance. Comparison with seven other less intensively characterised crystalline study sites in Sweden leads to similar results, raising a question as to what extent the geological barrier function can be characterised by state-of-the-art site investigation methods prior to repository construction. A simple evaluation provides a simple and robust practical approach for inclusion in performance assessment.

  13. Estimation of a simple agent-based model of financial markets: An application to Australian stock and foreign exchange data

    NASA Astrophysics Data System (ADS)

    Alfarano, Simone; Lux, Thomas; Wagner, Friedrich

    2006-10-01

    Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.
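    The symmetric Kirman ant process underlying the model above is easy to simulate: each step, one randomly chosen agent switches group either spontaneously (probability a) or by recruitment, with probability proportional to the size of the other group. Parameter values below are illustrative; the paper's model additionally allows asymmetry between the two groups.

    ```python
    import random

    def kirman(n_agents=100, a=0.01, b=0.01, steps=50_000, seed=1):
        """Simulate the fraction of agents in group 1 (e.g. chartists)
        under Kirman's herding dynamics."""
        rng = random.Random(seed)
        n1 = n_agents // 2
        history = []
        for _ in range(steps):
            agent_in_1 = rng.random() < n1 / n_agents
            other = (n_agents - n1) if agent_in_1 else n1
            # spontaneous switch (a) plus recruitment by the other group (b)
            p_switch = a + b * other / (n_agents - 1)
            if rng.random() < p_switch:
                n1 += -1 if agent_in_1 else 1
            history.append(n1 / n_agents)
        return history

    frac = kirman()
    ```

    For small a relative to b the fraction spends long periods near the extremes with occasional regime switches; mapped onto chartist versus fundamentalist groups in an asset-pricing framework, those swings are what generate the periodic bubbles and bursts.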

  14. Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser

    NASA Technical Reports Server (NTRS)

    Monson, D. J.

    1977-01-01

    The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.

  15. An Approximation to the Adaptive Exponential Integrate-and-Fire Neuron Model Allows Fast and Predictive Fitting to Physiological Data.

    PubMed

    Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel

    2012-01-01

    For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
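    The fitting idea (match a closed-form f-I expression to measured rates rather than integrate differential equations) can be illustrated with an even simpler model than the AdEx approximation: the standard leaky integrate-and-fire f-I curve, f(I) = 1/(t_ref + τ ln(RI/(RI - V_th))) for RI > V_th, fitted to synthetic data by brute-force least squares. All values are illustrative, not fits to real recordings.

    ```python
    import numpy as np

    def lif_rate(I, tau, R, v_th=20e-3, t_ref=2e-3):
        """Closed-form LIF firing rate [Hz] vs injected current I [A];
        tau: membrane time constant [s], R: input resistance [Ohm]."""
        I = np.asarray(I, dtype=float)
        drive = R * I
        f = np.zeros_like(I)
        ok = drive > v_th                      # above rheobase only
        f[ok] = 1.0 / (t_ref + tau * np.log(drive[ok] / (drive[ok] - v_th)))
        return f

    I = np.linspace(0.1e-9, 1.0e-9, 20)        # injected currents [A]
    f_data = lif_rate(I, tau=10e-3, R=100e6)   # synthetic "recorded" f-I curve

    # Brute-force least squares over a (tau, R) grid that contains the truth.
    best = min(((np.sum((lif_rate(I, t, r) - f_data) ** 2), t, r)
                for t in np.linspace(5e-3, 20e-3, 16)
                for r in np.linspace(50e6, 200e6, 16)),
               key=lambda x: x[0])
    sse, tau_fit, R_fit = best
    ```

    Because each candidate evaluation is a closed-form expression rather than a numerical integration, sweeping hundreds of parameter combinations is cheap, which is the speed advantage the abstract reports.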

  16. Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.; Esmaeili, S.

    2015-12-01

    We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires a forward model as input; forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm preferable. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix of derivatives of the observed data with respect to the model parameters is computed using a finite-difference method. An iterative process then builds new models by updating the initial values so as to minimize the objective function. Another measure of the goodness of the final model is the correlation coefficient, calculated based on the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that the physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys. 
Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
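    The inversion loop described above can be illustrated in miniature: a Levenberg-Marquardt iteration with a finite-difference Jacobian, wrapped around a stand-in forward model (here a two-parameter damped sinusoid, not a real GPRMax simulation; parameter names and values are illustrative). PEST wraps essentially this kind of loop around an external forward model.

    ```python
    import numpy as np

    def forward(p, t):
        """Stand-in forward model with two 'physical' parameters."""
        depth, eps = p
        return np.exp(-eps * t) * np.sin(2 * np.pi * depth * t)

    def levenberg_marquardt(p0, t, d_obs, lam=1e-2, n_iter=50, h=1e-6):
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r = forward(p, t) - d_obs                   # residuals
            J = np.column_stack([                       # finite-difference Jacobian
                (forward(p + h * e, t) - forward(p, t)) / h
                for e in np.eye(len(p))])
            A = J.T @ J + lam * np.eye(len(p))          # damped normal equations
            step = np.linalg.solve(A, -J.T @ r)
            if np.sum((forward(p + step, t) - d_obs) ** 2) < np.sum(r ** 2):
                p, lam = p + step, lam * 0.5            # accept, relax damping
            else:
                lam *= 10.0                             # reject, increase damping
        return p

    t = np.linspace(0.0, 2.0, 200)
    true_p = np.array([1.5, 0.8])
    p_fit = levenberg_marquardt([1.4, 0.7], t, forward(true_p, t))
    ```

    As the text notes, the problem is highly nonlinear: the initial model must lie within the basin of attraction of the true solution, which is why the starting guess here is deliberately close to the truth.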

  17. Précis of Simple heuristics that make us smart.

    PubMed

    Todd, P M; Gigerenzer, G

    2000-10-01

    How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
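    Two of the heuristic classes mentioned above can be sketched for a binary choice between objects described by binary cue profiles: take-the-best (one-reason decision making, using only the single most valid discriminating cue) versus an equal-weight tally. The cue profiles and validities below are invented for illustration.

    ```python
    def take_the_best(a, b, validities):
        """Inspect cues in descending validity order; stop at the first
        cue that discriminates. Returns 1 (choose a), -1 (choose b),
        or 0 (guess)."""
        for i in sorted(range(len(validities)), key=lambda i: -validities[i]):
            if a[i] != b[i]:
                return 1 if a[i] > b[i] else -1
        return 0

    def tally(a, b):
        """Equal-weight heuristic: compare counts of positive cues."""
        diff = sum(a) - sum(b)
        return (diff > 0) - (diff < 0)

    city_a = [1, 0, 0]        # e.g. has-soccer-team, is-capital, has-airport
    city_b = [0, 1, 1]
    validities = [0.9, 0.7, 0.6]

    ttb_choice = take_the_best(city_a, city_b, validities)  # one cue decides
    tally_choice = tally(city_a, city_b)                    # all cues counted
    ```

    The example is chosen so the two heuristics disagree: take-the-best follows the single most valid cue and picks the first object, while tallying counts all cues and picks the second, illustrating how frugal search and stopping rules, not just weighting, determine the choice.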

  18. Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.

    PubMed

    Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J

    2015-02-01

    The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) an LR model that assumed linearity and additivity (simple LR model), (2) an LR model incorporating restricted cubic splines and interactions (flexible LR model), (3) a support vector machine, (4) a random forest, and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model on the aforementioned measures or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks. Copyright © 2014 Elsevier Ltd. All rights reserved.
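    Why a "flexible" LR can keep up with nonlinear learners is easy to demonstrate on synthetic data: a main-effects-only LR fails on XOR-like labels, while the same LR with one added interaction feature separates them. The data, the gradient-descent fitter, and all settings are illustrative, unrelated to the study's clinical dataset.

    ```python
    import numpy as np

    def fit_lr(X, y, lr=0.5, steps=2000):
        """Plain logistic regression via batch gradient descent."""
        X1 = np.hstack([np.ones((len(X), 1)), X])       # intercept column
        w = np.zeros(X1.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-np.clip(X1 @ w, -30, 30)))
            w -= lr * X1.T @ (p - y) / len(y)
        return w

    def accuracy(X, y, w):
        X1 = np.hstack([np.ones((len(X), 1)), X])
        return np.mean(((X1 @ w) > 0) == y)

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(400, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)           # XOR-like labels

    acc_simple = accuracy(X, y, fit_lr(X, y))           # main effects only
    X_flex = np.hstack([X, (X[:, 0] * X[:, 1])[:, None]])
    acc_flex = accuracy(X_flex, y, fit_lr(X_flex, y))   # + interaction term
    ```

    The simple model hovers near chance because no linear boundary fits an XOR pattern, while the interaction-augmented model classifies almost perfectly; splines play the analogous role for nonlinear main effects.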

  19. Uncertainty about fundamentals and herding behavior in the FOREX market

    NASA Astrophysics Data System (ADS)

    Kaltwasser, Pablo Rovira

    2010-03-01

    It is traditionally assumed in finance models that the fundamental value of assets is known with certainty. Although this is an appealing simplifying assumption it is by no means based on empirical evidence. A simple heterogeneous agent model of the exchange rate is presented. In the model, traders do not observe the true underlying fundamental exchange rate and as a consequence they base their trades on beliefs about this variable. Despite the fact that only fundamentalist traders operate in the market, the model belongs to the heterogeneous agent literature, as traders have different beliefs about the fundamental rate.

  20. Defining strategies to win in the Internet market

    NASA Astrophysics Data System (ADS)

    López, Luis; Sanjuán, Miguel A. F.

    2001-12-01

    This paper analyzes a model for the competition dynamics of web sites on the Internet, based on the Lotka-Volterra competition equations. This model shows the well-known appearance of a winner-take-all characteristic and rests on the nonvalidity of traditional supply-and-demand equilibrium theory in these kinds of markets. From the stability analysis of the model, we establish a series of rules that are useful for defining strategies in the Internet market. One of the most important results that emerge from this simple model is the appearance of some unexpected phenomena related to the collaboration and competition between sites.
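    The winner-take-all behavior referred to above can be reproduced with a two-site Lotka-Volterra competition model. The sketch below is a generic illustration with invented parameter values (interspecific competition stronger than intraspecific), under which coexistence is unstable and the initially larger site captures the whole market.

```python
def compete(x0, y0, a12=1.5, a21=1.5, r=1.0, dt=0.01, steps=20000):
    """Euler integration of two-site Lotka-Volterra competition with
    carrying capacities normalized to 1. With a12 * a21 > 1 the
    coexistence equilibrium is an unstable saddle: winner-take-all."""
    x, y = x0, y0
    for _ in range(steps):
        dx = r * x * (1 - x - a12 * y)
        dy = r * y * (1 - y - a21 * x)
        x += dt * dx
        y += dt * dy
    return x, y
```

    Starting from shares 0.6 and 0.4, the first site approaches its full carrying capacity while the second is driven out.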

  1. A model-based analysis of a display for helicopter landing approach. [control theoretical model of human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Wheat, L. W.

    1975-01-01

    A control-theoretic model of the human pilot was used to analyze a baseline electronic cockpit display in a helicopter landing approach task. The head-down display was created on a stroke-written cathode ray tube, and the vehicle was a UH-1H helicopter. The landing approach task consisted of maintaining prescribed groundspeed and glideslope in the presence of random vertical and horizontal turbulence. The pilot model was also used to generate and evaluate display quickening laws designed to improve pilot-vehicle performance. A simple fixed-base simulation provided comparative tracking data.

  2. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    NASA Astrophysics Data System (ADS)

    Nienhuis, Jaap H.; Ashton, Andrew D.; Kettner, Albert J.; Giosan, Liviu

    2017-09-01

    The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology, with simple fluvial and wave climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  3. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling, and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  4. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling, and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
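    The core FAST computation, estimating a main-effect sensitivity index from Fourier coefficients at each parameter's driver frequency, can be sketched as follows. This is a hypothetical minimal implementation of the traditional search-curve sampling; the frequencies, sample size, and number of harmonics are illustrative choices, not values from the paper.

```python
import math

def fast_indices(model, freqs, n=1001, harmonics=4):
    """Minimal FAST sketch (search-curve sampling): each parameter is
    driven at its own frequency along a space-filling curve, and its
    first-order index is read off the Fourier coefficients at that
    frequency and its harmonics."""
    s = [-math.pi + 2 * math.pi * (k + 0.5) / n for k in range(n)]
    # Search-curve transformation: each x_i is uniform on [0, 1].
    xs = [[0.5 + math.asin(math.sin(w * sj)) / math.pi for w in freqs]
          for sj in s]
    y = [model(x) for x in xs]
    mean = sum(y) / n
    total_var = sum((v - mean) ** 2 for v in y) / n
    indices = []
    for w in freqs:
        d = 0.0
        for h in range(1, harmonics + 1):
            a = sum(v * math.cos(h * w * sj) for v, sj in zip(y, s)) / n
            b = sum(v * math.sin(h * w * sj) for v, sj in zip(y, s)) / n
            d += 2 * (a * a + b * b)  # partial variance at harmonic h
        indices.append(d / total_var)
    return indices
```

    For the additive test function Y = 2*x1 + x2 with independent uniform inputs, the analytic first-order indices are 0.8 and 0.2, which the sketch reproduces closely.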

  5. The Evaluation of Teachers' Job Performance Based on Total Quality Management (TQM)

    ERIC Educational Resources Information Center

    Shahmohammadi, Nayereh

    2017-01-01

    This study aimed to evaluate teachers' job performance based on total quality management (TQM) model. This was a descriptive survey study. The target population consisted of all primary school teachers in Karaj (N = 2917). Using Cochran formula and simple random sampling, 340 participants were selected as sample. A total quality management…

  6. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
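    The SPRT stopping rule described above can be sketched in a few lines: accumulate a log-likelihood ratio observation by observation and stop as soon as it crosses one of the two Wald critical values. This generic sketch is not the IRT-specific formulation of the paper; the error rates and increments are illustrative.

```python
import math

def sprt(log_lr_increments, alpha=0.05, beta=0.05):
    """Wald sequential probability ratio test: accumulate the
    log-likelihood ratio and stop when it crosses a critical value."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for i, inc in enumerate(log_lr_increments, start=1):
        llr += inc
        if llr >= upper:
            return "H1", i
        if llr <= lower:
            return "H0", i
    return "continue", len(log_lr_increments)
```

    With alpha = beta = 0.05 the boundaries are roughly ±2.94, so a stream of unit log-likelihood increments terminates after three observations.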

  7. Model-Based Reinforcement Learning under Concurrent Schedules of Reinforcement in Rodents

    ERIC Educational Resources Information Center

    Huh, Namjung; Jo, Suhyun; Kim, Hoseok; Sul, Jung Hoon; Jung, Min Whan

    2009-01-01

    Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's…

  8. SIMPLE ANALYTICAL MODEL FOR HEAT FLOW IN FRACTURES-APPLICATION TO STEAM ENHANCED REMEDIATION CONDUCTED IN FRACTURED ROCK

    EPA Science Inventory

    Remediation of fractured rock sites contaminated by non-aqueous phase liquids has long been recognized as the most difficult undertaking of any site clean-up. Recent pilot studies conducted at the Edwards Air Force Base in California and the former Loring Air Force Base in Maine ...

  9. SIMPLE ANALYTICAL MODEL FOR HEAT FLOW IN FRACTURES - APPLICATION TO STEAM ENHANCED REMEDIATION CONDUCTED IN FRACTURED ROCK

    EPA Science Inventory

    Remediation of fractured rock sites contaminated by non-aqueous phase liquids has long been recognized as the most difficult undertaking of any site clean-up. Recent pilot studies conducted at the Edwards Air Force Base in California and the former Loring Air Force Base in Maine ...

  10. Electrical properties of III-Nitride LEDs: Recombination-based injection model and theoretical limits to electrical efficiency and electroluminescent cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, Aurelien, E-mail: adavid@soraa.com; Hurni, Christophe A.; Young, Nathan G.

    The current-voltage characteristic and ideality factor of III-Nitride quantum-well light-emitting diodes (LEDs) grown on bulk GaN substrates are investigated. At operating temperature, these electrical properties exhibit a simple behavior. A model in which only active-region recombinations contribute to the LED current is found to account for the experimental results. The limit of LED electrical efficiency is discussed based on the model and on thermodynamic arguments, and implications for electroluminescent cooling are examined.

  11. AR(p) -based detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Alvarez-Ramirez, J.; Rodriguez, E.

    2018-07-01

    Autoregressive models are commonly used for modeling time series from nature, economics and finance. This work explored simple autoregressive AR(p) models to remove long-term trends in detrended fluctuation analysis (DFA). Crude oil prices and the bitcoin exchange rate were considered, the former corresponding to a mature market and the latter to an emerging market. Results showed that AR(p)-based DFA performs similarly to traditional DFA. However, the former provides information on the stability of long-term trends, which is valuable for understanding and quantifying the dynamics of complex time series from financial systems.
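    The AR(p) detrending step at the heart of this approach can be sketched for p = 1: fit the autoregressive coefficient by least squares and detrend by taking the residuals, on which a DFA-style fluctuation analysis would then operate. The series, seed, and coefficient below are invented for illustration.

```python
import random

def fit_ar1(y):
    """Least-squares estimate of phi in y[t] = phi * y[t-1] + noise:
    the p = 1 case of the AR(p) detrending step."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

# Synthetic AR(1) series with phi = 0.8 and unit-variance noise.
rng = random.Random(42)
series = [0.0]
for _ in range(2000):
    series.append(0.8 * series[-1] + rng.gauss(0, 1))

phi = fit_ar1(series)
# Autoregressive trend removed: residuals approximate the driving noise.
residuals = [series[t] - phi * series[t - 1] for t in range(1, len(series))]
```

    On this synthetic series the fitted coefficient recovers the generating value and the residual variance is close to that of the driving noise.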

  12. Roles of dark energy perturbations in dynamical dark energy models: can we ignore them?

    PubMed

    Park, Chan-Gyung; Hwang, Jai-chan; Lee, Jae-heon; Noh, Hyerim

    2009-10-09

    We show the importance of properly including the perturbations of the dark energy component in dynamical dark energy models based on a scalar field and modified gravity theories in order to meet present and future observational precision. Based on a simple scaling scalar-field dark energy model, we show that observationally distinguishable, substantial differences appear when the dark energy perturbation is ignored. Ignoring it renders the perturbed system of equations inconsistent, and the resulting deviations in (gauge-invariant) power spectra depend on the gauge choice.

  13. Post-game analysis: An initial experiment for heuristic-based resource management in concurrent systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.

    1987-01-01

    In concurrent systems, a major responsibility of the resource management system is to decide how the application program is to be mapped onto the multi-processor. Instead of using abstract program and machine models, a generate-and-test framework known as 'post-game analysis' that is based on data gathered during program execution is proposed. Each iteration consists of (1) (a simulation of) an execution of the program; (2) analysis of the data gathered; and (3) the proposal of a new mapping that would have a smaller execution time. These heuristics are applied to predict execution time changes in response to small perturbations applied to the current mapping. An initial experiment was carried out using simple strategies on 'pipeline-like' applications. The results obtained from four simple strategies demonstrated that for this kind of application, even simple strategies can produce acceptable speed-up with a small number of iterations.

  14. The Productivity Dilemma in Workplace Health Promotion.

    PubMed

    Cherniack, Martin

    2015-01-01

    Worksite-based programs to improve workforce health and well-being (Workplace Health Promotion, WHP) have been advanced as conduits for improved worker productivity and decreased health care costs. A countervailing contention in health economics is that the return on investment (ROI) does not merit preventive health investment. METHODS/PROCEDURES: Pertinent studies were reviewed and their results reconsidered. A simple economic model is presented based on conventional and alternate assumptions used in cost-benefit analysis (CBA), such as discounting and negative value. The issues are presented in the format of three conceptual dilemmas. In some occupations, such as nursing, the utility of patient survival and staff health is undervalued. WHP may miss important components of work-related health risk. Altering assumptions on discounting and eliminating the drag of negative value radically changes the CBA value. Simple monetization of a work life, and calculation of the return on workforce health investment as a simple alternate opportunity, involve highly selective interpretations of productivity and utility.

  15. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  16. Chesapeake Bay Sediment Flux Model

    DTIC Science & Technology

    1993-06-01

    …1988; Van der Molen, 1991; Yoshida, 1981.) The model developed below is based on both of these approaches. It incorporates the diagenetic… 288: pp. 289-333. Van der Molen, D.T. (1991): A simple, dynamic model for the simulation of the release of phosphorus from sediments in shallow… 1974; Berner, 1980; van Cappellen and Berner, 1988). These relate the diagenetic production of phosphate to the resulting pore water concentration

  17. Tuning of PID controllers for boiler-turbine units.

    PubMed

    Tan, Wen; Liu, Jizhen; Fang, Fang; Chen, Yanqiao

    2004-10-01

    A simple two-by-two model for a boiler-turbine unit is demonstrated in this paper. The model can capture the essential dynamics of a unit. The design of a coordinated controller is discussed based on this model. A PID control structure is derived, and a tuning procedure is proposed. The examples show that the method is easy to apply and can achieve acceptable performance.
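    A PID loop of the kind being tuned can be sketched against a toy first-order plant, dy/dt = -y + u. The plant and gains below are invented for illustration and are not the boiler-turbine model or the paper's tuning procedure.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Discrete PID controller driving a toy first-order plant
    dy/dt = -y + u toward the setpoint (illustrative sketch)."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        y += dt * (-y + u)                          # first-order plant
    return y
```

    With modest PI gains the integral action removes the steady-state error and the output settles at the setpoint.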

  18. Investigation of shear damage considering the evolution of anisotropy

    NASA Astrophysics Data System (ADS)

    Kweon, S.

    2013-12-01

    The damage that occurs in shear deformations is investigated in view of the evolution of anisotropy. It is widely believed in the mechanics research community that damage (or porosity) does not evolve (increase) in shear deformations, since the hydrostatic stress in shear is zero. This paper shows that this statement can be false in large deformations of simple shear. The simulation using the proposed anisotropic ductile fracture model (macro-scale) in this study indicates that the hydrostatic stress becomes nonzero and (thus) porosity evolves (increases or decreases) in the simple shear deformation of anisotropic (orthotropic) materials. The simple shear simulation using a crystal-plasticity-based damage model (meso-scale) shows the same physics as the macro-scale model: porosity evolves due to grain-to-grain interaction, i.e., due to the evolution of anisotropy. Through a series of simple shear simulations, this study investigates the effect of the evolution of anisotropy, i.e., the rotation of the orthotropic axes, on the damage (porosity) evolution. The effect of the evolution of void orientation and void shape on the damage (porosity) evolution is investigated as well. It is found that the interaction among porosity, matrix anisotropy, and void orientation/shape plays a crucial role in the ductile damage of porous materials.

  19. Simple models for studying complex spatiotemporal patterns of animal behavior

    NASA Astrophysics Data System (ADS)

    Tyutyunov, Yuri V.; Titova, Lyudmila I.

    2017-06-01

    Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial differential equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with explicit description of decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges due to self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites or mechanical waves of the media, e.g., sound). These cues play the role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and the properties of the movement stimulus we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a somewhat more detailed 'hybrid' approach that combines simulation of the individual movements with the continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.

  20. Simple predictive model for Early Childhood Caries of Chilean children.

    PubMed

    Fierro Monti, Claudia; Pérez Flores, M; Brunotto, M

    2014-01-01

    Early Childhood Caries (ECC) is the most prevalent chronic disease of childhood in both industrialized and developing countries, and it remains a public health problem, affecting mainly populations considered vulnerable, despite being preventable. The purpose of this study was to obtain a simple predictive model based on risk factors to improve public health strategies for ECC prevention in 3-5-year-old children. Clinical, environmental and psycho-socio-cultural data of children (n=250) aged 3-5 years, of both genders, from the Health Centers were recorded in a clinical history and behavioral survey. 24% of the children presented behavioral problems, with bizarre behavior the main feature observed. The variables associated with dmf ≥4 were difficult child temperament (OR=2.43 [1.34, 4.40]) and home stress (OR=3.14 [1.54, 6.41]). The model for the male gender showed higher accuracy for ECC (AUC=78%, p-value=0.000) than the others. Based on these results, we propose a model in which oral hygiene, sugar intake, male gender, and difficult temperament are the main factors for predicting ECC. This model could be a promising tool for cost-effective early childhood caries control.

  1. The Mine Locomotive Wireless Network Strategy Based on Successive Interference Cancellation

    PubMed Central

    Wu, Liaoyuan; Han, Jianghong; Wei, Xing; Shi, Lei; Ding, Xu

    2015-01-01

    We consider a wireless network strategy based on successive interference cancellation (SIC) for mine locomotives. We first build the original mathematical model for the strategy, which is non-convex. We then examine this model intensively and find certain regularities embedded in it. Based on these findings, we reformulate the model into a new form and design a simple algorithm that assigns each locomotive a proper transmitting scheme during the whole scheduling procedure. Simulation results show that the outcomes obtained through this algorithm are improved by around 50% compared with those that do not apply the SIC technique. PMID:26569240

  2. On the modelling of shallow turbidity flows

    NASA Astrophysics Data System (ADS)

    Liapidevskii, Valery Yu.; Dutykh, Denys; Gisclon, Marguerite

    2018-03-01

    In this study we investigate shallow turbidity density currents and underflows from a mechanical point of view. We propose a simple hyperbolic model for such flows. On one hand, our model is based on very basic conservation principles. On the other hand, the turbulent nature of the flow is also taken into account through the energy dissipation mechanism. Moreover, mixing with the pure water, along with sediment entrainment and deposition processes, is considered, which makes the problem dynamically interesting. One of the main advantages of our model is that it requires the specification of only two modeling parameters: the rate of turbulent dissipation and the rate of pure water entrainment. Consequently, the resulting model turns out to be very simple and self-consistent. The model is validated against several experimental data sets, and several special classes of solutions (travelling, self-similar and steady) are constructed. Unsteady simulations show that some special solutions are realized as asymptotic long-time states of dynamic trajectories.

  3. A Data-driven Approach for Forecasting Next-day River Discharge

    NASA Astrophysics Data System (ADS)

    Sharif, H. O.; Billah, K. S.

    2017-12-01

    This study evaluates the performance of the Soil and Water Assessment Tool (SWAT) eco-hydrological model, a simple Auto-Regressive with eXogenous input (ARX) model, and a Gene Expression Programming (GEP)-based model in one-day-ahead forecasting of the discharge of a subtropical basin (the upper Kentucky River Basin). The three models were calibrated against daily flow at a US Geological Survey (USGS) stream gauging station not affected by flow regulation for the period 2002-2005. The calibrated models were then validated at the same gauging station, as well as at another USGS gauge 88 km downstream, for the period 2008-2010. The results suggest that the simple models outperform a sophisticated hydrological model, with GEP having the advantage of generating functional relationships that allow scientific investigation of the complex nonlinear interrelationships among input variables. Unlike SWAT, GEP and, to some extent, ARX are less sensitive to the length of the calibration time series and do not require a spin-up period.
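    The ARX model is the simplest of the three and easy to make concrete: a one-step-ahead forecast y[t+1] = a*y[t] + b*x[t] fitted by ordinary least squares. The sketch below uses invented synthetic data; in the study's setting x would be an exogenous input such as rainfall.

```python
import math

def fit_arx(y, x):
    """Fit y[t+1] = a * y[t] + b * x[t] by ordinary least squares,
    solving the 2x2 normal equations directly (minimal ARX sketch)."""
    syy = sxx = sxy = sy1y = sx1y = 0.0
    for t in range(len(y) - 1):
        syy += y[t] * y[t]
        sxx += x[t] * x[t]
        sxy += y[t] * x[t]
        sy1y += y[t] * y[t + 1]
        sx1y += x[t] * y[t + 1]
    det = syy * sxx - sxy * sxy
    a = (sy1y * sxx - sx1y * sxy) / det
    b = (sx1y * syy - sy1y * sxy) / det
    return a, b

# Synthetic check: data generated by y[t+1] = 0.7*y[t] + 0.3*x[t].
x = [math.sin(0.3 * t) for t in range(200)]
y = [0.0]
for t in range(199):
    y.append(0.7 * y[t] + 0.3 * x[t])
a, b = fit_arx(y, x)
```

    On noiseless data the fit recovers the generating coefficients exactly; the one-day-ahead forecast is then simply a*y[-1] + b*x[-1].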

  4. Identification of cracks in thick beams with a cracked beam element model

    NASA Astrophysics Data System (ADS)

    Hou, Chuanchuan; Lu, Yong

    2016-12-01

    The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models formulated in terms of the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked-beam flexibility formulated by means of fracture mechanics, and it takes into consideration shear deformation and the coupling between translational and longitudinal vibrations, and thus is particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, and there is no restriction on the location, severity, or number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.

  5. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
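    The essence of performance-based weighting can be sketched in a few lines: each model's projected change is weighted by the inverse of its absolute present-day bias, floored at a natural-variability threshold. This is a toy illustration in the spirit of REA, not the full upgraded formulation; the threshold and inputs are invented.

```python
def rea_weighted_change(changes, biases, eps=0.5):
    """Performance-weighted ensemble average (REA-style sketch):
    weight each model's projected change by eps / max(|bias|, eps),
    so models with bias below the natural-variability threshold eps
    get full weight and poorer models are down-weighted."""
    weights = [eps / max(abs(b), eps) for b in biases]
    return sum(w * c for w, c in zip(weights, changes)) / sum(weights)
```

    A well-performing model (small bias) thus pulls the ensemble estimate toward its projection, which is the rationale for weighting when model quality spreads widely.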

  6. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
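    A crude version of such an uncertainty analysis can be sketched with Monte Carlo sampling: compare the output variance with all parameters varying against the variance with one parameter fixed at its midpoint. The model and parameter ranges below are invented stand-ins, not the actual SMBE.

```python
import random

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def contribution(model, ranges, fix_key, n=20000, seed=1):
    """Fraction of output variance removed when one input is fixed at
    its midpoint: a crude Monte Carlo importance measure (sketch only)."""
    rng = random.Random(seed)
    def draw(skip=None):
        return {k: (lo + hi) / 2 if k == skip else rng.uniform(lo, hi)
                for k, (lo, hi) in ranges.items()}
    v_all = variance([model(draw()) for _ in range(n)])
    v_fix = variance([model(draw(fix_key)) for _ in range(n)])
    return 1 - v_fix / v_all
```

    For an additive toy model whose first input varies over a ten-times wider range, that input dominates the output variance, mirroring how base cation weathering dominated CAL uncertainty in the study.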

  7. Modeling helical proteins using residual dipolar couplings, sparse long-range distance constraints and a simple residue-based force field

    PubMed Central

    Eggimann, Becky L.; Vostrikov, Vitaly V.; Veglia, Gianluigi; Siepmann, J. Ilja

    2013-01-01

    We present a fast and simple protocol to obtain moderate-resolution backbone structures of helical proteins. This approach utilizes a combination of sparse backbone NMR data (residual dipolar couplings and paramagnetic relaxation enhancements) or EPR data with a residue-based force field and Monte Carlo/simulated annealing protocol to explore the folding energy landscape of helical proteins. By using only backbone NMR data, which are relatively easy to collect and analyze, and strategically placed spin relaxation probes, we show that it is possible to obtain protein structures with correct helical topology and backbone RMS deviations well below 4 Å. This approach offers promising alternatives for the structural determination of proteins in which nuclear Overhauser effect data are difficult or impossible to assign and produces initial models that will speed up the high-resolution structure determination by NMR spectroscopy. PMID:24639619

  8. Reinforced communication and social navigation: Remember your friends and remember yourself

    NASA Astrophysics Data System (ADS)

    Mirshahvalad, A.; Rosvall, M.

    2011-09-01

    In social systems, people communicate with each other and form groups based on their interests. The pattern of interactions, the network, and the ideas that flow on the network naturally evolve together. Researchers use simple models to capture the feedback between changing network patterns and ideas on the network, but little is understood about the role of past events in the feedback process. Here, we introduce a simple agent-based model to study the coupling between people's ideas and social networks, and to better understand the role of history in dynamic social networks. We measure how information about ideas can be recovered from information about network structure and, the other way around, how information about network structure can be recovered from information about ideas. We find that it is, in general, easier to recover ideas from the network structure than vice versa.

  9. Incorporating User Input in Template-Based Segmentation

    PubMed Central

    Vidal, Camille; Beggs, Dale; Younes, Laurent; Jain, Sanjay K.; Jedynak, Bruno

    2015-01-01

    We present a simple and elegant method to incorporate user input in a template-based segmentation method for diseased organs. The user provides a partial segmentation of the organ of interest, which is used to guide the template towards its target. The user also highlights some elements of the background that should be excluded from the final segmentation. We derive by likelihood maximization a registration algorithm from a simple statistical image model in which the user labels are modeled as Bernoulli random variables. The resulting registration algorithm minimizes the sum of square differences between the binary template and the user labels, while preventing the template from shrinking, and penalizing for the inclusion of background elements into the final segmentation. We assess the performance of the proposed algorithm on synthetic images in which the amount of user annotation is controlled. We demonstrate our algorithm on the segmentation of the lungs of Mycobacterium tuberculosis infected mice from μCT images. PMID:26146532

  10. A simple scaling approach to produce climate scenarios of local precipitation extremes for the Netherlands

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Attema, Jisk

    2015-08-01

    Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new, unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short-term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, while also taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
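The dew-point scaling rule described above can be sketched in one line; the 7%/°C rate below is an assumption (roughly the Clausius-Clapeyron rate), whereas the paper's actual scaling constants come from expert weighting of several sources:

```python
def scaled_extreme_change(delta_td_c, rate_per_degree=0.07):
    """Change factor for precipitation extremes under dew-point scaling.

    Assumes extremes grow by `rate_per_degree` (7%/degC here, an assumed
    Clausius-Clapeyron-like value) per degree of 2 m dew point rise
    `delta_td_c`, compounded exponentially.
    """
    return (1.0 + rate_per_degree) ** delta_td_c
```

For a scenario with 2 °C of dew-point warming this gives a factor of about 1.14, i.e. roughly 14% more intense extremes.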

  11. Interpretation with a Donnan-based concept of the influence of simple salt concentration on the apparent binding of divalent ions to the polyelectrolytes polystyrenesulfonate and dextran sulfate

    USGS Publications Warehouse

    Marinsky, J.A.; Baldwin, Robert F.; Reddy, M.M.

    1985-01-01

    It has been shown that the apparent enhancement of divalent metal ion binding to polyions such as polystyrenesulfonate (PSS) and dextran sulfate (DS) by decreasing the ionic strength of these mixed counterion systems (M2+, M+, X-, polyion) can be anticipated with the Donnan-based model developed by one of us (J.A.M.). Ion-exchange distribution methods have been employed to measure the removal by the polyion of trace divalent metal ion from simple salt (NaClO4)-polyion (NaPSS) mixtures. These data and polyion interaction data published earlier by Mattai and Kwak for the mixed counterion systems MgCl2-LiCl-DS and MgCl2-CsCl-DS have been shown to be amenable to rather precise analysis by this model. © 1985 American Chemical Society.

  12. Data requirements to model creep in 9Cr-1Mo-V steel

    NASA Technical Reports Server (NTRS)

    Swindeman, R. W.

    1988-01-01

    Models for creep behavior are helpful in predicting the response of components experiencing stress redistributions due to cyclic loads, and often the analyst would like information that correlates strain rate with history, assuming simple hardening rules such as those based on time or strain. Much progress has been made in the development of unified constitutive equations that include both hardening and softening through the introduction of state variables whose evolutions are history dependent. Although it is difficult to estimate specific data requirements for general application, there are several simple measurements that can be made in the course of creep testing and the results reported in databases. The issue is whether or not such data could be helpful in developing unified equations, and, if so, how such data should be reported. Data produced on a martensitic 9Cr-1Mo-V-Nb steel were examined with these issues in mind.

  13. Using simple agent-based modeling to inform and enhance neighborhood walkability

    PubMed Central

    2013-01-01

    Background Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory ‘what-if’ scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. Methods This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. Results The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. 
These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. Conclusions The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) ‘learning’ and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume). PMID:24330721
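The catchment-area-versus-circular-buffer ratio mentioned above can be computed with a very small network model; the grid, start node, and step budget below are hypothetical stand-ins for a street network around a transit stop:

```python
from collections import deque

def catchment_ratio(grid, start, max_steps):
    """Pedestrian catchment ratio on a toy street grid.

    `grid` marks walkable cells (1) and blocked cells (0); `start` must be
    walkable. The ratio compares cells reachable within `max_steps` network
    moves (BFS) against the circular (Euclidean) buffer of the same radius,
    as in walkable-catchment ('pedshed') analysis.
    """
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:                      # breadth-first search over the network
        (r, c), d = frontier.popleft()
        if d == max_steps:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    # circular buffer: every cell within Euclidean radius, network or not
    buffer_cells = sum(
        1 for r in range(rows) for c in range(cols)
        if (r - r0) ** 2 + (c - c0) ** 2 <= max_steps ** 2
    )
    return len(seen) / buffer_cells
```

Blocking a street segment immediately lowers the ratio, which is the 'what-if' scenario testing a circular buffer cannot do.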

  14. Multiannual forecasting of seasonal influenza dynamics reveals climatic and evolutionary drivers.

    PubMed

    Axelsen, Jacob Bock; Yaari, Rami; Grenfell, Bryan T; Stone, Lewi

    2014-07-01

    Human influenza occurs annually in most temperate climatic zones of the world, with epidemics peaking in the cold winter months. Considerable debate surrounds the relative role of epidemic dynamics, viral evolution, and climatic drivers in driving year-to-year variability of outbreaks. The ultimate test of understanding is prediction; however, existing influenza models rarely forecast beyond a single year at best. Here, we use a simple epidemiological model to reveal multiannual predictability based on high-quality influenza surveillance data for Israel; the model fit is corroborated by simple metapopulation comparisons within Israel. Successful forecasts are driven by temperature, humidity, antigenic drift, and immunity loss. Essentially, influenza dynamics are a balance between large perturbations following significant antigenic jumps, interspersed with nonlinear epidemic dynamics tuned by climatic forcing.
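The balance the abstract describes, epidemic dynamics tuned by climatic forcing plus immunity loss, can be caricatured with a discrete-time SIRS model; the sinusoidal forcing and all rate constants below are illustrative assumptions, not the fitted Israeli-surveillance model:

```python
import math

def run_sirs(days=5 * 365, beta0=0.30, amp=0.15, gamma=1 / 4.0, xi=1 / 365.0, seed_i=1e-4):
    """Discrete-time SIRS sketch with sinusoidal climate forcing.

    beta(t) = beta0 * (1 + amp * cos(2*pi*t/365)) stands in for
    temperature/humidity forcing; immunity loss at rate xi stands in for
    antigenic drift. Returns the days on which prevalence peaks.
    """
    s, i, r = 1.0 - seed_i, seed_i, 0.0
    peaks = []
    prev_i, rising = i, False
    for t in range(days):
        beta = beta0 * (1.0 + amp * math.cos(2.0 * math.pi * t / 365.0))
        new_inf = beta * s * i          # transmission
        new_rec = gamma * i             # recovery
        new_sus = xi * r                # waning immunity (drift proxy)
        s += new_sus - new_inf
        i += new_inf - new_rec
        r += new_rec - new_sus
        if i < prev_i and rising:
            peaks.append(t)             # local maximum of prevalence: an epidemic peak
        rising = i > prev_i
        prev_i = i
    return peaks
```

Even this caricature produces recurring winter-timed peaks, the kind of multiannual regularity the forecasting model exploits.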

  15. Deterministic diffusion in flower-shaped billiards.

    PubMed

    Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre

    2002-08-01

    We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
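The simple random walk baseline that the paper's schemes generalize predicts <x²> = 2Dt; a quick Monte Carlo check of that relation (a generic uncorrelated walk, not the billiard itself):

```python
import random

def diffusion_coefficient(steps=200, walkers=2000, seed=7):
    """Estimate D from the mean squared displacement of a 1-D random walk.

    For a unit-step, unit-time uncorrelated walk, <x^2> = 2*D*t gives
    D = 1/2 exactly; memory effects in a billiard shift D away from such
    random-walk values, which is the paper's point.
    """
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))    # uncorrelated unit steps
        msd += x * x
    msd /= walkers
    return msd / (2.0 * steps)
```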

  16. Computational Models of Laryngeal Aerodynamics: Potentials and Numerical Costs.

    PubMed

    Sadeghi, Hossein; Kniesburges, Stefan; Kaltenbacher, Manfred; Schützenberger, Anne; Döllinger, Michael

    2018-02-07

    Human phonation is based on the interaction between tracheal airflow and laryngeal dynamics. This fluid-structure interaction is based on the energy exchange between airflow and vocal folds. Major challenges in analyzing the phonatory process in vivo are the small dimensions and the poor accessibility of the region of interest. For improved analysis of the phonatory process, numerical simulations of the airflow and the vocal fold dynamics have been suggested. Even though most of the models reproduced the phonatory process fairly well, the development of comprehensive larynx models is still a subject of research. In the context of clinical application, physiological accuracy and computational model efficiency are of great interest. In this study, a simple numerical larynx model is introduced that incorporates the laryngeal fluid flow. It is based on a synthetic experimental model with silicone vocal folds. The degree of realism was successively increased in separate computational models, and each model was simulated for 10 oscillation cycles. Results show that relevant features of the laryngeal flow field, such as glottal jet deflection, develop even when applying rather simple static models with oscillating flow rates. Including further phonatory components such as vocal fold motion, mucosal wave propagation, and ventricular folds, the simulations show key phonatory features such as intraglottal flow separation and an increased flow rate in the presence of ventricular folds. The simulation time on 100 CPU cores ranged between 25 and 290 hours, currently restricting the clinical application of these models. Nevertheless, the results show the high potential of numerical simulations for a better understanding of the phonatory process. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  17. Towards a Simple and Efficient Web Search Framework

    DTIC Science & Technology

    2014-11-01

    any useful information about the various aspects of a topic. For example, for the query "raspberry pi", it covers topics such as "what is raspberry pi" ... topics generated by the LDA topic model for query "raspberry pi". One simple explanation is that web texts are too noisy and unfocused for the LDA process ... "making a raspberry pi". However, the topics generated based on the 10 top ranked documents do not make much sense to us in terms of their keywords

  18. Influence of parameter changes to stability behavior of rotors

    NASA Technical Reports Server (NTRS)

    Fritzen, C. P.; Nordmann, R.

    1982-01-01

    The occurrence of unstable vibrations in rotating machinery requires corrective measures for improvement of the stability behavior. A simple approximate method is presented to determine the influence of parameter changes on the stability behavior. The method is based on an expansion of the eigenvalues in terms of system parameters. Influence coefficients show the effect of structural modifications. The method was first applied to simple nonconservative rotor models and then verified for an unsymmetric rotor on a test rig.

  19. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    PubMed

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how best to invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were the published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. A limitation of the study is that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
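With a purely linear model and a fixed budget, the optimal allocation reduces to funding programs in order of benefit-cost ratio until the budget is exhausted. The sketch below uses made-up costs and QALY rates (not the paper's estimates) and ignores the paper's diseconomies of scale and subadditive benefits:

```python
def allocate(budget, programs):
    """Greedy allocation for a linear investment model.

    `programs` is a list of (name, cost_cap, qalys_per_dollar). With linear
    benefits, spending in descending order of qalys_per_dollar up to each
    program's cap is optimal (a fractional-knapsack argument).
    """
    plan = []
    remaining = budget
    for name, cap, rate in sorted(programs, key=lambda p: p[2], reverse=True):
        spend = min(cap, remaining)
        if spend > 0:
            plan.append((name, spend))
            remaining -= spend
    return plan

# Illustrative numbers only; ordered CBE > ART > PrEP to mirror the base case.
programs = [
    ("PrEP", 400.0, 0.5),
    ("CBE", 100.0, 3.0),
    ("ART", 300.0, 1.5),
]
```

With a budget of 500, CBE is fully funded first, then ART, with the remainder going to PrEP, mirroring the paper's qualitative ordering.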

  20. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    NASA Astrophysics Data System (ADS)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators is commonly led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) the short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when introducing a small degree of stochasticity in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework, and, as before, such synchronization disappears when introducing a small degree of stochasticity in the recurrence of earthquakes on a fault. 
Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of stochasticity may blur most of the deterministic time features, such as long-term trend and synchronization among nearby coupled faults.
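The blurring effect of recurrence noise on synchronization can be seen in a few lines: give each fault a quasi-periodic rupture cycle and measure how far apart their final rupture times drift. This toy omits the Coulomb coupling term entirely, and all numbers are illustrative:

```python
import random

def synchrony(noise_sd, faults=10, mean_period=100.0, cycles=50, seed=3):
    """Spread of final rupture times across quasi-periodic faults.

    `noise_sd` is the jitter on each recurrence interval. With noise_sd = 0
    all faults stay perfectly in phase (spread 0); any jitter accumulates
    over cycles and desynchronizes them, as in the simulator's findings.
    """
    rng = random.Random(seed)
    finals = []
    for _ in range(faults):
        t = 0.0
        for _ in range(cycles):
            # each cycle's length is the mean period plus Gaussian jitter
            t += max(0.0, rng.gauss(mean_period, noise_sd))
        finals.append(t)
    mean = sum(finals) / faults
    return (sum((f - mean) ** 2 for f in finals) / faults) ** 0.5
```

Because the jitter accumulates as a random walk, even a small per-cycle standard deviation produces a large spread after many cycles, which is why the deterministic synchronization disappears so quickly.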

  1. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with results obtained during the First ISLSCP Field Experiment on concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.

  2. Bioheat model evaluations of laser effects on tissues: role of water evaporation and diffusion

    NASA Astrophysics Data System (ADS)

    Nagulapally, Deepthi; Joshi, Ravi P.; Thomas, Robert J.

    2011-03-01

    A two-dimensional, time-dependent bioheat model is applied to evaluate changes in temperature and water content in tissues subjected to laser irradiation. Our approach takes account of liquid-to-vapor phase changes and a simple diffusive flow of water within the biotissue. An energy balance equation considers blood perfusion, metabolic heat generation, laser absorption, and water evaporation. The model also accounts for the water dependence of tissue properties (both thermal and optical), and variations in blood perfusion rates based on local tissue injury. Our calculations show that water diffusion would reduce the local temperature increases and hot spots in comparison to simple models that ignore the role of water in the overall thermal and mass transport. Also, the reduced suppression of perfusion rates due to tissue heating and damage with water diffusion affects the necrotic depth. Two-dimensional results for the dynamic temperature, water content, and damage distributions will be presented for skin simulations. It is argued that reduction in temperature gradients due to water diffusion would mitigate local refractive index variations, and hence influence the phenomenon of thermal lensing. Finally, simple quantitative evaluations of pressure increases within the tissue due to laser absorption are presented.

  3. Modeling of the merging of two colliding field reversed configuration plasmoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guanqiong; Wang, Xiaoguang; Li, Lulu

    2016-06-15

    The field reversed configuration (FRC) is one of the candidate plasma targets for magneto-inertial fusion, and a high temperature FRC can be formed by using collision-merging technology. Although the merging process and mechanism of the FRC are quite complicated, it is feasible to build a simple model to investigate the macroscopic equilibrium parameters, including the density, the temperature, and the separatrix volume, which may play an important role in the collision-merging process of the FRC. It is quite interesting that the estimates of the related results based on our simple model are in agreement with the simulation results of a two-dimensional magneto-hydrodynamic code (MFP-2D), which has been developed by our group over the last couple of years, while these results can qualitatively fit the results of C-2 experiments by the Tri Alpha Energy company. On the other hand, the simple model can be used to investigate how to increase the density of the merged FRC. It is found that the amplification of the density depends on the poloidal flux-increase factor, and that the temperature increases with the translation speed of the two plasmoids.

  4. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
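The percentile bootstrap mentioned above fits in a dozen lines; the data set and statistic below are made-up illustrations of the resampling-with-replacement idea:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar statistic.

    Resamples `data` with replacement `n_boot` times, recomputes `stat`
    on each resample, and reads the interval off the empirical percentiles.
    Distribution-free, as the review emphasizes.
    """
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)
```

The same resampled replicates can also drive a significance test by comparing an observed statistic against the bootstrap distribution.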

  5. Influence of pH on Drug Absorption from the Gastrointestinal Tract: A Simple Chemical Model

    NASA Astrophysics Data System (ADS)

    Hickman, Raymond J. S.; Neill, Jane

    1997-07-01

    A simple model of the gastrointestinal tract is obtained by placing ethyl acetate in contact with water at pH 2 and pH 8 in separate test tubes. The ethyl acetate corresponds to the lipid material lining the tract while the water corresponds to the aqueous contents of the stomach (pH 2) and intestine (pH 8). The compounds aspirin, paracetamol and 3-aminophenol are used as exemplars of acidic, neutral and basic drugs respectively to illustrate the influence which pH has on the distribution of each class of drug between the aqueous and organic phases of the model. The relative concentration of drug in the ethyl acetate is judged by applying microlitre-sized samples of ethyl acetate to a layer of fluorescent silica which, after evaporation of the ethyl acetate, is viewed under an ultraviolet lamp. Each of the three drugs, if present in the ethyl acetate, becomes visible as a dark spot on the silica layer. The observations made in the model system correspond well to the patterns of drug absorption from the gastrointestinal tract described in pharmacology texts and these observations are convincingly explained in terms of simple acid-base chemistry.
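The acid-base chemistry behind the demonstration is the Henderson-Hasselbalch relation: the ionized fraction of a drug, which stays in the aqueous phase, depends on the gap between pH and pKa. The sketch below uses aspirin (a weak acid, pKa ≈ 3.5) in the model's two compartments:

```python
def ionized_fraction(pka, ph, acidic=True):
    """Henderson-Hasselbalch: fraction of drug in the ionized form at a
    given pH. The neutral (un-ionized) form partitions into the lipid phase
    (ethyl acetate in the model); the ionized form stays aqueous.
    """
    ratio = 10.0 ** ((ph - pka) if acidic else (pka - ph))  # [ionized]/[neutral]
    return ratio / (1.0 + ratio)

# Aspirin in the model's two compartments:
stomach = ionized_fraction(3.5, 2.0, acidic=True)    # mostly neutral -> partitions into lipid
intestine = ionized_fraction(3.5, 8.0, acidic=True)  # mostly ionized -> stays aqueous
```

This reproduces the observation in the test-tube model: the acidic drug shows up in the ethyl acetate layer at pH 2 but not at pH 8.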

  6. Operator priming and generalization of practice in adults' simple arithmetic.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2016-04-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training (engineering and computer science students) would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again showed generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. © 2016 APA, all rights reserved.

  7. Simple Model of Mating Preference and Extinction Risk

    NASA Astrophysics Data System (ADS)

    PĘKALSKI, Andrzej

    We present a simple model of a population of individuals characterized by their genetic structure in the form of a double string of bits and the phenotype following from it. The population is living in an unchanging habitat preferring a certain type of phenotype (optimum). Individuals are unisex; however, a pair is necessary for breeding. An individual rejects a mate if the latter's phenotype contains too many bad, i.e. different from the optimum, genes in the same places as the individual's. We show that such a strategy, analogous to disassortative mating based on the major histocompatibility complex, avoiding inbreeding and incest, could be beneficial for the population and could reduce considerably the extinction risk, especially in small populations.
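The mate-rejection rule can be sketched directly on bit strings; the threshold value below is an illustrative assumption (the paper's actual tolerance parameter is not given in the abstract):

```python
def rejects(phenotype_a, phenotype_b, optimum, max_shared_bad=2):
    """Mate-choice rule sketched from the model: reject a partner whose
    phenotype has more than `max_shared_bad` genes that differ from the
    optimum at the same loci where one's own genes are also bad.

    Phenotypes and the optimum are equal-length strings of '0'/'1'.
    """
    shared_bad = sum(
        a != o and b != o                      # both bad at the same locus
        for a, b, o in zip(phenotype_a, phenotype_b, optimum)
    )
    return shared_bad > max_shared_bad
```

Counting only *shared* bad loci is what makes the rule disassortative: a partner with many bad genes is still acceptable as long as its defects fall at different loci, so offspring are likely to inherit at least one good allele per locus.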

  8. Stabilization and tracking control of X-Z inverted pendulum with sliding-mode control.

    PubMed

    Wang, Jia-Jun

    2012-11-01

    X-Z inverted pendulum is a new kind of inverted pendulum which can move with the combination of the vertical and horizontal forces. Through a new transformation, the X-Z inverted pendulum is decomposed into three simple models. Based on the simple models, sliding-mode control is applied to stabilization and tracking control of the inverted pendulum. The performance of the sliding mode control is compared with that of the PID control. Simulation results show that the design scheme of sliding-mode control is effective for the stabilization and tracking control of the X-Z inverted pendulum. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Neurogenesis Interferes with the Retrieval of Remote Memories: Forgetting in Neurocomputational Terms

    ERIC Educational Resources Information Center

    Weisz, Victoria I.; Argibay, Pablo F.

    2012-01-01

    In contrast to models and theories that relate adult neurogenesis with the processes of learning and memory, almost no solid hypotheses have been formulated that involve a possible neurocomputational influence of adult neurogenesis on forgetting. Based on data from a previous study that implemented a simple but complete model of the main…

  10. Can Neuroscience Help Us Do a Better Job of Teaching Music?

    ERIC Educational Resources Information Center

    Hodges, Donald A.

    2010-01-01

    We are just at the beginning stages of applying neuroscientific findings to music teaching. A simple model of the learning cycle based on neuroscience is Sense [right arrow] Integrate [right arrow] Act (sometimes modified as Act [right arrow] Sense [right arrow] Integrate). Additional components can be added to the model, including such concepts…

  11. Model Experiment of Thermal Runaway Reactions Using the Aluminum-Hydrochloric Acid Reaction

    ERIC Educational Resources Information Center

    Kitabayashi, Suguru; Nakano, Masayoshi; Nishikawa, Kazuyuki; Koga, Nobuyoshi

    2016-01-01

    A laboratory exercise for the education of students about thermal runaway reactions based on the reaction between aluminum and hydrochloric acid as a model reaction is proposed. In the introductory part of the exercise, the induction period and subsequent thermal runaway behavior are evaluated via a simple observation of hydrogen gas evolution and…

  12. Helping Students Assess the Relative Importance of Different Intermolecular Interactions

    ERIC Educational Resources Information Center

    Jasien, Paul G.

    2008-01-01

    A semi-quantitative model has been developed to estimate the relative effects of dispersion, dipole-dipole interactions, and H-bonding on the normal boiling points ("T[subscript b]") for a subset of simple organic systems. The model is based upon a statistical analysis using multiple linear regression on a series of straight-chain organic…

  13. Nonequilibrium Green's functions and atom-surface dynamics: Simple views from a simple model system

    NASA Astrophysics Data System (ADS)

    Boström, E.; Hopjan, M.; Kartsev, A.; Verdozzi, C.; Almbladh, C.-O.

    2016-03-01

    We employ Non-equilibrium Green's functions (NEGF) to describe the real-time dynamics of an adsorbate-surface model system exposed to ultrafast laser pulses. For a finite number of electronic orbitals, the system is solved exactly and within different levels of approximation. Specifically i) the full exact quantum mechanical solution for electron and nuclear degrees of freedom is used to benchmark ii) the Ehrenfest approximation (EA) for the nuclei, with the electron dynamics still treated exactly. Then, using the EA, electronic correlations are treated with NEGF within iii) 2nd Born and with iv) a recently introduced hybrid scheme, which mixes 2nd Born self-energies with non-perturbative, local exchange-correlation potentials of Density Functional Theory (DFT). Finally, the effect of a semi-infinite substrate is considered: we observe that a macroscopic number of de-excitation channels can hinder desorption. While very preliminary in character and based on a simple and rather specific model system, our results clearly illustrate the large potential of NEGF to investigate atomic desorption, and more generally, the non-equilibrium dynamics of material surfaces subject to ultrafast laser fields.

  14. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation which is based on basic spreadsheet analysis plus an easy macro code can be used by students to understand how these systems work and how the parameters influence the reactions of the system on an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur.
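The step-by-step spreadsheet calculation the paper describes can be mimicked with a single relay element (the building block of Preisach-type hysteresis models); the switching thresholds and sweep range below are illustrative:

```python
def hysteresis_loop(h_max=5.0, steps=50, h_up=1.0, h_down=-1.0):
    """Single-relay hysteresis sketch (a Preisach-style building block).

    The state flips to +1 when the field exceeds h_up and to -1 below
    h_down, and otherwise remembers its previous value; sweeping the field
    up and back down therefore traces an open loop, step by step, exactly
    as in a spreadsheet row-by-row calculation.
    """
    def sweep(h_values, state):
        out = []
        for h in h_values:
            if h >= h_up:
                state = 1
            elif h <= h_down:
                state = -1
            # between the thresholds the state is remembered: the memory
            # that opens the loop
            out.append((h, state))
        return out, state

    up = [-h_max + 2.0 * h_max * k / steps for k in range(steps + 1)]
    down = list(reversed(up))
    branch_up, state = sweep(up, -1)       # ascending field branch
    branch_down, _ = sweep(down, state)    # descending field branch
    return branch_up, branch_down
```

At zero field the two branches disagree (-1 going up, +1 coming down), which is the loop opening; summing many such relays with distributed thresholds yields smooth, realistic loops.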

  15. A simplified gis-based model for large wood recruitment and connectivity in mountain basins

    NASA Astrophysics Data System (ADS)

    Franceschi, Silvia; Antonello, Andrea; Vela, Ana Lucia; Cavalli, Marco; Crema, Stefano; Comiti, Francesco; Tonon, Giustino

    2015-04-01

    During the last 50 years, the decline of the rural and forest economy in the Alps and the depopulation of mountain areas have caused the progressive abandonment of the land in general, and of riparian zones in particular, with a consequent expansion of vegetation cover. On the one hand, the wood increases the availability of organic matter and has positive effects on mountain river systems. However, during flooding events, large wood that reaches the stream can clog bridges, increasing flood hazard. Evaluating the availability of large wood during flooding events is still a challenge. Models exist that simulate the propagation of logs downstream, but the evaluation of the trees that can reach the stream is still done using simplified GIS procedures. These procedures are the base for our research, which will include LiDAR-derived information on vegetation to evaluate large wood recruitment during extreme events. Within the last Google Summer of Code (2014) we developed a set of tools to evaluate large wood recruitment and propagation along the channel network, based on a simplified methodology for monitoring and modeling large wood recruitment and transport in mountain basins implemented by Lucía et al. (2014). These tools are integrated in the JGrassTools project as a dedicated section of the Hydro-Geomorphology library. The section LWRecruitment contains 10 simple modules that allow the user to start from very simple information related to geomorphology, flooding areas and vegetation cover and obtain a map of the most probable critical sections on the streams. The tools cover the two main aspects of the interaction of large wood with rivers: the recruitment mechanisms and the downstream propagation. While the propagation tool is very simple and does not consider the hydrodynamics of the problem, the recruitment algorithms are more specific and consider the influence of hillslope stability and the flooding extension. The modules are available for download at www.jgrasstools.org. A simple and easy-to-use graphical interface to run the models is available at https://github.com/moovida/STAGE/releases.

  16. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
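    The modeling distinction the measurements point to can be illustrated numerically (our own sketch, not the paper's data): exponential holding times have a coefficient of variation of exactly 1, whereas Weibull holding times with shape k < 1, a common fit to measured error/recovery durations, are far more dispersed, which is what forces a semi-Markov rather than a Markov model.

    ```python
    # Compare the dispersion of non-exponential (Weibull, shape < 1) holding
    # times against the exponential benchmark CV = 1. All parameters invented.
    import math
    import random

    def weibull_holding_times(n, shape=0.7, scale=1.0, seed=42):
        """Draw n Weibull holding times by inverse-transform sampling."""
        rng = random.Random(seed)
        return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
                for _ in range(n)]

    def coeff_of_variation(xs):
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        return math.sqrt(var) / mean

    times = weibull_holding_times(20000)
    cv = coeff_of_variation(times)   # well above 1 for shape < 1
    ```

    A CV far from 1 is a quick diagnostic that a state's holding time is not exponential, so a semi-Markov description is needed.
    
    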

  17. Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas

    NASA Astrophysics Data System (ADS)

    Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.

    2003-04-01

    Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain, which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy-of-mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.

  18. Jet engine performance enhancement through use of a wave-rotor topping cycle

    NASA Technical Reports Server (NTRS)

    Wilson, Jack; Paxson, Daniel E.

    1993-01-01

    A simple model is used to calculate the thermal efficiency and specific power of simple jet engines and jet engines with a wave-rotor topping cycle. The performance of the wave rotor is based on measurements from a previous experiment. Applied to the case of an aircraft flying at Mach 0.8, the calculations show that an engine with a wave rotor topping cycle may have gains in thermal efficiency of approximately 1 to 2 percent and gains in specific power of approximately 10 to 16 percent over a simple jet engine with the same overall compression ratio. Even greater gains are possible if the wave rotor's performance can be improved.
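    The flavor of such a calculation can be sketched with the ideal Brayton-cycle relation (a textbook simplification, not the paper's wave-rotor model; the 20% pressure gain below is a made-up number):

    ```python
    def brayton_efficiency(pressure_ratio, gamma=1.4):
        """Ideal Brayton-cycle thermal efficiency: 1 - r^(-(gamma-1)/gamma)."""
        return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

    # A pressure-gain topping stage acts like a higher effective overall
    # compression ratio, so the ideal thermal efficiency rises.
    base = brayton_efficiency(20.0)          # simple engine
    topped = brayton_efficiency(20.0 * 1.2)  # hypothetical 20% pressure gain
    gain = topped - base
    ```

    The single-percentage-point-scale efficiency gain this toy calculation produces is consistent in spirit with the 1 to 2 percent reported above, though the paper's model also accounts for measured wave-rotor losses.
    
    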

  19. Preliminary study of the effect of the turbulent flow field around complex surfaces on their acoustic characteristics

    NASA Technical Reports Server (NTRS)

    Olsen, W. A.; Boldman, D.

    1978-01-01

    Fairly extensive measurements have been conducted of the turbulent flow around various surfaces as a basis for a study of the acoustic characteristics involved. In the experiments the flow from a nozzle was directed upon various two-dimensional surface configurations such as the three-flap model. A turbulent flow field description is given and an estimate of the acoustic characteristics is provided. The developed equations are based upon fundamental theories for simple configurations having simple flows. Qualitative estimates are obtained regarding the radiation pattern and the velocity power law. The effects of geometry and turbulent flow distribution on the acoustic emission from simple configurations are discussed.

  20. Interaction dynamics of multiple mobile robots with simple navigation strategies

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
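    A particle model of this kind can be sketched as follows (our own minimal construction, using an isotropic sensing radius instead of a visibility cone; all gains and positions are arbitrary):

    ```python
    # Each robot steers toward its home position (homing) and adds a simple
    # repulsive term when another robot comes within its sensing radius
    # (collision avoidance).

    def step(positions, homes, speed=0.05, avoid_radius=0.5):
        new = []
        for i, (x, y) in enumerate(positions):
            hx, hy = homes[i]
            dx, dy = hx - x, hy - y
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            vx, vy = speed * dx / d, speed * dy / d       # homing term
            for j, (ox, oy) in enumerate(positions):
                if j == i:
                    continue
                rx, ry = x - ox, y - oy
                r = (rx * rx + ry * ry) ** 0.5
                if 0 < r < avoid_radius:                   # avoidance term
                    vx += speed * rx / r
                    vy += speed * ry / r
            new.append((x + vx, y + vy))
        return new

    homes = [(1.0, 0.0), (-1.0, 0.0)]
    robots = [(-1.0, 0.1), (1.0, -0.1)]   # must cross paths to reach home
    for _ in range(200):
        robots = step(robots, homes)
    ```

    With these gains the two robots skirt around each other near the origin and then settle close to their homes, a tiny instance of the global interaction dynamics analyzed in the article.
    
    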

  1. Nonlinear system modeling based on bilinear Laguerre orthonormal bases.

    PubMed

    Garna, Tarek; Bouzrara, Kais; Ragot, José; Messaoud, Hassani

    2013-05-01

    This paper proposes a new representation of the discrete bilinear model by developing its coefficients associated with the input, the output and the crossed product on three independent Laguerre orthonormal bases. Compared to the classical bilinear model, the resulting model, entitled the bilinear-Laguerre model, ensures a significant reduction in the number of parameters as well as a simple recursive representation. However, such a reduction is still constrained by an optimal choice of the Laguerre pole characterizing each basis. To this end, we develop a pole optimization algorithm which constitutes an extension of that proposed by Tanguy et al. The bilinear-Laguerre model as well as the proposed pole optimization algorithm are illustrated and tested in numerical simulations and validated on a Continuous Stirred Tank Reactor (CSTR) system. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
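    The structure of a discrete Laguerre orthonormal basis is easy to reproduce (a generic discrete-time Laguerre ladder, not the authors' bilinear expansion; the pole a = 0.5 and basis length are arbitrary): a first-order low-pass front end feeds a cascade of identical all-pass sections, all sharing the same pole.

    ```python
    # Generate discrete Laguerre basis functions and check their orthonormality.

    def lowpass(x, a):
        """y[n] = sqrt(1 - a^2) x[n] + a y[n-1]"""
        gain = (1.0 - a * a) ** 0.5
        y, prev = [], 0.0
        for v in x:
            prev = gain * v + a * prev
            y.append(prev)
        return y

    def allpass(x, a):
        """y[n] = -a x[n] + x[n-1] + a y[n-1], i.e. (z^-1 - a)/(1 - a z^-1)."""
        y, xprev, yprev = [], 0.0, 0.0
        for v in x:
            out = -a * v + xprev + a * yprev
            y.append(out)
            xprev, yprev = v, out
        return y

    def laguerre_basis(order, a=0.5, length=200):
        impulse = [1.0] + [0.0] * (length - 1)
        basis, sig = [], lowpass(impulse, a)
        for _ in range(order):
            basis.append(sig)
            sig = allpass(sig, a)
        return basis

    basis = laguerre_basis(4)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    ```

    The inner products dot(basis[i], basis[j]) are approximately 1 for i = j and 0 otherwise; this orthonormality is what makes expansions on such bases well conditioned and parameter-efficient.
    
    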

  2. Investigating the Effect of Damage Progression Model Choice on Prognostics Performance

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Narasimhan, Sriram; Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2011-01-01

    The success of model-based approaches to systems health management depends largely on the quality of the underlying models. In model-based prognostics, it is especially the quality of the damage progression models, i.e., the models describing how damage evolves as the system operates, that determines the accuracy and precision of remaining useful life predictions. Several common forms of these models are generally assumed in the literature, but are often not supported by physical evidence or physics-based analysis. In this paper, using a centrifugal pump as a case study, we develop different damage progression models. In simulation, we investigate how model changes influence prognostics performance. Results demonstrate that, in some cases, simple damage progression models are sufficient. But, in general, the results show a clear need for damage progression models that are accurate over long time horizons under varied loading conditions.

  3. Nonlinear Friction Compensation of Ball Screw Driven Stage Based on Variable Natural Length Spring Model and Disturbance Observer

    NASA Astrophysics Data System (ADS)

    Asaumi, Hiroyoshi; Fujimoto, Hiroshi

    Ball screw driven stages are used in industrial equipment such as machine tools and semiconductor manufacturing systems. Fast and precise positioning is necessary to enhance the productivity and microfabrication capability of such systems. The rolling friction of the ball screw driven stage deteriorates the positioning performance, so a control system based on a friction model is necessary. In this paper, we first propose the variable natural length spring (VNLS) model as the friction model. The VNLS model is simple and easy to implement as a friction controller. Next, we propose the multi variable natural length spring (MVNLS) model, which can represent the friction characteristic of the stage precisely. Moreover, a control system based on the MVNLS model and a disturbance observer is proposed. Finally, simulation and experimental results show the advantages of the proposed method.

  4. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does nothing to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
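    The linking idea reduces to a key lookup; a toy sketch in Python (event names, rates, and stress factors are all invented for illustration, not real reliability data):

    ```python
    # A unique metadata key ties each Basic Event to its data source and to the
    # manipulation (here, a stress factor) applied before use in the model.

    data_sources = {
        "DS-001": {"source": "vendor reliability handbook", "rate": 1.2e-6},
        "DS-002": {"source": "flight experience",           "rate": 4.0e-7},
    }

    basic_events = [
        {"event": "valve_fails_to_open", "key": "DS-001", "stress_factor": 2.0},
        {"event": "sensor_drift",        "key": "DS-002", "stress_factor": 1.0},
    ]

    def effective_rate(event):
        """Look up the linked source and apply the event's manipulation."""
        base = data_sources[event["key"]]["rate"]
        return base * event["stress_factor"]
    ```

    Because every Basic Event carries its key, any rate in the model can be traced back to its source, which is the traceability argument the abstract closes on.
    
    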

  5. A comparison of simple global kinetic models for coal devolatilization with the CPD model

    DOE PAGES

    Richards, Andrew P.; Fletcher, Thomas H.

    2016-08-01

    Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10^3 to 10^6 K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
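    The simplest member of this model family, a single first-order reaction, can be sketched as follows (Euler integration at constant heating rate; the rate constants and ultimate yield are invented for illustration, not fitted values from the paper):

    ```python
    # One-step devolatilization: dV/dt = k (V* - V), k = A exp(-E/RT),
    # integrated while the particle heats linearly from T0 to T_final.
    import math

    def one_step_yield(heating_rate, A=2.0e5, E=1.0e5, v_ult=0.5,
                       T0=300.0, T_final=1600.0, dt=1e-4):
        """Volatiles yield V at T_final for a given heating rate (K/s)."""
        R = 8.314          # J/(mol K)
        v, t, T = 0.0, 0.0, T0
        while T < T_final:
            k = A * math.exp(-E / (R * T))
            v += dt * k * (v_ult - v)
            t += dt
            T = T0 + heating_rate * t
        return v

    slow = one_step_yield(1.0e3)   # K/s
    fast = one_step_yield(1.0e5)
    ```

    The sketch reproduces the qualitative heating-rate sensitivity that motivates the comparison: slower heating leaves more time at temperature, so the yield reached by 1600 K is higher.
    
    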

  6. Inductive Reasoning about Causally Transmitted Properties

    ERIC Educational Resources Information Center

    Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D.; Tenenbaum, Joshua B.

    2008-01-01

    Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates'…

  7. Calculation of density of states for modeling photoemission using method of moments

    NASA Astrophysics Data System (ADS)

    Finkenstadt, Daniel; Lambrakos, Samuel G.; Jensen, Kevin L.; Shabaev, Andrew; Moody, Nathan A.

    2017-09-01

    Modeling photoemission using the Moments Approach (akin to Spicer's "Three Step Model") is often presumed to follow simple models for the prediction of two critical properties of photocathodes: the yield or "Quantum Efficiency" (QE), and the intrinsic spreading of the beam or "emittance" ε_n,rms. The simple models, however, tend to obscure properties of electrons in materials, the understanding of which is necessary for a proper prediction of a semiconductor's or metal's QE and ε_n,rms. This structure is characterized by localized resonance features as well as a universal trend at high energy. Presented in this study is a prototype analysis concerning the density of states (DOS) factor D(E) for Copper in bulk to replace the simple three-dimensional free-electron form D(E) = (m/π²ℏ³)√(2mE) currently used in the Moments Approach. This analysis demonstrates that excited state spectra of atoms, molecules and solids based on density-functional theory can be adapted as useful information for practical applications, as well as providing theoretical interpretation of density-of-states structure, e.g., qualitatively good descriptions of optical transitions in matter, in addition to DFT's utility in providing the optical constants and material parameters also required in the Moments Approach.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellors, R J

    The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models is presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.

  9. Foreshock and aftershocks in simple earthquake models.

    PubMed

    Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R

    2015-02-27

    Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
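    The core loop of such lattice models, uniform loading until the weakest site fails, followed by stress redistribution that can cascade, with a fraction of stronger asperity cells, can be sketched in one dimension (a toy reduction of the paper's 2-D long-range model; all parameter values are invented):

    ```python
    import random

    def run_ofc(n=50, alpha=0.2, asperity_every=10, events=300, seed=7):
        """1-D OFC-style automaton with asperities; returns each event's size."""
        rng = random.Random(seed)
        # asperity cells are much stronger than ordinary lattice sites
        thresh = [3.0 if i % asperity_every == 0 else 1.0 for i in range(n)]
        stress = [rng.uniform(0.0, 0.5) for _ in range(n)]
        sizes = []
        for _ in range(events):
            # load uniformly until the weakest site just reaches its threshold
            i_min = min(range(n), key=lambda i: thresh[i] - stress[i])
            gap = thresh[i_min] - stress[i_min]
            stress = [s + gap for s in stress]
            stress[i_min] = thresh[i_min]   # guard against float rounding
            size = 0
            unstable = [i for i in range(n) if stress[i] >= thresh[i]]
            while unstable:
                i = unstable.pop()
                if stress[i] < thresh[i]:
                    continue
                size += 1
                released = stress[i]
                stress[i] = 0.0
                for j in (i - 1, i + 1):        # pass a fraction to neighbors
                    if 0 <= j < n:
                        stress[j] += alpha * released
                        if stress[j] >= thresh[j]:
                            unstable.append(j)
            sizes.append(size)
        return sizes

    sizes = run_ofc()
    ```

    When an asperity cell finally fails it releases several times the ordinary stress, kicking its neighbors hard; this is the mechanism by which the stronger sites induce the clustered sequences described above.
    
    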

  10. Model validation of simple-graph representations of metabolism

    PubMed Central

    Holme, Petter

    2009-01-01

    The large-scale properties of chemical reaction systems, such as metabolism, can be studied with graph-based methods. To do this, one needs to reduce the information, lists of chemical reactions, available in databases. Even for the simplest type of graph representation, this reduction can be done in several ways. We investigate different simple network representations by testing how well they encode information about one biologically important network structure—network modularity (the propensity for edges to be clustered into dense groups that are sparsely connected between each other). To achieve this goal, we design a model of reaction systems where network modularity can be controlled and measure how well the reduction to simple graphs captures the modular structure of the model reaction system. We find that the network types that best capture the modular structure of the reaction system are substrate–product networks (where substrates are linked to products of a reaction) and substance networks (with edges between all substances participating in a reaction). Furthermore, we argue that the proposed model for reaction systems with tunable clustering is a general framework for studies of how reaction systems are affected by modularity. To this end, we investigate statistical properties of the model and find, among other things, that it recreates correlations between degree and mass of the molecules. PMID:19158012
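    The two best-performing representations differ only in which pairs of substances become edges; a sketch with toy reactions (invented, not drawn from a metabolic database):

    ```python
    # Each reaction is a (substrates, products) pair. A substrate-product
    # network links each substrate to each product; a substance network links
    # all pairs of substances participating in the same reaction.

    reactions = [
        ({"glucose", "atp"}, {"g6p", "adp"}),   # substrates, products
        ({"g6p"}, {"f6p"}),
    ]

    def substrate_product_edges(rxns):
        edges = set()
        for subs, prods in rxns:
            for s in subs:
                for p in prods:
                    edges.add(frozenset((s, p)))
        return edges

    def substance_edges(rxns):
        edges = set()
        for subs, prods in rxns:
            species = sorted(subs | prods)
            for i in range(len(species)):
                for j in range(i + 1, len(species)):
                    edges.add(frozenset((species[i], species[j])))
        return edges
    ```

    Note the difference: glucose and atp are never linked in the substrate-product network (both are substrates of the same reaction) but are linked in the substance network; which reduction to use is exactly the modeling choice the paper evaluates.
    
    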

  11. A Simplified Model for Detonation Based Pressure-Gain Combustors

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2010-01-01

    A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.

  12. Numerical techniques for the solution of the compressible Navier-Stokes equations and implementation of turbulence models. [separated turbulent boundary layer flow problems

    NASA Technical Reports Server (NTRS)

    Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.

    1975-01-01

    The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.

  13. A Simple Engineering Analysis of Solar Particle Event High Energy Tails and Their Impact on Vehicle Design

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C., Jr.; Walker, Steven A.; Clowdsley, Martha S.

    2016-01-01

    The mathematical models for Solar Particle Event (SPE) high energy tails are constructed with several different algorithms. Since limited measured data exist above energies around 400 MeV, this paper arbitrarily defines the high energy tail as any proton with an energy above 400 MeV. In order to better understand the importance of accurately modeling the high energy tail for SPE spectra, the contribution to astronaut whole body effective dose equivalent of the high energy portions of three different SPE models has been evaluated. To ensure completeness of this analysis, simple and complex geometries were used. This analysis showed that the high energy tail of certain SPEs can be relevant to astronaut exposure and hence safety. Therefore, models of high energy tails for SPEs should be well analyzed and based on data if possible.

  14. Data-driven outbreak forecasting with a simple nonlinear growth model.

    PubMed

    Lega, Joceline; Brown, Heidi E

    2016-12-01

    Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
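    The mathematical property exploited can be sketched: for logistic-like growth, incidence plotted against cumulative cases falls on a parabola I ≈ rC(1 - C/K), so a quadratic least-squares fit yields the final size K without any transmission parameters (our own minimal reconstruction on synthetic data, not the EpiGro code; r and K below are invented):

    ```python
    # Generate a logistic cumulative-case curve, then recover (r, K) from a
    # quadratic fit of incidence vs. cumulative cases via normal equations.

    def logistic_series(r=0.2, K=1000.0, c0=5.0, n=100):
        """Cumulative cases following discrete logistic growth."""
        c = [c0]
        for _ in range(n):
            c.append(c[-1] + r * c[-1] * (1.0 - c[-1] / K))
        return c

    def fit_outbreak(cum):
        """Fit incidence I = a*C + b*C^2; return (r, K) = (a, -a/b)."""
        pairs = [(cum[i], cum[i + 1] - cum[i]) for i in range(len(cum) - 1)]
        S1 = sum(c ** 2 for c, _ in pairs)
        S2 = sum(c ** 3 for c, _ in pairs)
        S3 = sum(c ** 4 for c, _ in pairs)
        T1 = sum(inc * c for c, inc in pairs)
        T2 = sum(inc * c ** 2 for c, inc in pairs)
        det = S1 * S3 - S2 * S2
        a = (T1 * S3 - T2 * S2) / det
        b = (S1 * T2 - S2 * T1) / det
        return a, -a / b

    cum = logistic_series()
    r_est, K_est = fit_outbreak(cum)
    ```

    On noiseless synthetic data the fit recovers r and K essentially exactly; the robustness to noise and short case-report series claimed above is what the paper establishes empirically.
    
    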

  15. An Information-theoretic Approach to Optimize JWST Observations and Retrievals of Transiting Exoplanet Atmospheres

    NASA Astrophysics Data System (ADS)

    Howe, Alex R.; Burrows, Adam; Deming, Drake

    2017-01-01

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.

  16. Simple animal models for amyotrophic lateral sclerosis drug discovery.

    PubMed

    Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre

    2016-08-01

    Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.

  17. Monte Carlo based statistical power analysis for mediation models: methods and software.

    PubMed

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
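    The Monte Carlo idea can be sketched in plain Python; for brevity this sketch tests each simulated data set with the Sobel statistic rather than the bootstrap the study advocates, and omits the direct effect of X on Y (all effect sizes are invented):

    ```python
    # Monte Carlo power estimate for a simple mediation model X -> M -> Y:
    # simulate many data sets, test a*b in each, report the rejection rate.
    import random

    def slope_and_se(x, y):
        """OLS slope of y on x with its standard error."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        ss_res = sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))
        return b, (ss_res / (n - 2) / sxx) ** 0.5

    def mediation_power(a=0.3, b=0.3, n=100, reps=500, seed=3):
        """Fraction of simulated data sets in which the a*b path is detected."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(reps):
            x = [rng.gauss(0, 1) for _ in range(n)]
            m = [a * xi + rng.gauss(0, 1) for xi in x]       # mediator
            y = [b * mi + rng.gauss(0, 1) for mi in m]       # outcome
            a_hat, se_a = slope_and_se(x, m)
            b_hat, se_b = slope_and_se(m, y)
            z = a_hat * b_hat / (a_hat**2 * se_b**2 + b_hat**2 * se_a**2) ** 0.5
            hits += abs(z) > 1.96                            # Sobel test at 5%
        return hits / reps

    power = mediation_power()
    ```

    Swapping the Sobel test for a bootstrap of a_hat * b_hat inside the loop gives the more powerful procedure the study proposes, at a substantially higher computational cost; the bmem package automates exactly that.
    
    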

  18. Predictive power of food web models based on body size decreases with trophic complexity.

    PubMed

    Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo

    2018-05-01

    Food web models parameterised using body size show promise to predict trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r² = 0.92), but with increasing complexity the predictive power decreased, with model IS being consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects and show that the model's shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage development of dynamical food web models explicitly including and exploring indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.

  19. A simple, approximate model of parachute inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macha, J.M.

    1992-11-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  20. A simple, approximate model of parachute inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macha, J.M.

    1992-01-01

    A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.

  1. Full quantum mechanical analysis of atomic three-grating Mach–Zehnder interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanz, A.S., E-mail: asanz@iff.csic.es; Davidović, M.; Božić, M.

    2015-02-15

    Atomic three-grating Mach–Zehnder interferometry constitutes an important tool to probe fundamental aspects of the quantum theory. There is, however, a remarkable gap in the literature between the oversimplified models and robust numerical simulations considered to describe the corresponding experiments. Consequently, the former usually lead to paradoxical scenarios, such as the wave–particle dual behavior of atoms, while the latter make data analysis in simple terms difficult. Here these issues are tackled by means of a simple grating working model consisting of evenly-spaced Gaussian slits. As is shown, this model suffices to explore and explain such experiments both analytically and numerically, giving a good account of the full atomic journey inside the interferometer, and hence helping to demystify the physics involved. More specifically, it provides a clear and unambiguous picture of the wavefront splitting that takes place inside the interferometer, illustrating how the momentum along each emerging diffraction order is well defined even though the wave function itself still displays a rather complex shape. To this end, the local transverse momentum is also introduced in this context as a reliable analytical tool. The splitting, apart from being a key issue to understand atomic Mach–Zehnder interferometry, also demonstrates at a fundamental level how wave and particle aspects are always present in the experiment, without incurring any contradiction or interpretive paradox. On the other hand, at a practical level, the generality and versatility of the model and methodology presented make them suitable for attacking analogous problems in a simple manner after convenient tuning. - Highlights: • A simple model is proposed to analyze experiments based on atomic Mach–Zehnder interferometry. • The model can be easily handled both analytically and computationally. • A theoretical analysis based on the combination of the position and momentum representations is considered. • Wave and particle aspects are shown to coexist within the same experiment, thus removing the old wave-corpuscle dichotomy. • A good agreement between numerical simulations and experimental data is found without appealing to best-fit procedures.
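The "evenly-spaced Gaussian slits" idea lends itself to a compact numerical sketch. The fragment below (illustrative units with ħ = 1 and invented slit parameters, not the paper's values) combines a Gaussian single-slit envelope with the standard N-slit grating sum:

```python
import cmath, math

def grating_intensity(p, n_slits=10, spacing=1.0, width=0.15):
    """Far-field intensity of N evenly spaced Gaussian slits in units
    with hbar = 1: a Gaussian envelope (finite slit width) multiplying
    the standard grating sum. Parameter values are illustrative only."""
    envelope = math.exp(-0.5 * (p * width) ** 2)
    grating = abs(sum(cmath.exp(1j * p * n * spacing)
                      for n in range(n_slits)))
    return (envelope * grating) ** 2

first_order = grating_intensity(2 * math.pi)   # p = 2*pi/spacing
between = grating_intensity(math.pi)           # midway between orders
```

Intensity peaks at momenta p = 2πm/spacing, the well-defined diffraction orders the abstract refers to, and nearly vanishes in between.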

  2. Adequate model complexity for scenario analysis of VOC stripping in a trickling filter.

    PubMed

    Vanhooren, H; Verbrugge, T; Boeije, G; Demey, D; Vanrolleghem, P A

    2001-01-01

    Two models describing the stripping of volatile organic contaminants (VOCs) in an industrial trickling filter system are developed. The aim of the models is to investigate the effect of different operating conditions (VOC loads and air flow rates) on the efficiency of VOC stripping and the resulting concentrations in the gas and liquid phases. The first model uses the same principles as the steady-state non-equilibrium activated sludge model SimpleTreat, in combination with an existing biofilm model. The second model is a simple mass balance based model only incorporating air and liquid and thus neglecting biofilm effects. In a first approach, the first model was incorporated in a five-layer hydrodynamic model of the trickling filter, using the carrier material design specifications for porosity, water hold-up and specific surface area. A tracer test with lithium was used to validate this approach, and the gas mixing in the filters was studied using continuous CO2 and O2 measurements. With the tracer test results, the biodegradation model was adapted, and it became clear that biodegradation and adsorption to solids can be neglected. On this basis, a simple dynamic mass balance model was built. Simulations with this model reveal that changing the air flow rate in the trickling filter system has little effect on the VOC stripping efficiency at steady state. However, immediately after an air flow rate change, quite high flux and concentration peaks of VOCs can be expected. These phenomena are of major importance for the design of an off-gas treatment facility.
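The second, biofilm-free model is essentially two coupled mass balances linked by Henry's law. A minimal sketch (Euler integration, invented parameter values, not the paper's plant data) might look like:

```python
def simulate_stripping(q_liq=10.0, q_gas=100.0, v_liq=5.0, v_gas=2.0,
                       kla=20.0, henry=0.4, c_in=1.0,
                       dt=0.001, t_end=5.0):
    """Two coupled mass balances (liquid and air) for VOC stripping,
    neglecting biofilm effects; Henry's constant links the phases.
    All parameter values are illustrative, not the paper's."""
    c_liq, c_gas = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        transfer = kla * (c_liq - c_gas / henry)        # interphase flux
        dc_liq = (q_liq / v_liq) * (c_in - c_liq) - transfer
        dc_gas = -(q_gas / v_gas) * c_gas + transfer * (v_liq / v_gas)
        c_liq += dt * dc_liq
        c_gas += dt * dc_gas
    return c_liq, c_gas, 1.0 - c_liq / c_in

c_liq, c_gas, stripped = simulate_stripping()
```

Running the same loop for different `q_gas` values is the kind of scenario analysis the abstract describes: the steady-state stripped fraction responds only weakly, while the transient peaks respond strongly.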

  3. Simple models for the simulation of submarine melt for a Greenland glacial system model

    NASA Astrophysics Data System (ADS)

    Beckmann, Johanna; Perrette, Mahé; Ganopolski, Andrey

    2018-01-01

    Two hundred marine-terminating Greenland outlet glaciers deliver more than half of the annually accumulated ice into the ocean and have played an important role in the Greenland ice sheet mass loss observed since the mid-1990s. Submarine melt may play a crucial role in the mass balance and position of the grounding line of these outlet glaciers. As the ocean warms, it is expected that submarine melt will increase, potentially driving outlet glacier retreat and contributing to sea level rise. Projections of the future contribution of outlet glaciers to sea level rise are hampered by the necessity to use models with extremely high resolution, of the order of a few hundred meters. That requirement holds not only when modeling outlet glaciers as stand-alone models but also when coupling them with high-resolution 3-D ocean models. In addition, fjord bathymetry data are mostly missing or inaccurate (errors of several hundred meters), which calls into question the benefit of using computationally expensive 3-D models for future predictions. Here we propose an alternative approach built on the use of a computationally efficient simple model of submarine melt based on turbulent plume theory. We show that such a simple model is in reasonable agreement with several available modeling studies. We performed a suite of experiments to analyze the sensitivity of these simple models to model parameters and climate characteristics. We found that the computationally cheap plume model demonstrates qualitatively similar behavior to 3-D general circulation models. To match the results of the 3-D models in a quantitative manner, a scaling factor of the order of 1 is needed for the plume models. We applied this approach to model submarine melt for six representative Greenland glaciers and found that the application of a line plume can produce submarine melt compatible with observational data. Our results show that the line plume model is more appropriate than the cone plume model for simulating the average submarine melting of real glaciers in Greenland.
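A commonly quoted result of turbulent plume theory is that submarine melt grows with the cube root of subglacial discharge and roughly linearly with ocean thermal forcing. The sketch below encodes just that scaling, with an order-1 tuning factor playing the role of the abstract's scaling factor; units and numbers are illustrative, not the paper's calibration:

```python
def line_plume_melt(discharge, thermal_forcing, scale=1.0):
    """Line-plume melt-rate scaling sketch: melt ~ q^(1/3) * TF, where q
    is subglacial discharge per unit width and TF the ocean thermal
    forcing. 'scale' stands in for the order-1 tuning factor; units
    are illustrative."""
    return scale * discharge ** (1.0 / 3.0) * thermal_forcing

m_base = line_plume_melt(0.01, 4.0)
m_more_q = line_plume_melt(0.02, 4.0)   # doubled discharge: ~26% more melt
m_more_tf = line_plume_melt(0.01, 8.0)  # doubled forcing: melt doubles
```

The asymmetry between the two sensitivities is why ocean warming, not discharge, dominates projected melt increases in such simple models.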

  4. Representation of limb kinematics in Purkinje cell simple spike discharge is conserved across multiple tasks.

    PubMed

    Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J

    2011-11-01

    Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding the representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks, or whether kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²_adj), followed by position (28 ± 24% of R²_adj) and speed (11 ± 19% of R²_adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²_adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
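The lead/lag parameter τ in such regressions can be illustrated with a toy grid search over integer sample shifts. Everything below (pure-Python r², synthetic signals) is an editor's stand-in for the paper's actual regression models:

```python
def r_squared(x, y):
    # Squared Pearson correlation between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy) if sxx > 0 and syy > 0 else 0.0

def best_lead(firing, kinematics, max_lead=4):
    """Return the lead (in samples) at which firing best predicts the
    kinematic signal: firing[t] is paired with kinematics[t + lead].
    A toy stand-in for the paper's regressions with tau."""
    scores = {lead: r_squared(firing[:len(firing) - lead],
                              kinematics[lead:])
              for lead in range(max_lead + 1)}
    return max(scores, key=scores.get)

# Synthetic check: firing copies the speed signal two samples early
speed = [t * t for t in range(12)]
firing = speed[2:] + [144, 169]
tau = best_lead(firing, speed)
```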

  5. Prediction of the dollar to the ruble rate. A system-theoretic approach

    NASA Astrophysics Data System (ADS)

    Borodachev, Sergey M.

    2017-07-01

    A simple state-space model of dollar rate formation is proposed, based on changes in oil prices and some mechanisms of money transfer between monetary and stock markets. Predictions from an input-output model and from the state-space model are compared. It is concluded that, with proper use of statistical data (Kalman filter), the second approach provides more adequate predictions of the dollar rate.
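A scalar Kalman filter is enough to show the predict/update cycle such a state-space approach relies on. The noise variances and observation series below are invented for illustration, not fitted to real exchange-rate data:

```python
def kalman_step(x_est, P, y_obs, a=1.0, q=0.01, r=0.1):
    """One predict/update cycle for the scalar state-space model
    x_t = a*x_{t-1} + w (var q),  y_t = x_t + v (var r)."""
    # Predict
    x_pred = a * x_est
    P_pred = a * a * P + q
    # Update with the new observation
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (y_obs - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Filter a short synthetic series of observed rates
observations = [60.0, 61.2, 60.8, 62.0, 63.1]
x, P = observations[0], 1.0
for y in observations[1:]:
    x, P = kalman_step(x, P, y)
```

The filtered estimate `x` tracks the noisy observations while the error variance `P` shrinks as data accumulate, which is the "proper use of statistical data" the abstract alludes to.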

  6. Predictability of Seasonal Rainfall over the Greater Horn of Africa

    NASA Astrophysics Data System (ADS)

    Ngaina, J. N.

    2016-12-01

    The El Niño-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, including the coefficient of determination (R2), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified based on composite analysis, correlations and contingency tables. A test for field significance considering the properties of finiteness and interdependence of the spatial grid was applied to avoid correlations by chance. The study identified FIA as the optimal model selection criterion. The more complex model selection criteria (FIA, followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g., R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Niño typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with ENSO indices derived from the Pacific and Indian Ocean sea surfaces yielding significant improvement during the OND season. The predictability based on ENSO for OND rainfall is robust on a decadal scale compared to MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) is associated with enhanced wet conditions.
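Two of the compared criteria are one-liners. The sketch below computes AIC and BIC for two hypothetical candidate models of unequal complexity; the log-likelihood values are invented, not the study's:

```python
import math

def aic(log_lik, k):
    # Akaike's Information Criterion: 2k - 2*logL
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # Bayesian Information Criterion: k*ln(n) - 2*logL
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits of two candidate rainfall models to n = 40 seasons:
# model B fits slightly better but spends three extra parameters
n = 40
aic_a, bic_a = aic(-55.0, 2), bic(-55.0, 2, n)
aic_b, bic_b = aic(-53.5, 5), bic(-53.5, 5, n)
```

Lower is better for both criteria; here both prefer the smaller model, with BIC penalizing the extra parameters more heavily, which mirrors the simple-versus-complex criterion comparison in the study.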

  7. iACP-GAEnsC: Evolutionary genetic algorithm based ensemble classification of anticancer peptides by utilizing hybrid feature space.

    PubMed

    Akbar, Shahid; Hayat, Maqsood; Iqbal, Muhammad; Jan, Mian Ahmad

    2017-06-01

    Cancer is a fatal disease, responsible for one-quarter of all deaths in developed countries. Traditional anticancer therapies, such as chemotherapy and radiation, are highly expensive, error-prone, and often ineffective, and they induce severe side-effects on human cells. Due to the perilous impact of cancer, the development of an accurate and highly efficient intelligent computational model is desirable for the identification of anticancer peptides. In this paper, an evolutionary genetic algorithm-based ensemble model, 'iACP-GAEnsC', is proposed for the identification of anticancer peptides. In this model, the protein sequences are formulated using three different discrete feature representation methods, i.e., amphiphilic pseudo amino acid composition, g-gap dipeptide composition, and reduced amino acid alphabet composition. The performance of the extracted feature spaces is investigated separately, and the spaces are then merged to exhibit the significance of hybridization. In addition, the predicted results of individual classifiers are combined using an optimized genetic algorithm and a simple majority technique in order to enhance the true classification rate. It is observed that genetic algorithm-based ensemble classification outperforms both individual classifiers and the simple majority voting-based ensemble. The performance of genetic algorithm-based ensemble classification is highest on the hybrid feature space, with an accuracy of 96.45%. In comparison to existing techniques, the 'iACP-GAEnsC' model has achieved remarkable improvement in terms of various performance metrics. Based on the simulation results, the 'iACP-GAEnsC' model might be a leading tool in the field of drug design and proteomics. Copyright © 2017 Elsevier B.V. All rights reserved.
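The contrast between simple majority voting and an optimized weighted ensemble can be shown in a few lines. The weights below are fixed illustrative values standing in for what a genetic algorithm would tune; none of this is the paper's actual classifier stack:

```python
def majority_vote(predictions):
    # predictions: list of 0/1 labels from individual classifiers
    return int(sum(predictions) > len(predictions) / 2)

def weighted_vote(predictions, weights):
    # GA-style ensemble sketch: in the paper the weights would be
    # optimized by a genetic algorithm; here they are fixed examples
    score = sum(w * (1 if p == 1 else -1)
                for p, w in zip(predictions, weights))
    return int(score > 0)

preds = [1, 0, 1]                                 # three base classifiers
simple = majority_vote(preds)                     # unweighted majority
weighted = weighted_vote(preds, [0.2, 0.9, 0.3])  # trusts classifier 2 most
```

The two rules can disagree on the same inputs, which is exactly the degree of freedom the genetic algorithm exploits when it outperforms plain majority voting.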

  8. Long-term Evaluation of Landuse Changes On Landscape Water Balance - A Case Study From North-east Germany

    NASA Astrophysics Data System (ADS)

    Wegehenkel, M.

    In this paper, long-term effects of different afforestation scenarios on landscape water balance will be analyzed taking into account the results of a regional case study. This analysis is based on using a GIS-coupled simulation model for the spatially distributed calculation of water balance. For this purpose, the modelling system THESEUS with a simple GIS interface will be used. To take into account the special case of change in forest cover proportion, THESEUS was enhanced with a simple forest growth model. In the regional case study, model runs will be performed using a detailed spatial data set from North-East Germany. This data set covers a mesoscale catchment located in the moraine landscape of North-East Germany. Based on this data set, the influence of the actual landuse and of different landuse change scenarios on water balance dynamics will be investigated taking into account the spatially distributed modelling results from THESEUS. The model was tested using different experimental data sets from field plots as well as observed catchment discharge. In addition to such conventional validation techniques, remote sensing data were used to check the simulated regional distribution of water balance components like evapotranspiration in the catchment.

  9. TRIGRS - A Fortran Program for Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis, Version 2.0

    USGS Publications Warehouse

    Baum, Rex L.; Savage, William Z.; Godt, Jonathan W.

    2008-01-01

    The Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model (TRIGRS) is a Fortran program designed for modeling the timing and distribution of shallow, rainfall-induced landslides. The program computes transient pore-pressure changes, and attendant changes in the factor of safety, due to rainfall infiltration. The program models rainfall infiltration, resulting from storms that have durations ranging from hours to a few days, using analytical solutions for partial differential equations that represent one-dimensional, vertical flow in isotropic, homogeneous materials for either saturated or unsaturated conditions. Use of step-function series allows the program to represent variable rainfall input, and a simple runoff routing model allows the user to divert excess water from impervious areas onto more permeable downslope areas. The TRIGRS program uses a simple infinite-slope model to compute factor of safety on a cell-by-cell basis. An approximate formula for effective stress in unsaturated materials aids computation of the factor of safety in unsaturated soils. Horizontal heterogeneity is accounted for by allowing material properties, rainfall, and other input values to vary from cell to cell. This command-line program is used in conjunction with geographic information system (GIS) software to prepare input grids and visualize model results.
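The cell-by-cell infinite-slope computation has a well-known closed form, FS = tanφ/tanδ + (c − ψ·γw·tanφ)/(γs·d·sinδ·cosδ), where ψ is the pressure head delivered by the infiltration model. The sketch below uses invented soil values, not TRIGRS defaults:

```python
import math

def factor_of_safety(slope_deg, phi_deg, c, psi,
                     depth, gamma_s=20.0e3, gamma_w=9.8e3):
    """Infinite-slope factor of safety with pore-pressure head psi (m).
    Units: cohesion c in Pa, unit weights in N/m^3, depth in m.
    Parameter values in the example are illustrative only."""
    d = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(d)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * depth * math.sin(d) * math.cos(d))
    return frictional + cohesive

# Dry vs. wet conditions on the same 35-degree cell
fs_dry = factor_of_safety(35, 30, c=4000, psi=0.0, depth=2.0)
fs_wet = factor_of_safety(35, 30, c=4000, psi=1.5, depth=2.0)
```

Rainfall infiltration raises ψ, which drives FS below 1 and flags the cell as unstable; applying this per grid cell is what makes the approach regional.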

  10. Creep and stress relaxation modeling of polycrystalline ceramic fibers

    NASA Technical Reports Server (NTRS)

    Dicarlo, James A.; Morscher, Gregory N.

    1994-01-01

    A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.

  11. Evidence for Model-based Computations in the Human Amygdala during Pavlovian Conditioning

    PubMed Central

    Prévost, Charlotte; McNamee, Daniel; Jessup, Ryan K.; Bossaerts, Peter; O'Doherty, John P.

    2013-01-01

    Contemporary computational accounts of instrumental conditioning have emphasized a role for a model-based system in which values are computed with reference to a rich model of the structure of the world, and a model-free system in which values are updated without encoding such structure. Much less studied is the possibility of a similar distinction operating at the level of Pavlovian conditioning. In the present study, we scanned human participants while they participated in a Pavlovian conditioning task with a simple structure while measuring activity in the human amygdala using a high-resolution fMRI protocol. After fitting a model-based algorithm and a variety of model-free algorithms to the fMRI data, we found evidence for the superiority of a model-based algorithm in accounting for activity in the amygdala compared to the model-free counterparts. These findings support an important role for model-based algorithms in describing the processes underpinning Pavlovian conditioning, as well as providing evidence of a role for the human amygdala in model-based inference. PMID:23436990

  12. Estimation and applications of size-based distributions in forestry

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Size-based distributions arise in several contexts in forestry and ecology. Simple power relationships (e.g., basal area and diameter at breast height) between variables are one such area of interest arising from a modeling perspective. Another, probability proportional to size sampling (PPS), is found in the most widely used methods for sampling standing or dead and...

  13. Nondestructive assessment of timber bridges using a vibration-based method

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  14. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
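The population decoding step is, at heart, Bayes's rule over independent filter likelihoods. The sketch below substitutes Gaussian likelihoods for the paper's customized parametric model; all means, widths, and the prior are invented:

```python
import math

def edge_posterior(responses, on_means, off_means, sigma=1.0, prior=0.1):
    """Naive-Bayes edge probability from independent filter responses,
    assuming Gaussian on-edge / off-edge likelihoods as an illustrative
    stand-in for the paper's parametric likelihood model."""
    def log_gauss(x, mu):
        return -0.5 * ((x - mu) / sigma) ** 2
    log_odds = math.log(prior / (1.0 - prior))
    for r, mu_on, mu_off in zip(responses, on_means, off_means):
        # Each independent filter adds its log-likelihood ratio
        log_odds += log_gauss(r, mu_on) - log_gauss(r, mu_off)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Responses close to the on-edge means should yield high edge probability
p_edge = edge_posterior([2.0, 1.8, 0.2],
                        on_means=[2.0, 2.0, 0.0],
                        off_means=[0.0, 0.0, 0.0])
```

Because the filters were culled for independence, the joint likelihood factorizes and the posterior is just a sum of per-filter influences, as the abstract describes.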

  15. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

    Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.

  16. Accuracy analysis of automodel solutions for Lévy flight-based transport: from resonance radiative transfer to a simple general model

    NASA Astrophysics Data System (ADS)

    Kukushkin, A. B.; Sdvizhenskii, P. A.

    2017-12-01

    The results of accuracy analysis of automodel solutions for Lévy flight-based transport on a uniform background are presented. These approximate solutions have been obtained for Green’s function of the following equations: the non-stationary Biberman-Holstein equation for three-dimensional (3D) radiative transfer in plasma and gases, for various (Doppler, Lorentz, Voigt and Holtsmark) spectral line shapes, and the 1D transport equation with a simple longtailed step-length probability distribution function with various power-law exponents. The results suggest the possibility of substantial extension of the developed method of automodel solution to other fields far beyond physics.

  17. Collector modulation in high-voltage bipolar transistor in the saturation mode: Analytical approach

    NASA Astrophysics Data System (ADS)

    Dmitriev, A. P.; Gert, A. V.; Levinshtein, M. E.; Yuferev, V. S.

    2018-04-01

    A simple analytical model is developed, capable of replacing the numerical solution of a system of nonlinear partial differential equations by solving a simple algebraic equation when analyzing the collector resistance modulation of a bipolar transistor in the saturation mode. In this approach, the leakage of the base current into the emitter and the recombination of non-equilibrium carriers in the base are taken into account. The data obtained are in good agreement with the results of numerical calculations and make it possible to describe both the motion of the front of the minority carriers and the steady state distribution of minority carriers across the collector in the saturation mode.

  18. Ozone indices based on simple meteorological parameters: potentials and limitations of regression and neural network models

    NASA Astrophysics Data System (ADS)

    Soja, G.; Soja, A.-M.

    This study tested the usefulness of extremely simple meteorological models for the prediction of ozone indices. The models were developed with the input parameters of daily maximum temperature and sunshine duration and are based on a data collection period of three years. For a rural environment in eastern Austria, the meteorological and ozone data of three summer periods have been used to develop functions to describe three ozone exposure indices (daily maximum, 7 h mean 9.00-16.00 h, accumulated ozone dose AOT40). Data sets for other years or stations not included in the development of the models were used as test data to validate the performance of the models. Generally, optimized regression models performed better than simplest linear models, especially in the case of AOT40. For the description of the summer period from May to September, the mean absolute daily differences between observed and calculated indices were 8±6 ppb for the maximum half hour mean value, 6±5 ppb for the 7 h mean and 41±40 ppb h for the AOT40. When the parameters were further optimized to describe individual months separately, the mean absolute residuals decreased by ⩽10%. Neural network models did not always perform better than the regression models. This is attributed to the low number of inputs in this comparison and to the simple architecture of these models (2-2-1). Further factorial analyses of those days when the residuals were higher than the mean plus one standard deviation should reveal possible reasons why the models did not perform well on certain days. It was observed that overestimations by the models mainly occurred on days with partly overcast, hazy or very windy conditions. Underestimations more frequently occurred on weekdays than on weekends. It is suggested that the application of this kind of meteorological model will be more successful in topographically homogeneous regions and in rural environments with relatively constant rates of emission and long-range transport of ozone precursors. Under conditions too demanding for advanced physico/chemical models, the presented models may offer useful alternatives to derive ecologically relevant ozone indices directly from meteorological parameters.
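The simplest of the models compared above is an ordinary least-squares line on one meteorological input. A self-contained sketch with invented (temperature, ozone) pairs, not the Austrian measurements:

```python
def fit_line(x, y):
    """Ordinary least squares for the simplest model in the study's
    spirit: ozone_index = a + b * daily_max_temperature."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

temp_c = [22, 25, 28, 31, 34]          # daily maximum temperature, degC
o3_max = [45, 52, 60, 66, 75]          # daily ozone maximum, ppb (invented)
intercept, slope = fit_line(temp_c, o3_max)
predicted_at_30 = intercept + slope * 30.0
```

Adding sunshine duration as a second regressor, or optimizing coefficients per month, gives the "optimized regression models" the study found superior to this baseline.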

  19. Using a crowdsourced approach for monitoring water level in a remote Kenyan catchment

    NASA Astrophysics Data System (ADS)

    Weeser, Björn; Jacobs, Suzanne; Rufino, Mariana; Breuer, Lutz

    2017-04-01

    Hydrological models and effective water management strategies only succeed if they are based on reliable data. Decreasing costs of technical equipment lower the barrier to creating comprehensive monitoring networks and allow measurements at high spatial and temporal resolution. However, these networks depend on specialised equipment, supervision, and maintenance, producing high running costs. This becomes particularly challenging for remote areas, and low-income countries often do not have the capacity to run such networks. Delegating simple measurements to citizens living close to relevant monitoring points may reduce costs and increase public awareness. Here we present our experiences of using a crowdsourced approach for monitoring water levels in remote catchments in Kenya. We established a low-cost system consisting of thirteen simple water level gauges and a Raspberry Pi based SMS server for data handling. Volunteers determine the water level and transmit their records using a simple text message. These messages are automatically processed, and real-time feedback on the data quality is given. During the first year, more than 1200 valid, high-quality records were collected. In summary, the simple techniques for data collection, transmission, and processing created an open platform with the potential to reach volunteers without the need for special equipment. Even though the temporal resolution of measurements cannot be controlled and peak flows might be missed, these data can still be considered a valuable enhancement for developing management strategies or for hydrological modelling.
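The server-side step of such a system reduces to parsing and validating a short text message. The message format below ('GAUGE07 123') is an editor's assumption for illustration, not the project's actual protocol:

```python
import re

def parse_reading(message):
    """Parse a volunteer's text message like 'GAUGE07 123' into a
    (station, water_level_cm) pair; return None for invalid input so
    the server can reply with corrective feedback. The format is a
    hypothetical example, not the Kenyan project's real protocol."""
    m = re.fullmatch(r"\s*([A-Z]+\d+)\s+(\d{1,4})\s*", message.upper())
    if not m:
        return None
    return m.group(1), int(m.group(2))

ok = parse_reading("gauge07 123")    # tolerant of lowercase input
bad = parse_reading("hello")         # rejected, triggers feedback SMS
```

Returning `None` instead of raising keeps the SMS loop robust: every malformed message becomes a real-time quality-feedback reply rather than a server error.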

  20. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    NASA Astrophysics Data System (ADS)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly in areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue for improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model, with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. The interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to produce better estimates than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05. The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and in four out of five physiographic regions.

  1. Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

    NASA Astrophysics Data System (ADS)

    Halbrügge, Marc

    2010-12-01

    This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measure based on rank orders and sequence matching techniques is proposed instead. This measure, when applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
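
    The rank-order/sequence-matching idea can be sketched as follows (a hypothetical illustration using Python's difflib, not the measure actually proposed in the paper):

    ```python
    import difflib

    def rank_order(values):
        """Rank (0 = smallest) of each element of the sequence."""
        order = sorted(range(len(values)), key=values.__getitem__)
        ranks = [0] * len(values)
        for rank, idx in enumerate(order):
            ranks[idx] = rank
        return ranks

    def rank_sequence_fit(model_series, human_series):
        """Similarity (0..1) of the two series' rank orders via sequence matching."""
        a, b = rank_order(model_series), rank_order(human_series)
        return difflib.SequenceMatcher(None, a, b).ratio()

    model = [1.0, 3.2, 2.1, 5.0]
    human = [0.9, 3.0, 2.5, 4.8]    # same ordering -> perfect rank fit
    print(rank_sequence_fit(model, human))   # 1.0
    ```

    Because only rank orders enter the comparison, the measure is insensitive to the absolute magnitudes that parametric fit statistics depend on.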

  2. Development of feedforward receptive field structure of a simple cell and its contribution to the orientation selectivity: a modeling study.

    PubMed

    Garg, Akhil R; Obermayer, Klaus; Bhaumik, Basabi

    2005-01-01

    Recent experimental studies of hetero-synaptic interactions in various systems have shown the role of signaling in plasticity, challenging the conventional understanding of Hebb's rule. It has also been found that activity plays a major role in plasticity, with neurotrophins acting as molecular signals translating activity into structural changes. Furthermore, the role of synaptic efficacy in biasing the outcome of competition has also recently been revealed. Motivated by these experimental findings, we present a model for the development of simple-cell receptive field structure based on competitive hetero-synaptic interactions for neurotrophins combined with cooperative hetero-synaptic interactions in the spatial domain. We find that with a proper balance of competition and cooperation, the inputs from the two populations (ON/OFF) of LGN cells segregate starting from the homogeneous state, yielding segregated ON and OFF regions in the simple-cell receptive field. Our modeling study supports the experimental findings, suggesting roles for synaptic efficacy and spatial signaling. Using this model, we obtain simple-cell RFs even for positively correlated activity of ON/OFF cells. We also compare different mechanisms for computing the response of a cortical cell and study their possible role in the sharpening of orientation selectivity. We find that the degree of selectivity improvement in individual cells varies from case to case, depending upon the RF structure and the type of sharpening mechanism.

  3. PlayDoh and Toothpicks and Gummy Bears... OH MY, They're Models!

    NASA Astrophysics Data System (ADS)

    Kolandaivelu, K. P.; Wilson, M. W.; Glesener, G. B.

    2017-12-01

    Simple, everyday items found around the house are often used in geoscience lab activities. Gummy bears and silly putty can model the bending and breaking behaviour of rocks; shaking buildings during an earthquake can be modeled with some Jello, toothpicks, and marshmallows; PlayDoh can be used to demonstrate layers of sedimentary rocks; and even plumbing pipes filled with pebbles and playground sand become miniature physical models of aquifers. When performed correctly, these activities can help students visualize geoscience phenomena or increase students' motivation to pay attention in class, but how do these activities help students develop ways to think like a scientist? "Developing and using models" is one of the important science and engineering practices recommended in the Next Generation Science Standards (NGSS). In this presentation, we will demonstrate a variety of common geoscience lab activities using simple, everyday household items in order to describe ways instructors can help their students develop model-based reasoning skills. Specific areas of interest include identifying positive and negative attributes of a model, ways to evaluate the reliability of a model, and how a model can be revised to improve its outcome. We will also outline other kinds of models that can be generated from these lab activities, such as mathematical, graphical, and verbal models. Our goal is to encourage educators to focus more time on helping students develop model-based reasoning skills, which can be used in almost all aspects of everyday life.

  4. A simple rainfall-runoff model based on hydrological units applied to the Teba catchment (south-east Spain)

    NASA Astrophysics Data System (ADS)

    Donker, N. H. W.

    2001-01-01

    A hydrological model (YWB, yearly water balance) has been developed to model the daily rainfall-runoff relationship of the 202 km2 Teba river catchment, located in semi-arid south-eastern Spain. The period of available data (1976-1993) includes some very rainy years with intensive storms (responsible for flooding parts of the town of Malaga) and also some very dry years. The YWB model is in essence a simple tank model in which the catchment is subdivided into a limited number of meaningful hydrological units. Instead of generating per-unit surface runoff resulting from infiltration excess, runoff has been made the result of storage excess. Actual evapotranspiration is obtained by means of curves, included in the software, representing the ratio of actual to potential evapotranspiration as a function of soil moisture content for three soil texture classes. The total runoff generated is split between base flow and surface runoff according to a given baseflow index. The two components are routed separately and subsequently joined. A large number of sequential years can be processed, and the results of each year are summarized by a water balance table and a daily rainfall-runoff time series. An attempt has been made to restrict the amount of input data to the minimum. Interactive manual calibration is advocated in order to allow better incorporation of field evidence and the experience of the model user. Field observations allowed for an approximate calibration at the hydrological unit level.
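
    The storage-excess idea can be sketched in a few lines (a hedged illustration with hypothetical parameter values, not the actual YWB code):

    ```python
    def tank_step(storage, rain, pet, capacity, bfi):
        """One daily step of a storage-excess tank (all depths in mm).
        Runoff occurs only when storage exceeds capacity; the excess is split
        into surface runoff and base flow by the baseflow index (bfi)."""
        storage += rain
        et = min(pet, storage)            # crude actual ET, capped by storage
        storage -= et
        excess = max(0.0, storage - capacity)
        storage -= excess
        return storage, (1.0 - bfi) * excess, bfi * excess

    s, qs, qb = tank_step(40.0, rain=30.0, pet=5.0, capacity=50.0, bfi=0.6)
    print(s, qs, qb)   # 50.0 6.0 9.0
    ```

    Unlike an infiltration-excess scheme, no runoff at all is produced until the unit's storage fills, which matches the storage-excess behaviour described above.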

  5. The induced electric field due to a current transient

    NASA Astrophysics Data System (ADS)

    Beck, Y.; Braunstein, A.; Frankental, S.

    2007-05-01

    Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. This approach is based on a known current wave-pair model representing the lightning current wave. The model presented describes the lightning current wave either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (applying Coulomb's law) compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges at constant velocity. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the problem discussed.

  6. Neuropsychological study of FASD in a sample of American Indian children: processing simple versus complex information.

    PubMed

    Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A

    2008-12-01

    Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similar to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined "a priori" based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency. 
On the more complex PPT trials (problems 5 to 8), as well as on the Lhermitte logical tasks, the FASD group performed worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.

  7. Influence of backup bearings and support structure dynamics on the behavior of rotors with active supports

    NASA Technical Reports Server (NTRS)

    Flowers, George T.

    1994-01-01

    Substantial progress has been made toward the goals of this research effort in the past six months. A simplified rotor model with a flexible shaft and backup bearings has been developed. The model is based upon the work of Ishii and Kirk. Parameter studies of the behavior of this model are currently being conducted. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. The study consists of simulation work coupled with experimental verification. The work is documented in the attached paper. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. The dynamics of this model are currently being studied with the objective of verifying the conclusions obtained from the simpler models. Parallel simulation runs are being conducted using an ANSYS-based finite element model of the T-501.

  8. Derivation of flood frequency curves in poorly gauged Mediterranean catchments using a simple stochastic hydrological rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Aronica, G. T.; Candela, A.

    2007-12-01

    In this paper, a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one with a limited number of parameters that requires practically no calibration, making it a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing relaxation of the classical iso-frequency assumption between rainfall and peak flow. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with antecedent moisture conditions (AMC). The application of this procedure showed how the Monte Carlo simulation technique can reproduce observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited input data and no calibration of the rainfall-runoff model.
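
    The SCS-CN step of such a procedure can be sketched as follows (storm depths and parameters below are hypothetical; the paper's actual procedure uses TCEV-distributed rainfall and a semi-distributed, probabilistic CN):

    ```python
    import random

    def scs_cn_runoff(p_mm, cn):
        """SCS-CN effective rainfall (mm), with initial abstraction Ia = 0.2*S."""
        s = 25400.0 / cn - 254.0          # potential retention S (mm)
        ia = 0.2 * s
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    def simulate_ffc(n_events, cn, seed=0):
        """Monte Carlo sketch: sample storm depths, convert each to a runoff
        depth, and sort descending to form a synthetic frequency curve."""
        rng = random.Random(seed)
        depths = (rng.expovariate(1.0 / 40.0) for _ in range(n_events))  # mean 40 mm
        return sorted((scs_cn_runoff(p, cn) for p in depths), reverse=True)

    q = simulate_ffc(5000, cn=80)
    print(q[0] > q[-1])   # largest event dominates the curve
    ```

    In the full procedure each sampled event would also draw an AMC state, so the same storm depth can map to different runoff depths.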

  9. Model for large magnetoresistance effect in p–n junctions

    NASA Astrophysics Data System (ADS)

    Cao, Yang; Yang, Dezheng; Si, Mingsu; Shi, Huigang; Xue, Desheng

    2018-06-01

    We present a simple model based on the classic Shockley model to explain magnetotransport in nonmagnetic p–n junctions. Under a magnetic field, carriers redistribute to compensate the Lorentz force, establishing the necessary space-charge-region distribution. The calculated current–voltage (I–V) characteristics under various magnetic fields demonstrate that a conventional nonmagnetic p–n junction can exhibit an extremely large magnetoresistance effect, even larger than that in magnetic materials. Because the large magnetoresistance effect discussed here is based on a conventional p–n junction device, our model provides new insight into the development of semiconductor magnetoelectronics.
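
    For reference, the classic Shockley I–V relation on which the model builds can be sketched as follows (the magnetic-field dependence itself is not reproduced here, and the saturation current is a hypothetical value):

    ```python
    import math

    def shockley_iv(v, i_s=1e-12, temp_k=300.0):
        """Ideal Shockley diode law: I = I_s * (exp(q*V / (k*T)) - 1)."""
        vt = 1.380649e-23 * temp_k / 1.602176634e-19   # thermal voltage kT/q (V)
        return i_s * (math.exp(v / vt) - 1.0)

    print(shockley_iv(0.0))                        # 0.0 at zero bias
    print(shockley_iv(0.6) > shockley_iv(0.3))     # strongly nonlinear forward branch
    ```

    The paper's contribution is, in effect, how a magnetic field modifies this baseline characteristic through the space-charge region.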

  10. Mixed-order phase transition in a minimal, diffusion-based spin model.

    PubMed

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  11. A dynamical system of deposit and loan volumes based on the Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Sumarti, N.; Nurfitriyana, R.; Nurwenda, W.

    2014-02-01

    In this research, we propose a dynamical system of deposit and loan volumes of a bank using a predator-prey paradigm, where the predator is loan volume and the prey is deposit volume. The existence of loans depends on the existence of deposits, because the bank allocates loan volume from a portion of the deposit volume. The dynamical systems constructed are a simple model, a model with a Michaelis-Menten response, and a model with a reserve requirement. Equilibria of the systems are analysed for stability via their linearised systems.
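
    A minimal numerical sketch of such a deposit-loan predator-prey system (the coefficients are hypothetical, and simple Euler stepping stands in for whatever integrator the authors used):

    ```python
    def simulate(deposit0, loan0, r_d=0.05, a=0.02, r_l=-0.04, b=0.03,
                 dt=0.01, steps=1000):
        """Euler integration of a predator-prey sketch of bank volumes:
            dD/dt = (r_d - a*L) * D   # deposits grow, drained by lending
            dL/dt = (r_l + b*D) * L   # loans decay unless funded by deposits
        """
        d, l = deposit0, loan0
        for _ in range(steps):
            dd = (r_d - a * l) * d
            dl = (r_l + b * d) * l
            d, l = d + dt * dd, l + dt * dl
        return d, l

    d, l = simulate(10.0, 2.0)
    print(d > 0 and l > 0)   # volumes stay positive along the orbit
    ```

    The equilibrium at D = -r_l/b, L = r_d/a is the point whose linearised system the stability analysis examines.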

  12. Optimal quality control of bakers' yeast fed-batch culture using population dynamics.

    PubMed

    Dairaku, K; Izumoto, E; Morikawa, H; Shioya, S; Takamatsu, T

    1982-12-01

    An optimal quality control policy for the overall specific growth rate of bakers' yeast, which maximizes fermentative activity in breadmaking, was obtained by direct search based on a previously proposed mathematical model. The mathematical model describes the age distribution of bakers' yeast, which bears an essential relationship to fermentative ability in breadmaking. It is a simple aging model with two periods: nonbudding and budding. Based on the result obtained by direct search, quality control of a bakers' yeast fed-batch culture was performed and confirmed to be experimentally valid.

  13. Predicting the stability of nanodevices

    NASA Astrophysics Data System (ADS)

    Lin, Z. Z.; Yu, W. F.; Wang, Y.; Ning, X. J.

    2011-05-01

    A simple model based on the statistics of single atoms is developed to predict the stability or lifetime of nanodevices without empirical parameters. Under certain conditions, the model reproduces the Arrhenius law and the Meyer-Neldel compensation rule. Compared with classical molecular-dynamics simulations for predicting the stability of a monatomic carbon chain at high temperature, the model is shown to be much more accurate than transition state theory. Based on ab initio calculation of the static potential, the model gives a corrected lifetime for monatomic carbon and gold chains at higher temperatures, and predicts that the monatomic chains are very stable at room temperature.
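
    The Arrhenius law the model recovers can be sketched directly (the attempt period and barrier below are hypothetical placeholders, not the paper's ab initio values):

    ```python
    import math

    KB_EV = 8.617333262e-5   # Boltzmann constant in eV/K

    def arrhenius_lifetime(tau0_s, barrier_ev, temp_k):
        """Arrhenius estimate of a breaking lifetime: tau = tau0 * exp(Eb / (kB*T))."""
        return tau0_s * math.exp(barrier_ev / (KB_EV * temp_k))

    # Hypothetical attempt period (s) and breaking barrier (eV) for a chain
    tau_room = arrhenius_lifetime(1e-13, 1.0, 300.0)
    tau_hot  = arrhenius_lifetime(1e-13, 1.0, 1000.0)
    print(tau_room > tau_hot)   # lifetime collapses as temperature rises
    ```

    The exponential dependence on the barrier-to-temperature ratio is what makes a chain stable for hours at room temperature yet short-lived when heated.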

  14. A Nakanishi-based model illustrating the covariant extension of the pion GPD overlap representation and its ambiguities

    NASA Astrophysics Data System (ADS)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2018-05-01

    A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.

  15. Concept of Fractal Dimension use of Multifractal Cloud Liquid Models Based on Real Data as Input to Monte Carlo Radiation Models

    NASA Technical Reports Server (NTRS)

    Wiscombe, W.

    1999-01-01

    The purpose of this paper is to discuss the concept of fractal dimension; multifractal statistics as an extension of this concept; the use of simple multifractal statistics (power spectrum, structure function) to characterize cloud liquid water data; and the use of multifractal cloud liquid water models based on real data as input to Monte Carlo models of shortwave radiative transfer in 3D clouds. The consequences of the latter are considered in two areas: the design of aircraft field programs to measure cloud absorptance, and the explanation of the famous "Landsat scale break" in measured radiance.
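
    One of the simple multifractal statistics mentioned, the structure function, can be sketched as follows (the sine series below is a hypothetical stand-in for cloud liquid water data):

    ```python
    import math

    def structure_function(series, lag, order=1):
        """q-th order structure function: mean of |x(t+lag) - x(t)|**q,
        a simple multifractal statistic for a 1-D data series."""
        diffs = [abs(series[i + lag] - series[i]) ** order
                 for i in range(len(series) - lag)]
        return sum(diffs) / len(diffs)

    # A smooth hypothetical series standing in for cloud liquid water data
    x = [math.sin(0.1 * t) for t in range(200)]
    print(structure_function(x, 1) < structure_function(x, 5))   # grows with lag
    ```

    Fitting how this quantity scales with the lag, for several orders q, is what distinguishes monofractal from multifractal behaviour.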

  16. Simple Peer-to-Peer SIP Privacy

    NASA Astrophysics Data System (ADS)

    Koskela, Joakim; Tarkoma, Sasu

    In this paper, we introduce a model for enhancing privacy in peer-to-peer communication systems. The model is based on data obfuscation, preventing intermediate nodes from tracking calls, while still utilizing the shared resources of the peer network. This increases security when moving between untrusted, limited and ad-hoc networks, when the user is forced to rely on peer-to-peer schemes. The model is evaluated using a Host Identity Protocol-based prototype on mobile devices, and is found to provide good privacy, especially when combined with a source address hiding scheme. The contribution of this paper is to present the model and results obtained from its use, including usability considerations.

  17. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that now made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.

  18. The predictive power of zero intelligence in financial markets

    NASA Astrophysics Data System (ADS)

    Farmer, J. Doyne; Patelli, Paolo; Zovko, Ilija I.

    2005-02-01

    Standard models in economics stress the role of intelligent agents who maximize utility. However, there may be situations where constraints imposed by market institutions dominate strategic agent behavior. We use data from the London Stock Exchange to test a simple model in which minimally intelligent agents place orders to trade at random. The model treats the statistical mechanics of order placement, price formation, and the accumulation of revealed supply and demand within the context of the continuous double auction and yields simple laws relating order-arrival rates to statistical properties of the market. We test the validity of these laws in explaining cross-sectional variation for 11 stocks. The model explains 96% of the variance of the gap between the best buying and selling prices (the spread) and 76% of the variance of the price diffusion rate, with only one free parameter. We also study the market impact function, describing the response of quoted prices to the arrival of new orders. The nondimensional coordinates dictated by the model approximately collapse data from different stocks onto a single curve. This work is important from a practical point of view, because it demonstrates the existence of simple laws relating prices to order flows and, in a broader context, suggests there are circumstances where the strategic behavior of agents may be dominated by other considerations.
    Keywords: double auction market | market microstructure | agent-based models
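
    The zero-intelligence idea of random order placement can be caricatured in a few lines (a toy sketch with a hypothetical price grid and arrival rates, not the authors' calibrated model):

    ```python
    import random

    def zero_intelligence_spread(n_orders=5000, tick=0.01, band=100, seed=1):
        """Toy zero-intelligence order flow: limit orders arrive at uniformly
        random price levels; random market orders remove the opposing best
        quote. Returns the resulting bid-ask spread in price units."""
        rng = random.Random(seed)
        bids, asks = set(), set()
        mid = band // 2
        for _ in range(n_orders):
            level = rng.randrange(band)
            if level < mid:
                bids.add(level)              # buy limit order below the mid level
            else:
                asks.add(level)              # sell limit order at/above the mid level
            if rng.random() < 0.5:           # a market order arrives...
                if rng.random() < 0.5 and bids:
                    bids.remove(max(bids))   # ...hitting the best bid
                elif asks:
                    asks.remove(min(asks))   # ...or lifting the best ask
        return (min(asks) - max(bids)) * tick

    print(zero_intelligence_spread() > 0)
    ```

    Even this caricature produces a well-defined spread purely from the balance of limit-order arrivals and market-order removals, with no strategic behaviour anywhere.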

  19. Understanding the complex dynamics of stock markets through cellular automata

    NASA Astrophysics Data System (ADS)

    Qiu, G.; Kandhai, D.; Sloot, P. M. A.

    2007-04-01

    We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
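
    The two trader types and the local price-update rule can be sketched as follows (a 1-D toy version with hypothetical parameters; the paper itself uses a 2-D lattice):

    ```python
    import random

    def ca_market_step(decisions, price, fundamental=100.0, frac_fund=0.3,
                       sensitivity=0.001, rng=random):
        """One step of a toy CA market on a 1-D ring of traders.
        Fundamentalists buy (+1) below the fundamental price and sell (-1)
        above it; imitators copy the majority of their two neighbours."""
        n = len(decisions)
        new = decisions[:]
        for i in range(n):
            if i < int(frac_fund * n):            # first block: fundamentalists
                new[i] = 1 if price < fundamental else -1
            else:                                 # imitators follow neighbours
                s = decisions[(i - 1) % n] + decisions[(i + 1) % n]
                new[i] = 1 if s > 0 else -1 if s < 0 else rng.choice((-1, 1))
        excess = sum(new)                         # net demand moves the price
        return new, price * (1.0 + sensitivity * excess)

    rng = random.Random(0)
    decisions = [rng.choice((-1, 1)) for _ in range(100)]
    price = 90.0
    for _ in range(50):
        decisions, price = ca_market_step(decisions, price, rng=rng)
    print(price > 0)   # multiplicative updates keep the price positive
    ```

    The herding of imitators is what lets large coherent demand imbalances, and hence heavy-tailed returns, emerge from purely local rules.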

  20. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well-defined parametric uncertainty bounds.
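
    The simple averaging method, weighting each run by its aggregate score, can be sketched as follows (the exp(-misfit) weighting and the numbers below are hypothetical, not the paper's scoring scheme):

    ```python
    import math

    def score_weighted_average(params, misfits):
        """Average a parameter over ensemble members, weighting each run by
        exp(-misfit) so that runs with low model-data misfit dominate."""
        weights = [math.exp(-m) for m in misfits]
        return sum(w * p for w, p in zip(weights, params)) / sum(weights)

    # Hypothetical parameter samples and their aggregate misfit scores
    params  = [1.0, 2.0, 3.0]
    misfits = [5.0, 0.1, 5.0]    # the middle run fits the data best
    print(round(score_weighted_average(params, misfits), 2))   # 2.0
    ```

    Applied across a full factorial ensemble, the same weighting yields the parameter ranges and sea-level envelopes compared against the Bayesian emulation results.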
